Sample records for multiple cross-layer design

In this paper, we develop a cross-layer design for two-way relaying (TWR) networks with multiple antennas, where two single-antenna source nodes exchange information with the aid of one multiple-antenna relay node. The proposed cross-layer design combines adaptive modulation (AM) and space-time block coding (STBC) at the physical layer with an automatic repeat request (ARQ) protocol at the data link layer, in order to maximize spectral efficiency under specific delay and packet error ratio (PER) constraints. An MMSE interference cancellation (IC) receiver is employed at the relay node to remove the interference in the first phase of the TWR transmission. The transmission mode is updated for each phase of the TWR transmission on a frame-by-frame basis, to match the time-varying channel conditions and improve system performance and throughput. Simulation results show that retransmission at the data link layer can relax rigorous error-control requirements at the physical layer, thereby allowing higher transmission rates. As a result, cross-layer design achieves a considerable spectral efficiency gain for TWR networks compared to systems without it.
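The frame-by-frame transmission mode update described above can be sketched as a simple threshold rule on the estimated SNR. The mode table, threshold values, and function name below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical AMC mode table: (modulation, bits per symbol, SNR threshold in dB).
# Real thresholds would be derived from the PER-vs-SNR curves of each mode so
# that the target PER is met; these values are purely illustrative.
AMC_MODES = [
    ("BPSK",   1, 4.0),
    ("QPSK",   2, 7.0),
    ("16-QAM", 4, 13.0),
    ("64-QAM", 6, 19.0),
]

def select_mode(snr_db):
    """Return the highest-rate mode whose threshold the current SNR meets,
    or None (outage: defer the frame) if even the lowest mode is unsupported."""
    chosen = None
    for name, bits, threshold in AMC_MODES:
        if snr_db >= threshold:
            chosen = (name, bits)
    return chosen

# Frame-by-frame adaptation over a time-varying channel, one decision per phase:
for snr_db in (3.0, 8.5, 20.1):
    print(snr_db, "->", select_mode(snr_db))
```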

By combining adaptive modulation (AM) and an automatic repeat request (ARQ) protocol with user scheduling, a cross-layer design (CLD) scheme for a multiuser MIMO system with imperfect feedback is presented, and a multiple outdated estimates method is proposed to improve system performance. Based on this method and the imperfect feedback information, closed-form expressions for the spectral efficiency (SE) and packet error rate (PER) of the system subject to the target PER constraint are derived. With these expressions, the system performance can be effectively evaluated. To mitigate the effect of delayed feedback, variable thresholds (VTs) are also derived by means of the maximum a posteriori method; these VTs include the conventional fixed thresholds (FTs) as special cases. Simulation results show that the theoretical SE and PER are in good agreement with the corresponding simulations. The proposed CLD scheme with multiple estimates obtains higher SE than the existing CLD scheme with a single estimate, especially for large delays. Moreover, the CLD scheme with VTs outperforms that with conventional FTs.

Optical networks have become an integral part of the communications infrastructure needed to support society’s demand for high-speed connectivity. Cross-Layer Design in Optical Networks addresses topics in optical network design and analysis with a focus on physical-layer impairment awareness and network-layer service requirements, essential for the implementation and management of robust, scalable networks. The cross-layer treatment includes bottom-up impacts of the physical and lambda layers, such as dispersion, noise, nonlinearity, crosstalk, dense wavelength packing, and wavelength line rates, as well as top-down approaches to handle physical-layer impairments and service requirements.

A new orthogonal frequency division multiplexing (OFDM) system embedded with overlay watermarks for location-aware cross-layer design is proposed in this paper. One major advantage of the proposed system is the multiple functionalities the overlay watermark provides, which include a cross-layer signaling interface, transceiver identification for position-aware routing, and its basic role as a training sequence for channel estimation. Wireless terminals are typically battery powered and have limited wireless communication bandwidth. Therefore, efficient collaborative signal processing algorithms that consume less energy for computation and less bandwidth for communication are needed. Since in most cases the location of a wireless host is unknown, a transceiver aware of its own location can also improve routing efficiency by selectively flooding or forwarding data only in the desired direction. In the proposed OFDM system, location information of a mobile for efficient routing can easily be derived when a unique watermark is associated with each individual transceiver. In addition, cross-layer signaling and other interlayer interactive information can be exchanged over a new data pipe created by modulating the overlay watermarks. We also study channel estimation and watermark removal techniques at the physical layer for the proposed overlay OFDM. Our channel estimator iteratively estimates the channel impulse response and the combined signal vector from the overlay OFDM signal. Cross-layer design that leads to low power consumption and more efficient routing is investigated.

We propose a cross-layer design that optimizes the energy efficiency of spectrum sharing systems. The energy per good bit (EPG) is considered as the energy efficiency metric. We optimize the secondary user's transmission power and media access frame

the traditional protocol stack design methodology. However, cross-layer design also carries a risk due to possibly unexpected and undesired effects. In this chapter we want to provide architecture designers with a set of tools and recommendations synthesized from an analysis of the state of the art, but enriched...

The diversity of IoT services and applications brings enormous challenges to improving the performance of scheduling multiple computing tasks in cross-layer cloud computing systems. Unfortunately, commonly employed frameworks fail to adapt to the new patterns of the cross-layer cloud. To solve this issue, we design a new task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and of the computing tasks. Then, we design the scheduling framework based on this analysis and present detailed models to illustrate the procedures for using the framework. With the proposed framework, IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computing tasks with different objectives. Finally, algorithms based on the framework are given, and extensive experiments validate its effectiveness and superiority.

We consider a dynamic spectrum sharing system consisting of a primary user, whose licensed spectrum is allowed to be accessed by a secondary user as long as the secondary user does not violate the prescribed interference limit inflicted on the primary user. Assuming a Nakagami-m block-fading environment, we aim at maximizing the performance of the secondary user's link in terms of average spectral efficiency (ASE) and error performance under specified packet error rate (PER) and average interference limit constraints. To this end, we employ a cross-layer design policy which combines adaptive power and coded discrete M-QAM modulation at the physical layer with a truncated automatic repeat request (ARQ) protocol at the data link layer, and simultaneously satisfies the aforementioned constraints. Numerical results affirm that the secondary link of the spectrum sharing system combining ARQ with adaptive modulation and coding (AMC) achieves a significant gain in ASE depending on the maximum number of retransmissions initiated by the ARQ protocol. The results further indicate that the ARQ protocol substantially improves the packet loss rate performance of the secondary link.
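The interplay between the truncated ARQ protocol and the physical-layer PER target can be made concrete: if a packet is lost only when all attempts fail, ARQ relaxes the per-attempt PER the AMC design must guarantee. A minimal sketch, assuming independent attempts (the function name is ours):

```python
def relaxed_per_target(p_loss_target, max_retx):
    # With truncated ARQ allowing max_retx retransmissions, a packet is lost
    # only if all (max_retx + 1) attempts fail. Assuming independent attempts,
    # PLR ~= PER ** (max_retx + 1), so each attempt only needs
    # PER <= p_loss_target ** (1 / (max_retx + 1)).
    return p_loss_target ** (1.0 / (max_retx + 1))

# A 1% packet-loss target with no retransmissions demands PER <= 0.01 per
# attempt; allowing two retransmissions relaxes this to roughly 0.215,
# letting the AMC scheme pick faster modes and raise the ASE.
print(relaxed_per_target(0.01, 0))
print(relaxed_per_target(0.01, 2))
```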

In mobile ad hoc networks, communication among mobile nodes occurs through the wireless medium. The design of ad hoc network protocols, generally based on a traditional “layered approach”, has been found ineffective in dealing with received signal strength (RSS)-related problems affecting the physical, network, and transport layers. This paper proposes a design approach that deviates from the traditional network design toward enhancing the cross-layer interaction among different layers, namely the physical, MAC, and network layers. The proposed cross-layer design approach for power control (CLPC) helps adapt the transmission power by averaging the RSS values and find an effective route between the source and the destination. This cross-layer design approach was tested by simulation (using the NS2 simulator), and its performance was found to be better than that of AODV.
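The core CLPC idea, setting the transmit power from an average of recent RSS reports passed up from the physical layer, can be sketched as follows; the target value, power limits, and function name are illustrative assumptions, not the paper's:

```python
def clpc_power(rss_window_dbm, rss_target_dbm, tx_power_dbm,
               p_min=-10.0, p_max=20.0):
    """Sketch of the CLPC idea: average recent RSS samples reported by the
    physical layer, then raise or lower transmit power by the shortfall
    relative to a target RSS, clamped to the radio's power range."""
    avg_rss = sum(rss_window_dbm) / len(rss_window_dbm)
    adjustment = rss_target_dbm - avg_rss      # dB shortfall (or surplus)
    return min(p_max, max(p_min, tx_power_dbm + adjustment))

# An average RSS of -82 dBm against a -75 dBm target raises power by 7 dB:
print(clpc_power([-80.0, -84.0, -82.0], -75.0, 10.0))  # 17.0
```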

The metrics of quality of service (QoS) for each sensor type in a wireless sensor network can be associated with metrics for multimedia that describe the quality of fused information, e.g., throughput, delay, jitter, packet error rate, information correlation, etc. These QoS metrics are typically set at the highest, or application, layer of the protocol stack to ensure that performance requirements for each type of sensor data are satisfied. Application-layer metrics, in turn, depend on the support of the lower protocol layers: session, transport, network, data link (MAC), and physical. The dependencies of the QoS metrics on the performance of the higher layers of the Open Systems Interconnection (OSI) reference model of the WSN protocol, together with that of the lower three layers, are the basis for a comprehensive approach to QoS optimization for multiple sensor types in a general WSN model. The cross-layer design accounts for the distributed power consumption along energy-constrained routes and their constituent nodes. Following the author's previous work, the cross-layer interactions in the WSN protocol are represented by a set of concatenated protocol parameters and enabling resource levels. The "best" cross-layer designs to achieve optimal QoS are established by applying the general theory of martingale representations to the parameterized multivariate point processes (MVPPs) for discrete random events occurring in the WSN. Adaptive control of network behavior through the cross-layer design is realized through the parametric factorization of the stochastic conditional rates of the MVPPs. The cross-layer protocol parameters for optimal QoS are determined in terms of solutions to stochastic dynamic programming conditions derived from models of transient flows for heterogeneous sensor data and aggregate information over a finite time horizon. Markov state processes, embedded within the complex combinatorial history of WSN events, are more computationally

In future mobile networks, different radio access technologies will have to coexist. IEEE 802.21 MIH (Media-Independent Handover) provides primitive mechanisms that ease the implementation of a seamless vertical handover (inter-RAT handover) between different radio access technologies. However, it does not specify any handover execution mechanism. The first objective of this paper is to propose a novel MIHF (Media-Independent Handover Function) variant, renamed the interworking (IW) sublayer. The IW sublayer provides a seamless inter-RAT handover procedure between UMTS and WiMAX systems. It relies on a new intersystem retransmission mechanism with cross-layer interaction ability, providing lossless handover while keeping delays acceptable. The second objective of this paper is to design a new TCP snoop agent (TCP Snoop), which interacts with the IW sublayer in order to mitigate BDP (Bandwidth Delay Product) mismatch and to solve spurious RTO (Retransmission TimeOut) problems. The cross-layer effects on handover performance are evaluated by simulation. Our results show that cross-layer interaction between the IW sublayer and TCP Snoop smooths the handover procedure for TCP traffic. Additionally, this novel inter-RAT cross-layer scheme has the merit of keeping existing TCP protocol stacks unchanged.

Cross-layer design has been used in streaming video over wireless channels to optimize overall system performance. In this paper, we extend our previous work on the joint design of source rate control and congestion control for video streaming over the wired channel, and propose a cross-layer design approach for wireless video streaming. First, we extend the QoS-aware congestion control mechanism (TFRCC) proposed in our previous work to the wireless scenario, and provide a detailed discussion of how to enhance overall performance in terms of rate smoothness and responsiveness of the transport protocol. Then, we extend our previous joint design work to the wireless scenario and conduct a thorough performance evaluation. Simulation results show that by the cross-layer design of source rate control at the application layer and congestion control at the transport layer, and by taking advantage of MAC layer information, our approach can avoid the throughput degradation caused by wireless link errors and better support the QoS requirements of the application. Thus, the playback quality is significantly improved, while good performance of the transport protocol is still preserved.

We present a cross-layer optimized video rate adaptation and user scheduling scheme for multi-user wireless video streaming aiming for maximum quality of service (QoS) for each user, maximum system video throughput, and QoS fairness among users. These objectives are jointly optimized using a

A cellular CDMA network with voice and data communications is considered. Focusing on the downlink direction, we seek the overall performance improvement that can be achieved by cross-layer analysis and design, taking the physical, link, network, and transport layers into account. We are concerned with the role of each single layer as well as the interaction among layers, and propose algorithms/schemes accordingly to improve system performance. These proposals include adaptive scheduling at the link layer, a priority-based handoff strategy for network admission control, and an algorithm for the avoidance of TCP spurious timeouts at the transport layer. Numerical results show the performance gain of each proposed scheme over designs in which each layer operates independently in the wireless mobile network. We conclude that system performance in terms of capacity, throughput, dropping probability, outage, power efficiency, delay, and fairness can be enhanced by jointly considering the interactions across layers.

By combining adaptive modulation and automatic repeat request, a cross-layer design (CLD) scheme for a MIMO system with antenna selection (AS) and imperfect feedback is presented, and the corresponding performance is studied. Subject to a target packet loss rate and a fixed power constraint, variable switching thresholds on the fading gain are derived. From these results, closed-form expressions for the average spectral efficiency (SE) and packet error rate (PER) of the system are obtained. These expressions include the expressions under perfect channel state information as special cases and provide good performance evaluation for the system. Numerical results show that the proposed CLD scheme with antenna selection has higher SE than the existing CLD scheme with space-time block coding, and that the CLD scheme with variable switching thresholds outperforms that with conventional fixed switching thresholds.

A wireless ad hoc sensor network is a configuration for area surveillance that affords rapid, flexible deployment in arbitrary threat environments. There is no infrastructure support, and sensor nodes communicate with each other only when they are in transmission range. The nodes are severely resource-constrained, with limited processing, memory, and power capacities, and must operate cooperatively to fulfill a common mission in typically unattended modes. In a wireless sensor network (WSN), each sensor at a node can locally observe some underlying physical phenomenon and send a quantized version of the observation to sink (destination) nodes via wireless links. Since the wireless medium can be easily eavesdropped, links can be compromised by intrusion attacks from nodes that may mount denial-of-service attacks or insert spurious information into routing packets, leading to routing loops, long timeouts, impersonation, and node exhaustion. A cross-layer design based on protocol-layer interactions is proposed for the detection and identification of various intrusion attacks on WSN operation. A feature set is formed from selected cross-layer parameters of the WSN protocol to detect and identify security threats due to intrusion attacks. A separate protocol is not constructed from the cross-layer design; instead, security attributes and quantified trust levels at and among nodes, established during data exchanges, complement customary WSN metrics of energy usage, reliability, route availability, and end-to-end quality-of-service (QoS) provisioning. Statistical pattern recognition algorithms are applied that use feature-set patterns observed during network operations, viewed as security audit logs. These algorithms provide the "best" network global performance in the presence of various intrusion attacks. A set of mobile (software) agents distributed at the nodes implements the algorithms, by moving among the layers involved in the network response at each active node

High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements by optimizing one single aspect. Therefore, a Cross-Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize the transmission reliability, energy efficiency, and lifetime of WBANs across several layers. Firstly, since the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced; in theory, it maximizes energy efficiency under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes remains unbalanced even with optimized transmission power because of their different locations in the network topology. In addition, packet size also has an impact on the final performance metrics. Therefore, a synthesized cross-layer optimization method is proposed. With this method, the transmission power of nodes with more residual energy is increased while a suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than that reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.
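The relay decision step can be illustrated with a simple balancing rule: prefer the relay offering the most residual energy per unit of link energy cost, so nearly depleted nodes are spared. The metric and numbers below are our illustrative stand-ins for the paper's algorithm:

```python
def choose_relay(candidates):
    """candidates: dict relay_id -> (residual_energy_J, link_energy_cost_J).
    Choose the relay with the most residual energy per joule spent on the
    link, which steers traffic away from nearly depleted nodes and helps
    balance network energy consumption."""
    return max(candidates, key=lambda r: candidates[r][0] / candidates[r][1])

# Relay B wins: 2.0 J remaining / 0.02 J per packet = 100 packets of lifetime
# per unit cost, versus 50 for A and 30 for C.
nodes = {"A": (5.0, 0.10), "B": (2.0, 0.02), "C": (9.0, 0.30)}
print(choose_relay(nodes))
```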

In recent years, Wireless Mesh Network (WMN) technologies have received significant attention. WMNs not only inherit the advantages of ad hoc networks but also provide a hierarchical multi-interface architecture. Transmission power control and routing path selection are critical issues in past research on multihop networks. Variable transmission power levels lead to different network connectivity and interference. Further, routing path selections among different radio interfaces also produce different intra-/inter-flow interference. These features tightly affect network performance. Most related works on routing protocol design do not consider transmission power control and a multi-interface environment simultaneously. In this paper, we propose a cross-layer routing protocol called M2iRi2 which coordinates transmission power control with intra-/inter-flow interference considerations as routing metrics. Each radio interface calculates the potential tolerable added transmission interference at the physical layer. When route discovery starts, M2iRi2 adopts the appropriate power level to evaluate the quality of each interface along candidate paths. The simulation results demonstrate that our design can improve both network throughput and end-to-end delay.

In this paper, we propose a cross-layer design which optimizes the energy efficiency of a potential future 5G spectrum-sharing environment in two sharing scenarios. In the first scenario, underlay sharing is considered. We propose and minimize a modified energy per good bit (MEPG) metric with respect to the spectrum sharing user’s transmission power and media access frame length. The cellular (legacy) users are protected by an outage probability constraint. To optimize the targeted non-convex problem, we utilize generalized convexity theory and verify the problem’s strictly pseudoconvex structure. We also derive analytical expressions for the optimal resources. In the second scenario, we minimize a generalized MEPG function while considering the probabilistic activity of cellular users and its impact on the MEPG performance of the spectrum sharing users. Finally, we derive the associated optimal resource allocation for this problem. Selected numerical results show the improvement of the proposed system compared with other systems.
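The role of pseudoconvexity can be illustrated numerically: under an assumed success-probability model, the energy-per-good-bit curve is unimodal in the transmit power, so a simple golden-section search finds the unique minimizer. All model parameters and names below are illustrative assumptions, not the paper's:

```python
import math

def epg(p, p_circuit=0.1, rate=1e6, k=2.0):
    """Energy per good bit for transmit power p (W): total power divided by
    goodput, with an illustrative exponential success model 1 - exp(-k*p).
    This captures the shape of an EPG-style metric, not the paper's model."""
    good_bits_per_s = rate * (1.0 - math.exp(-k * p))
    return (p + p_circuit) / good_bits_per_s

def minimize_epg(p_lo=1e-3, p_hi=5.0, iters=200):
    """Golden-section search, valid because this EPG model is pseudoconvex
    (unimodal) in p: too little power wastes circuit energy on failed
    packets, too much wastes transmit energy for marginal goodput."""
    gr = (math.sqrt(5) - 1) / 2
    a, b = p_lo, p_hi
    for _ in range(iters):
        c, d = b - gr * (b - a), a + gr * (b - a)
        if epg(c) < epg(d):
            b = d
        else:
            a = c
    return (a + b) / 2

print(round(minimize_epg(), 3))  # optimal transmit power in watts
```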

In this paper, we investigate the cross-layer design of joint channel access and transmission rate adaptation in CR networks with multiple channels, for both centralized and decentralized cases. Our target is to maximize the throughput of the CR network under a transmission power constraint, taking spectrum sensing errors into account. In the centralized case, this problem is formulated as a special constrained Markov decision process (CMDP), which can be solved by the standard linear programming (LP) method. As the complexity of finding the optimal policy by LP increases exponentially with the size of the action space and state space, we further apply action set reduction and state aggregation to reduce the complexity without loss of optimality. Meanwhile, for convenience of implementation, we also consider pure policy design and analyze the corresponding characteristics. In the decentralized case, where only local information is available and there is no coordination among the CR users, we prove the existence of a constrained Nash equilibrium and obtain the optimal decentralized policy. Finally, in the case where the traffic load parameters of the licensed users are unknown to the CR users, we propose two methods to estimate the parameters for two different cases. Numerical results validate the theoretical analysis.
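For intuition, the constrained channel-access problem can be shown at toy scale. The model below (two sensed channel states, transmit/idle actions, an average-power constraint) uses invented numbers and enumerates only deterministic stationary policies; the LP over occupation measures mentioned above additionally covers randomized policies:

```python
from itertools import product

# Toy 2-state CMDP for CR channel access (illustrative numbers, not the
# paper's model). State = sensed channel (0 idle, 1 busy); action 1 = transmit.
P = [[0.8, 0.2], [0.3, 0.7]]          # channel transition matrix (action-free)
def reward(s, a): return (1.0 if s == 0 else 0.2) * a   # expected throughput
def cost(s, a):   return 1.0 * a                        # transmit power
POWER_LIMIT = 0.7                                       # average-power budget

def stationary():
    # Stationary distribution of a 2-state chain: pi0 * P[0][1] = pi1 * P[1][0].
    pi0 = P[1][0] / (P[0][1] + P[1][0])
    return [pi0, 1.0 - pi0]

def best_deterministic_policy():
    """Enumerate the four deterministic stationary policies and keep the
    best one meeting the average-power constraint."""
    pi = stationary()
    best, best_r = None, -1.0
    for acts in product((0, 1), repeat=2):
        avg_r = sum(pi[s] * reward(s, acts[s]) for s in (0, 1))
        avg_c = sum(pi[s] * cost(s, acts[s]) for s in (0, 1))
        if avg_c <= POWER_LIMIT and avg_r > best_r:
            best, best_r = acts, avg_r
    return best, best_r

print(best_deterministic_policy())  # ((1, 0), 0.6): transmit only when idle
```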

Wireless ad hoc networks can experience extreme fluctuations in transmission traffic in the Internet of Things, which is widely used today. Currently, the most crucial issues requiring attention for wireless ad hoc networks are making the best use of low-traffic periods, reducing congestion during high-traffic periods, and improving transmission performance. To solve these problems, the present paper proposes a novel cross-layer transmission model based on decentralized coded caching in the physical layer and a content division multiplexing scheme in the media access control layer. Simulation results demonstrate that the proposed model effectively addresses these issues by substantially increasing the throughput and successful transmission rate compared to existing protocols, without a negative influence on delay, particularly for large-scale networks under conditions of highly contrasting high- and low-traffic periods.

In this paper, the cross-layer security problem of cyber-physical systems (CPSs) is investigated from a game-theoretic perspective. The physical dynamics of the plant are captured by a stochastic differential game, with cyber-physical influence taken into account. A sufficient and necessary condition for the existence of state-feedback equilibrium strategies is given. The attack-defence cyber interactions are formulated by a Stackelberg game intertwined with the stochastic differential game in the physical layer. The condition under which the Stackelberg equilibrium is unique, together with the corresponding analytical solutions, is provided. An algorithm is proposed for obtaining a hierarchical security strategy by solving the coupled games, which ensures the operational normalcy and cyber security of the CPS subject to uncertain disturbances and unexpected cyberattacks. Simulation results are given to show the effectiveness and performance of the proposed algorithm.

In this thesis, we investigate two different system infrastructures in the underlay cognitive radio network, in which two popular techniques, cross-layer design and cooperative communication, are considered, respectively. In particular, we introduce Aggressive Adaptive Modulation and Coding (A-AMC) into the cross-layer design and obtain, in closed form, the optimal boundary points for choosing between the AMC and A-AMC transmission modes, taking into account the Channel State Information (CSI) from the secondary transmitter to both the primary receiver and the secondary receiver. Moreover, for the cooperative communication design, we consider three different relay selection schemes: Partial Relay Selection, Opportunistic Relay Selection, and Threshold Relay Selection. The Probability Density Functions (PDFs) of the Signal-to-Noise Ratio (SNR) in each hop for the different selection schemes are provided, and exact closed-form expressions for the end-to-end packet loss rate in the secondary link, considering the cooperation of the Decode-and-Forward (DF) relay, are then derived for the different relay selection schemes.

There are many technical challenges in designing large-scale underwater sensor networks, especially sensor node localization. Although many papers have studied large-scale sensor node localization, previous studies mainly consider the localization algorithm without a cross-layer design for localization. In this paper, by utilizing the network hierarchical structure of underwater sensor networks, we propose a new large-scale underwater acoustic localization scheme based on cross-layer design. In this scheme, localization is performed in a hierarchical way, and the whole localization process involves the physical layer, data link layer, and application layer. We add pipeline parameters matched to the acoustic channel into the MAC protocol to increase the realism of the large-scale underwater sensor network model, and analyze different localization algorithms. We conduct extensive simulations, and our results show that both the MAC layer protocol and the localization algorithm affect the localization result, which can balance the trade-off between localization accuracy, localization coverage, and communication cost.
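The anchor-based step of a hierarchical localization scheme can be sketched with classic linearized trilateration; handling of the acoustic ranging noise that dominates underwater is omitted, and all names here are ours:

```python
import math

def trilaterate(anchors, dists):
    """Estimate a node's 2-D position from three anchor positions and range
    measurements by linearizing the circle equations (subtracting anchor 1's
    equation from the others) and solving the resulting 2x2 linear system
    with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# With perfect ranges from a node at (3, 4), the estimate is exact:
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist((3.0, 4.0), a) for a in anchors]
print(trilaterate(anchors, dists))  # (3.0, 4.0)
```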

A hierarchical cross-layer design approach is proposed to increase energy efficiency in ad hoc networks through the joint adaptation of nodes' transmit powers and route selection. The design maintains the advantages of the classic OSI model while accounting for the cross-coupling between layers through information sharing. The proposed joint power control and routing algorithm is shown to significantly increase the overall energy efficiency of the network, at the expense of a moderate increase in complexity. Performance enhancement of the joint design using multiuser detection is also investigated, and it is shown that the use of multiuser detection can significantly increase the capacity of the ad hoc network for a given level of energy consumption.
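The routing half of such a joint design can be sketched as a shortest-path search over per-hop transmit energies supplied by the power-control layer; a multihop chain of low-power links can then beat one high-power direct link. The graph and costs below are illustrative, not from the paper:

```python
import heapq

def min_energy_route(links, src, dst):
    """Dijkstra over per-hop transmit energies: the network layer picks the
    route minimizing total transmission energy, given the per-link powers
    chosen by the physical layer.
    links: dict node -> list of (neighbor, energy_cost)."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in links.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

# Three low-power hops (total 6.0) beat the direct high-power link (9.0):
links = {"s": [("d", 9.0), ("a", 2.0)], "a": [("b", 2.0)], "b": [("d", 2.0)]}
print(min_energy_route(links, "s", "d"))  # (['s', 'a', 'b', 'd'], 6.0)
```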

This paper presents a cross-layer framework for designing and optimizing energy-efficient cache memories made of deeply-scaled FinFET devices. The proposed design framework spans the device, circuit, and architecture levels and considers both super- and near-threshold modes of operation. Initially, at the device level, seven FinFET devices on a 7-nm process technology are designed, in which only one geometry-related parameter (e.g., fin width, gate length, gate underlap) is changed per device. Next, at the circuit level, standard 6T and 8T SRAM cells made of these 7-nm FinFET devices are characterized and compared in terms of static noise margin, access latency, leakage power consumption, etc. Finally, cache memories with all different combinations of devices and SRAM cells are evaluated at the architecture level using a modified version of the CACTI tool with FinFET support and other considerations for deeply-scaled technologies. Using this design framework, it is observed that an L1 cache memory made of longer-channel FinFET devices operating in the near-threshold regime achieves the minimum-energy operation point.

Full Text Available Abstract The orthogonal frequency-division multiple access (OFDMA) system has the advantages of flexible subcarrier allocation and adaptive modulation with respect to channel conditions. However, transmission overhead is required in each frame to broadcast the arrangement of radio resources to all mobile stations within the coverage of the same base station. This overhead greatly affects the utilization of valuable radio resources. In this paper, a cross-layer scheme is proposed to reduce the number of traffic bursts at the downlink of an OFDMA wireless access network so that the overhead of the media access protocol (MAP) field can be minimized. The proposed scheme considers the priorities and the channel conditions of quality of service (QoS) traffic streams to arrange for them to be sent with minimum bursts in a heuristic manner. In addition, the trade-off between the degradation of the modulation level and the reduction of traffic bursts is investigated. Simulation results show that the proposed scheme can effectively reduce the traffic bursts and, therefore, increase resource utilization.
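The burst-merging heuristic can be sketched as follows: streams are taken in priority order, and streams whose channel conditions support the same modulation level are packed into one downlink burst, so one MAP entry covers several streams. The SNR-to-modulation mapping and the stream data are illustrative assumptions, not the paper's exact heuristic.

```python
# Greedy sketch: group QoS streams by achievable modulation level so
# fewer bursts (and MAP entries) are needed. Thresholds are assumed.

def modulation_level(snr_db):
    if snr_db >= 20:
        return "64QAM"
    if snr_db >= 12:
        return "16QAM"
    return "QPSK"

def merge_bursts(streams):
    """streams: list of (priority, snr_db).
    Returns modulation -> list of stream priorities in one burst."""
    bursts = {}
    for prio, snr in sorted(streams):  # highest priority (lowest number) first
        bursts.setdefault(modulation_level(snr), []).append(prio)
    return bursts

b = merge_bursts([(1, 22), (2, 14), (3, 21), (4, 8)])
print(b)  # three bursts instead of four MAP entries
```

The trade-off noted in the abstract appears when a stream is deliberately demoted to a neighbor's (lower) modulation level to join its burst: MAP overhead shrinks but spectral efficiency drops.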

Full Text Available The issues inherent in caring for an ever-increasing aged population have been the subject of endless debate and continue to be a hot topic for political discussion. The use of hospital-based facilities for the monitoring of chronic physiological conditions is expensive and ties up key healthcare professionals. The introduction of wireless sensor devices as part of a Wireless Body Area Network (WBAN) integrated within an overall eHealth solution could bring a step change in the remote management of patient healthcare. Sensor devices small enough to be placed either inside or on the human body can form a vital part of an overall health monitoring network. An effectively designed energy efficient WBAN should have a minimal impact on the mobility and lifestyle of the patient. WBAN technology can be deployed within a hospital, care home environment or in the patient’s own home. This study is a review of the existing research in the area of WBAN technology and in particular protocol adaptation and energy efficient cross-layer design. The research reviews the work carried out across various layers of the protocol stack and highlights how the latest research proposes to resolve the various challenges inherent in remote continual healthcare monitoring.

Full Text Available Abstract This article proposes an integrated framework for adaptive QoS provision in IEEE 802.16e broadband wireless access networks based on cross-layer design. On one hand, an efficient admission control (AC) algorithm is proposed along with a semi-reservation scheme to guarantee the connection-level QoS. First, to guarantee the service continuity for handoff connections and resource efficiency, our semi-reservation scheme considers both users' handoff probability and average resource consumption together, which effectively avoids resource over-reservation and insufficient reservation. For AC, a new/handoff connection is accepted only when the target cell has enough resource to afford both instantaneous and average resource consumption to meet the average source rate request. On the other hand, a joint resource allocation and packet scheduling scheme is designed to provide packet-level QoS guarantee in terms of "QoS rate", which can ensure fairness for the services with identical priority level in case of bandwidth shortage. Particularly, an enhanced bandwidth request scheme is designed to reduce unnecessary BR delay and redundant signaling overhead caused by the existing one in IEEE 802.16e, which further improves the packet-level QoS performance and resource efficiency for uplink transmission. Simulation results show that the proposed approach not only balances the tradeoff among connection blocking rate, connection dropping rate, and connection failure rate, but also achieves low mean packet dropping rate (PDR), small deviation of PDR, and low QoS outage rate. Moreover, high resource efficiency is ensured.
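The admission test described above can be sketched in a few lines: a connection is admitted only if the cell can cover both its instantaneous and its average resource demand after setting aside a semi-reservation weighted by handoff probability. The function name, arguments, and numbers are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of handoff-aware admission control with semi-reservation.
# All parameters are hypothetical; the paper's scheme is richer.

def admit(capacity, used, inst_demand, avg_demand,
          handoff_prob, avg_handoff_load):
    """Admit iff free capacity (after semi-reservation for expected
    handoff traffic) covers both instantaneous and average demand."""
    reserved = handoff_prob * avg_handoff_load  # semi-reservation, not full
    free = capacity - used - reserved
    return free >= inst_demand and free >= avg_demand

print(admit(100, 60, 15, 10, 0.3, 40))  # True: 100-60-12 = 28 covers both
print(admit(100, 80, 15, 10, 0.3, 40))  # False: only 8 units remain free
```

Weighting the reservation by handoff probability is what avoids the over-reservation a fixed guard band would cause when handoffs are rare.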

Full Text Available A novel admission control (AC policy is proposed for the uplink of a cellular CDMA beamforming system. An approximated power control feasibility condition (PCFC, required by a cross-layer AC policy, is derived. This approximation, however, increases outage probability in the physical layer. A truncated automatic retransmission request (ARQ scheme is then employed to mitigate the outage problem. In this paper, we investigate the joint design of an AC policy and an ARQ-based outage mitigation algorithm in a cross-layer context. This paper provides a framework for joint AC design among physical, data-link, and network layers. This enables multiple quality-of-service (QoS requirements to be more flexibly used to optimize system performance. Numerical examples show that by appropriately choosing ARQ parameters, the proposed AC policy can achieve a significant performance gain in terms of reduced outage probability and increased system throughput, while simultaneously guaranteeing all the QoS requirements.

IEEE 802.16m is an advanced air interface standard which is under development for IMT-Advanced systems, known as 4G systems. IEEE 802.16m is designed to provide a high data rate and a Quality of Service (QoS) level in order to meet user service requirements, and is especially suitable for mobile environments. There are several factors that have great impact on such requirements. As one of the major factors, we mainly focus on latency issues. In IEEE 802.16m, an enhanced layer 2 handover scheme, described as Entry Before Break (EBB), was proposed and adopted to reduce handover latency. EBB provides significant handover interruption time reduction with respect to the legacy IEEE 802.16 handover scheme. Fast handovers for mobile IPv6 (FMIPv6) was standardized by the Internet Engineering Task Force (IETF) in order to reduce handover interruption time from the IP layer perspective. Since FMIPv6 utilizes link layer triggers to reduce handover latency, it is critical to jointly design FMIPv6 with its underlying link layer protocol. However, an FMIPv6 design based on the new EBB handover scheme has not yet been proposed. In this paper, we propose an improved cross-layer design for FMIPv6 based on the IEEE 802.16m EBB handover. In comparison with the conventional FMIPv6 based on the legacy IEEE 802.16 network, the overall handover interruption time can be significantly reduced by employing the proposed design. Benefits of this improvement on latency reduction for mobile user applications are thoroughly investigated with both numerical analysis and simulation on various IP applications.
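A back-of-the-envelope sketch shows why the combination helps: with Entry Before Break the link-layer re-entry overlaps the old connection, and FMIPv6 pre-establishes the new care-of address, so only a short switch gap and the binding update remain in the interruption. The millisecond values below are illustrative assumptions, not measurements from the paper.

```python
# Toy latency budget: legacy break-before-make handover vs. an assumed
# EBB + FMIPv6 combination where re-entry and address configuration
# happen before the break. Numbers are hypothetical.

legacy = {"l2_reentry": 100, "movement_detect": 50,
          "address_config": 200, "binding_update": 50}
ebb_fmipv6 = {"switch_gap": 30, "binding_update": 50}

legacy_ms = sum(legacy.values())
joint_ms = sum(ebb_fmipv6.values())
print(legacy_ms, "ms vs", joint_ms, "ms")
```

The point of the sketch is structural: the largest legacy components are removed from the interruption window entirely rather than merely shortened.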

In the distributed operations of route discovery and maintenance, strong interaction occurs across mobile ad hoc network (MANET) protocol layers. Quality of service (QoS) requirements of multimedia service classes must be satisfied by the cross-layer protocol, along with minimization of the distributed power consumption at nodes and along routes subject to battery-limited energy constraints. In previous work by the author, cross-layer interactions in the MANET protocol are modeled in terms of a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Determination of the "best" cross-layer design is carried out using the optimal control of martingale representations of the MVPPs. In contrast to the competitive interaction among nodes in a MANET for multimedia services using limited resources, the interaction among the nodes of a wireless sensor network (WSN) is distributed and collaborative, based on the processing of data from a variety of sensors at nodes to satisfy common mission objectives. Sensor data originates at the nodes at the periphery of the WSN, is successively transported to other nodes for aggregation based on information-theoretic measures of correlation and ultimately sent as information to one or more destination (decision) nodes. The "multimedia services" in the MANET model are replaced by multiple types of sensors, e.g., audio, seismic, imaging, thermal, etc., at the nodes; the QoS metrics associated with MANETs become those associated with the quality of fused information flow, i.e., throughput, delay, packet error rate, data correlation, etc. Significantly, the essential analytical approach to MANET cross-layer optimization, now based on the MVPPs for discrete random events occurring in the WSN, can be applied to develop the stochastic characteristics and optimality conditions for cross-layer designs of sensor network protocols. Functional dependencies of WSN performance metrics are described in

In this dissertation, we present novel physical (PHY) and cross-layer design guidelines and resource adaptation algorithms to improve the security and user experience in the future wireless networks. Physical and cross-layer wireless security measures can provide stronger overall security with high efficiency and can also provide better…

To address the wireless network congestion control problem, we define a classification of congestion degree and propose a directed cooperative path net mechanism, guided by cross-layer design methods and node cooperation principles for wireless networks. Considering the virtual collision and “starved” phenomena in congested networks, the QRD mechanism and the channel competition mechanism QPCG are proposed, introducing game theory into the cross-layer design. Simulation result...

Full Text Available We present Miracle, a novel framework which extends ns2 to facilitate the simulation and the design of beyond 4G networks. Miracle enhances ns2 by providing an efficient and embedded engine for handling cross-layer messages and, at the same time, enabling the coexistence of multiple modules within each layer of the protocol stack. We also present a novel framework developed as an extension of Miracle called Miracle PHY and MAC. This framework facilitates the development of more realistic Channel, PHY and MAC modules, considering features currently lacking in most state-of-the-art simulators, while at the same time giving a strong emphasis on code modularity, interoperability and reusability. Finally, we provide an overview of the wireless technologies implemented in Miracle, discussing in particular the models for the IEEE 802.11, UMTS and WiMAX standards and for Underwater Acoustic Networks. We observe that, thanks to Miracle and its extensions, it is possible to carefully simulate complex network architectures at all the OSI layers, from the physical reception model to standard applications and system management schemes. This makes it possible to obtain a comprehensive view of all the interactions among network components, which play an important role in many research areas, such as cognitive networking and cross-layer design.

There is a trend towards using wireless technologies in networked control systems. However, the adverse properties of the radio channels make it difficult to design and implement control systems in wireless environments. To attack the uncertainty in available communication resources in wireless control systems closed over WLAN, a cross-layer adaptive feedback scheduling (CLAFS) scheme is developed, which takes advantage of the co-design of control and wireless communications. By exploiting cross-layer design, CLAFS adjusts the sampling periods of control systems at the application layer based on information about deadline miss ratio and transmission rate from the physical layer. Within the framework of feedback scheduling, the control performance is maximized through controlling the deadline miss ratio. Key design parameters of the feedback scheduler are adapted to dynamic changes in the channel condition. An event-driven invocation mechanism for the feedback scheduler is also developed. Simulation results show that the proposed approach is efficient in dealing with channel capacity variations and noise interference, thus providing an enabling technology for control over WLAN.
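The cross-layer feedback loop described above can be sketched as a simple integral-style controller: the sampling period of a control loop is lengthened when the deadline miss ratio reported from the network exceeds a setpoint, and shortened (down to a floor) when the channel has slack. The gain, setpoint, and bounds are illustrative assumptions, not CLAFS's actual parameters.

```python
# Minimal sketch of adaptive feedback scheduling: trade sampling rate
# for schedulability based on observed deadline miss ratio.

def adjust_period(period_ms, miss_ratio, target=0.05,
                  gain=200.0, lo=10, hi=100):
    """Integral-style update: period grows with excess miss ratio,
    shrinks when the channel is underloaded. Bounds keep the loop sane."""
    period_ms += gain * (miss_ratio - target)
    return max(lo, min(hi, period_ms))

p = 20.0
for miss in [0.20, 0.15, 0.02, 0.02]:  # channel degrades, then recovers
    p = adjust_period(p, miss)
print(round(p, 1))  # settles back toward shorter periods as misses drop
```

The event-driven invocation mentioned in the abstract would call `adjust_period` only when the miss-ratio estimate changes significantly, rather than on a fixed timer.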

This paper introduces an innovative QoS provisioning scheme in home networks, utilizing the Optical Wireless (OW) MAC specification proposed by the Home Gigabit Access (OMEGA) project. The specification is characterized by its resource reservation protocol and its use of Time Division Multiple Access (TDMA). By adopting the OW MAC within the widely supported Universal Plug and Play – Quality of Service (UPnP-QoS) architecture in a simulated home domain, algorithms for cross-layer mapping of QoS requirements are proposed. Compared to utilizing WLAN MAC, our scheme is able to provide guaranteed QoS levels to streams with different priorities, especially to delay-sensitive services. The efficiency of the algorithms and the network performance are validated by analyzing the results collected from OPNET simulation models.

In RFID systems, the performance of each reader such as interrogation range and tag recognition rate may suffer from interferences from other readers. Since the reader interference can be mitigated by output signal power control, spectral and/or temporal separation among readers, the system performance depends on how to adapt the various reader arbitration metrics such as time, frequency, and output power to the system environment. However, complexity and difficulty of the optimization problem increase with the variety of the arbitration metrics. Thus, most proposals in previous studies have primarily aimed to prevent reader collision by considering only one or two arbitration metrics. In this paper, we propose a novel cross-layer optimization design based on the concept of combining time division, frequency division, and power control not only to solve the reader interference problem, but also to achieve multiple objectives such as minimum interrogation delay, maximum reader utilization, and energy efficiency. Based on the priority of the multiple objectives, our cross-layer design optimizes the system sequentially by means of mixed-integer linear programming. In spite of the multi-stage optimization, the optimization design is formulated as a concise single mathematical form by properly assigning a weight to each objective. Numerical results demonstrate the effectiveness of the proposed optimization design.
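The "concise single mathematical form" idea, collapsing prioritized objectives into one weighted score, can be illustrated with lexicographic weights: each weight is large enough that a higher-priority term always dominates the lower ones. The candidate schedules and their metrics are illustrative assumptions, and plain enumeration stands in here for the paper's mixed-integer linear programming.

```python
# Toy weighted multi-objective selection: delay dominates reader
# utilization, which dominates energy. Candidates are hypothetical
# reader schedules with pre-computed metrics.

candidates = [
    {"name": "A", "delay": 4, "idle_readers": 1, "energy": 9},
    {"name": "B", "delay": 4, "idle_readers": 2, "energy": 5},
    {"name": "C", "delay": 6, "idle_readers": 0, "energy": 3},
]
# weights chosen so each higher-priority term outweighs any possible
# value of the lower-priority terms (lexicographic ordering)
W1, W2, W3 = 1_000_000, 1_000, 1
score = lambda c: W1 * c["delay"] + W2 * c["idle_readers"] + W3 * c["energy"]
best = min(candidates, key=score)
print(best["name"])  # A: minimal delay first, then fewer idle readers
```

With an MILP solver the same weighting appears in the objective function, so one solve respects the whole priority order instead of running one optimization per stage.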

For a cognitive radio relaying network, we propose a cross-layer design combining information-guided transmission at the physical layer and network coding at the network layer. With this design, a common relay is exploited to help

This paper addresses synchronisation between sensor nodes. In the context of wireless sensor networks, the energy cost induced by synchronisation must be taken into account, as it can represent the majority of the energy consumed. A recognized difficulty is designing a fine-grained synchronisation protocol that remains sufficiently robust to the intermittent energy available at the sensors. This paper therefore focuses on performance and energy saving, in particular the optimisation of the synchronisation protocol using cross-layer design methods such as synchronisation between layers. Our approach balances the energy consumption between the sensors and chooses the cluster head with the highest residual energy, in order to guarantee the reliability, integrity and continuity of communication (i.e. maximising the network lifetime).
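The cluster-head election rule described above reduces to picking, within each cluster, the node with the highest residual energy, which spreads the energy drain over time. A minimal sketch, with hypothetical node data:

```python
# Sketch of residual-energy-based cluster-head election.

def elect_cluster_head(nodes):
    """nodes: dict of node_id -> residual energy (J).
    Returns the id of the node with the most remaining energy."""
    return max(nodes, key=nodes.get)

cluster = {"n1": 2.1, "n2": 3.7, "n3": 0.9}
print(elect_cluster_head(cluster))  # n2
```

Re-running the election periodically (as energy levels fall) is what balances consumption; a one-time election would simply drain the initially strongest node.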

monitoring capabilities to include real-time monitoring of OSNR and polarization mode dispersion (PMD) to enable dynamic wavelength switching and selective restoration. Chapter 4 explains the author's contributions in designing dynamic networking at the sub-wavelength switching granularity, which can provide greater network efficiency due to its finer granularity. To support dynamic switching, regeneration, adding/dropping, and control decisions on each individual packet, the cross-layer enabled node architecture is enhanced with a FPGA controller that brings much more precise timing and control to the switching, OPM, and control planes. Furthermore, QoS-aware packet protection and dynamic switching, dropping, and regeneration functionalities were experimentally demonstrated in a multi-node network. Chapter 5 describes a technique to perform optical grooming, a process of optically combining multiple incoming data streams into a single data stream, which can simultaneously achieve greater bandwidth utilization and increased spectral efficiency. In addition, an experimental demonstration highlighting a fully functioning multi-node, agile optical networking platform is detailed. Finally, a summary and discussion of future work is provided in Chapter 6. The future of the Internet is very exciting, filled with not-yet-invented applications and services driven by cloud computing and Internet-of-Things. The author is cautiously optimistic that agile, dynamically reconfigurable optical networking is the solution to realizing this future.

Full Text Available Interference, in wireless networks, is a central phenomenon when multiple uncoordinated links share a common communication medium. The study of the interference channel was initiated by Shannon in 1961, and since then this problem has been thoroughly studied at the information-theoretic level, but its characterization still remains an open issue. When multiple uncoordinated links share a common medium, the effect of interference is a crucial limiting factor for network performance. In this work, using cross-layer cooperative communication techniques, we study how to compensate for interference in the context of wireless biomedical networks, where many links transferring biomedical or other health-related data may be formed and suffer from all other interfering transmissions, to allow successful receptions and improve the overall network performance. We define the interference-limited communication range as the critical communication region around a receiver, with a number of surrounding interfering nodes, within which a successful communication link can be formed. Our results indicate that more successful transmissions can be achieved by adapting the transmission rate and power to the path loss exponent and to the selected mode of the underlying communication technique, allowing interference mitigation and, where possible, lower power consumption and higher achievable transmission rates.

Different from the traditional wired network, the fundamental cause of transmission congestion in wireless ad hoc networks is medium contention. How to utilize the congestion state from the MAC (Media Access Control) layer to adjust the transmission rate is core work for transport protocol design. However, recent works have shown that the existing cross-layer congestion detection solutions are too complex to be deployed or not able to characterize the congestion accurately. We first propose a new congestion metric called frame transmission efficiency (i.e., the ratio of successful transmission delay to the frame service delay), which describes the medium contention in a fast and accurate manner. We further present the design and implementation of RECN, a general supporting scheme that adjusts the transport sending rate through standard ECN (Explicit Congestion Notification) signaling driven by the MAC-layer frame transmission efficiency. Our method can be deployed on commodity switches with small firmware updates, while making no modification on end hosts. We integrate RECN transparently (i.e., without modification) with TCP on NS2 simulation. The experimental results show that RECN remarkably improves network goodput across multiple concurrent TCP flows.
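The metric and the marking rule can be sketched directly from the definition above: frame transmission efficiency (FTE) is the ratio of successful transmission delay to the total frame service delay (which includes backoff and retries), and a frame is ECN-marked when FTE drops below a threshold. The threshold value and the delay figures are illustrative assumptions.

```python
# Sketch of the frame-transmission-efficiency congestion metric and an
# assumed ECN marking rule built on it.

def fte(success_delay_us, service_delay_us):
    """Fraction of the frame's service time spent actually transmitting
    successfully; low values mean heavy medium contention."""
    return success_delay_us / service_delay_us

def should_mark_ecn(success_delay_us, service_delay_us, threshold=0.5):
    return fte(success_delay_us, service_delay_us) < threshold

print(should_mark_ecn(300, 400))   # False: medium lightly contended
print(should_mark_ecn(300, 1200))  # True: most of the delay is contention
```

Because FTE is computed per frame at the MAC layer, it reacts within one frame service time, which is what makes it faster than queue-length-based congestion signals.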

The geometric rate of improvement of transistor size and integrated circuit performance known as Moore's Law has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. Looking forward, increasing unpredictability threatens our ability to continue scaling integrated circuits at Moore's Law rates. As the transistors and wires that make up integrated circuits become smaller, they display both greater differences in behavior among devices designed to be identical and greater vulnerability to transient and permanent faults. Conventional design techniques expend energy to tolerate this unpredictability by adding safety margins to a circuit's operating voltage, clock frequency or charge stored per bit. However, the rising energy costs needed to compensate for increasing unpredictability are rapidly becoming unacceptable in today's environment where power consumption is often the limiting factor on integrated circuit performance and energy efficiency is a national concern. Reliability and energy consumption are both reaching key inflection points that, together, threaten to reduce or end the benefits of feature size reduction. To continue beneficial scaling, we must use a cross-layer, full-system design approach to reliability. Unlike current systems, which charge every device a substantial energy tax in order to guarantee

Full Text Available We propose a cross-layer optimization strategy that jointly optimizes the application layer, the data-link layer, and the physical layer of a wireless protocol stack using an application-oriented objective function. The cross-layer optimization framework provides efficient allocation of wireless network resources across multiple types of applications run by different users to maximize network resource usage and user perceived quality of service. We define a novel optimization scheme based on the mean opinion score (MOS as the unifying metric over different application classes. Our experiments, applied to scenarios where users simultaneously run three types of applications, namely voice communication, streaming video and file download, confirm that MOS-based optimization leads to significant improvement in terms of user perceived quality when compared to conventional throughput-based optimization.
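The MOS-based allocation idea can be illustrated with a greedy stand-in for the paper's optimization: rate units are handed out one at a time to whichever application gains the most mean opinion score. The MOS curves below are illustrative concave assumptions (scores saturate at 4.5), not the paper's calibrated models.

```python
# Greedy sketch of MOS-maximizing rate allocation across three
# application types. All curves and sensitivities are hypothetical.
import math

def mos(app, rate):
    """Assumed concave MOS curve: diminishing returns in rate, capped
    at 4.5; each application values rate differently."""
    sensitivity = {"voice": 0.9, "video": 0.25, "download": 0.12}[app]
    return min(4.5, 1.0 + 3.5 * (1 - math.exp(-sensitivity * rate)))

alloc = {"voice": 0, "video": 0, "download": 0}
for _ in range(20):                      # 20 rate units to distribute
    gain = lambda a: mos(a, alloc[a] + 1) - mos(a, alloc[a])
    alloc[max(alloc, key=gain)] += 1

print(alloc)  # voice saturates quickly; the slower curves get more units
```

Because voice MOS saturates after a few units, the greedy pass diverts the remaining rate to video and download, which is exactly the cross-application behavior a pure throughput objective misses.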

Streaming video in consumer homes over wireless IEEE 802.11 networks is becoming commonplace. Wireless 802.11 networks pose unique difficulties for streaming high definition (HD), low latency video due to their error-prone physical layer and media access procedures which were not designed for real-time traffic. HD video streaming, even with sophisticated H.264 encoding, is particularly challenging due to the large number of packet fragments per slice. Cross-layer design strategies have been proposed to address the issues of video streaming over 802.11. These designs increase streaming robustness by imposing some degree of monitoring and control over 802.11 parameters from application level, or by making the 802.11 layer media-aware. Important contributions are made, but none of the existing approaches directly take the 802.11 queuing into account. In this paper we take a different approach and propose a cross-layer design allowing direct, expedient control over the wireless packet queue, while obtaining timely feedback on transmission status for each packet in a media flow. This method can be fully implemented on a media sender with no explicit support or changes required to the media client. We assume that due to congestion or deteriorating signal-to-noise levels, the available throughput may drop substantially for extended periods of time, and thus propose video source adaptation methods that allow matching the bit-rate to available throughput. A particular H.264 slice encoding is presented to enable seamless stream switching between streams at multiple bit-rates, and we explore using new computationally efficient transcoding methods when only a high bit-rate stream is available.
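The source-adaptation step can be sketched as picking the highest bit-rate stream that fits under the throughput estimate fed back from the wireless queue, with some headroom. The bit-rate ladder and headroom factor are illustrative assumptions; the paper additionally relies on a special H.264 slice encoding to make the switch seamless.

```python
# Sketch of throughput-matched stream switching for an assumed
# pre-encoded bit-rate ladder.

LADDER_KBPS = [2000, 5000, 8000, 12000]  # hypothetical available streams

def pick_stream(throughput_kbps, headroom=0.8):
    """Choose the best stream whose rate fits under throughput*headroom;
    fall back to the lowest stream when nothing fits."""
    budget = throughput_kbps * headroom
    fitting = [r for r in LADDER_KBPS if r <= budget]
    return max(fitting) if fitting else min(LADDER_KBPS)

print(pick_stream(11000))  # 8000: 8.8 Mbps budget excludes the top stream
print(pick_stream(1500))   # 2000: below the ladder, use the lowest stream
```

The headroom factor absorbs short-term throughput variance so the sender is not forced to switch on every queue-feedback sample.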

Videoconferencing transmission over wireless channels presents relevant challenges in mobile scenarios at vehicular speeds. Previous contributions are focused on the optimization of the transmission of multimedia and delay-sensitive applications over the forward link. In this paper, a new Quality of Service (QoS) parameter adaptation scheme is proposed. This scheme applies the Cross-Layer Design technique on the reverse link of an 1xEV-DO Revision 0 channel. As the wireless channel parameters...

The optimal way to improve Wireless Mesh Network (WMN) performance is to use a better network protocol, but whether layered-protocol design or cross-layer design is the better option for optimizing protocol performance in WMNs is still an ongoing research topic. In this paper, we focus on cross-layer protocol design as the better option with respect to layered protocols. The layered protocol architecture (OSI) model divides networking tasks into layers and defines a set of services for each layer to b...

Full Text Available Fair and efficient scheduling is a key issue in cross-layer design for wireless communication systems, such as 3GPP LTE and WiMAX. However, few works have considered the multiaccess of traffic with differential QoS requirements in wireless systems. In this paper, we consider an OFDMA-based wireless system with four types of traffic associated with differential QoS requirements, namely, minimum reserved rate, maximum sustainable rate, maximum latency, and tolerant jitter. Given these QoS requirements, the traffic scheduling is formulated as a cross-layer optimization problem which, fortunately, is convex. By separating the power allocation through the waterfilling algorithm in each user, this problem further reduces to a kind of continuous quadratic knapsack problem in the base station which yields low complexity. It is then demonstrated that the proposed cross-layer method not only guarantees the application layer QoS requirements, but also minimizes the integrated residual workload in the MAC layer. To further enhance the ability of QoS assurance in heavily loaded scenarios, a call admission control scheme is also proposed. The simulation results show that the QoS requirements for the four types of traffic are guaranteed effectively by the proposed algorithms.
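The per-user power allocation step mentioned above is standard water-filling: power goes preferentially to subcarriers with better gain-to-noise ratios until the budget is spent, with the water level found here by bisection. The channel values and power budget are illustrative assumptions; the paper's contribution is what happens after this separation, in the base station's knapsack problem.

```python
# Standard water-filling sketch for per-user subcarrier power allocation.

def waterfill(inv_gains, total_power, iters=60):
    """inv_gains: noise/|h|^2 per subcarrier (lower is a better channel).
    Bisect the water level mu so allocated power sums to the budget."""
    lo, hi = 0.0, max(inv_gains) + total_power
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - g) for g in inv_gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, lo - g) for g in inv_gains]

p = waterfill([0.5, 1.0, 4.0], total_power=3.0)
print([round(x, 2) for x in p])  # [1.75, 1.25, 0.0]: worst subcarrier unused
```

Note the third subcarrier receives no power at all: below the water level, spending budget there costs more capacity than it buys.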

The interest in wireless communications has grown constantly for the past decades, leading to an enormous number of applications and services embraced by billions of users. In order to meet the increasing demand for mobile Internet access, several high data-rate radio networking technologies have been proposed to offer wide area high-speed wireless communications, eventually replacing fixed (wired) networks for many applications. This thesis considers cross-layer optimization of multi-hop rad...

We propose a MAC-centric cross-layer approach to address the problem of multimedia transmission over cognitive Ultra Wideband (C-UWB) networks. Several fundamental design issues, which are related to the application (APP), medium access control (MAC), and physical (PHY) layers, are discussed. Although

Multiple load cases and the consideration of strength are a reality that most structural designs are exposed to. The improved possibility of producing specific materials, say by fiber lay-up, puts focus on research in free material optimization. A formulation for such design problems, together with a practical recursive design procedure, is presented and illustrated with examples. The presented finite element analyses involve many elements as well as many load cases. Separating the local amount of material from a description with unit trace for the local anisotropy gives the free materials formulation a more physical interpretation of the material constraint.

This book describes novel software concepts to increase reliability under user-defined constraints. The authors’ approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft error resilience on unreliable hardware, while exploiting the inherent error-masking characteristics and the error-mitigation potential (for errors stemming from soft errors, aging, and process variations) at different software layers. · Provides a comprehensive overview of reliability modeling and optimization techniques at different hardware and software levels; · Describes novel optimization techniques for software cross-layer reliability, targeting unreliable hardware.

A Mobile Ad hoc NETwork (MANET) consists of a set of mobile hosts which can operate independently, without infrastructure base stations. Energy saving is a critical issue for MANETs, since most mobile hosts operate on battery power. A cross-layer coordinated framework for energy saving is proposed in this letter. On-demand power management, physical-layer and medium-access-control-layer dialogue-based multi-packet reception, mobile-agent-based topology discovery, and topology-control-based transmit-power-aware and battery-power-aware dynamic source routing are some of the new ideas in this framework.

The LTE standard is a leading standard in the wireless broadband market. Radio Resource Management at the base station plays a major role in satisfying users' demands for high data rates and quality of service. This paper evaluates a cross-layer scheduling algorithm that aims at minimizing resource utilization. The algorithm makes decisions based on channel conditions, the size of transmission buffers, and different quality-of-service demands. Simulation results show that the new algorithm improves resource utilization and provides better guarantees for service quality.
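
A scheduler of this kind can be sketched as a per-user scoring rule. The weighting below is a hypothetical illustration (the abstract does not give the actual metric): each user is scored from its channel quality indicator, queued bytes, and a QoS weight, and the grant goes to the highest score.

```python
# Hedged sketch of a cross-layer scheduling metric (hypothetical weighting,
# not the paper's actual algorithm). Each user is scored by channel quality
# (CQI), queued bytes, and a QoS weight; the resource block is granted to
# the highest-scoring user.

def schedule(users):
    """users: list of dicts with 'cqi', 'buffer_bytes', 'qos_weight'."""
    def score(u):
        # The channel term favours good radio conditions; capping the buffer
        # term at one MTU avoids wasting grants on (near-)empty queues.
        return u["cqi"] * min(u["buffer_bytes"], 1500) * u["qos_weight"]
    return max(range(len(users)), key=lambda i: score(users[i]))

users = [
    {"cqi": 15, "buffer_bytes": 0,    "qos_weight": 1.0},  # great channel, empty queue
    {"cqi": 7,  "buffer_bytes": 9000, "qos_weight": 2.0},  # delay-sensitive, full buffer
    {"cqi": 10, "buffer_bytes": 3000, "qos_weight": 1.0},
]
print(schedule(users))  # index of the user granted the resource block
```

Note how the empty-buffer user loses the grant despite the best channel: this is exactly the cross-layer coupling (PHY channel state plus MAC buffer state) that the abstract describes.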

We propose an adaptive video transmission scheme to achieve unequal error protection in a closed-loop multiple-input multiple-output (MIMO) system for wavelet-based video coding. In this scheme, visual entropy is employed as a video quality metric in agreement with the human visual system (HVS), and the associated visual weight is used to obtain a set of optimal powers in the MIMO system for maximizing the visual quality of the reconstructed video. For ease of cross-layer optimization, the video sequence is divided into several streams, and the visual importance of each stream is quantified using the visual weight. Moreover, an adaptive load balance control, named equal termination scheduling (ETS), is proposed to improve the throughput of visually important data with higher priority. An optimal solution for power allocation is derived in closed form using a Lagrangian relaxation method. In the simulation results, a highly improved visual quality is demonstrated in the reconstructed video via the cross-layer approach by means of visual entropy.
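
The Lagrangian step can be illustrated with a standard weighted water-filling sketch. Assumption (not the paper's exact objective): maximize sum_i w_i*log(1 + g_i*p_i) under a total power budget P, which yields p_i = max(0, w_i/lam - 1/g_i) with the multiplier lam found by bisection; the weights w_i stand in for the visual weights.

```python
# Hedged sketch of weight-driven power allocation via Lagrangian relaxation.
# Assumed objective: maximise sum_i w_i*log(1+g_i*p_i) s.t. sum_i p_i = P,
# giving the weighted water-filling solution p_i = max(0, w_i/lam - 1/g_i).
# The paper's actual visual-entropy objective is not reproduced here.

def allocate(weights, gains, P, iters=100):
    lo, hi = 1e-9, 1e9                 # bracket for the multiplier lam
    for _ in range(iters):
        lam = (lo + hi) / 2
        p = [max(0.0, w / lam - 1.0 / g) for w, g in zip(weights, gains)]
        if sum(p) > P:
            lo = lam                   # powers too large -> raise the "price"
        else:
            hi = lam
    return p

p = allocate(weights=[3.0, 1.0], gains=[1.0, 1.0], P=4.0)
# The visually more important stream (weight 3) receives more power.
print([round(x, 2) for x in p])
```

With equal channel gains the split is driven purely by the visual weights, which is the unequal-error-protection effect the abstract aims for.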

In order to achieve seamless handover for real-time applications in the IP Multimedia Subsystem (IMS) of next-generation networks, a multiprotocol combined handover mechanism is proposed in this paper. We combine the SIP (Session Initiation Protocol), FMIP (Fast Mobile IPv6 Protocol), and MIH (Media Independent Handover) protocols by cross-layer design and optimize those protocols' signaling flows to improve the performance of vertical handover. Theoretical analysis and simulation results illustrate that our proposed mechanism performs better than the original SIP and MIH combined handover mechanism in terms of service interruption time and packet loss.

The next-generation, multi-detector active and passive computed tomography (A&PCT) scanner will be optimized for speed and accuracy. At Lawrence Livermore National Laboratory (LLNL) we have demonstrated the trade-offs between different A&PCT design parameters that affect the speed and quality of the assay results. These fundamental parameters govern the optimum system design. Although the multi-detector scanner design puts priority on speed to increase waste drum throughput, higher speed should not compromise assay accuracy. One way to increase the speed of the A&PCT technology is to use multiple detectors. This yields a linear speedup by a factor approximately equal to the number of detectors used, without a compromise in system accuracy. There are many different design scenarios that can be developed using multiple detectors. Here we describe four different scenarios and discuss the trade-offs between them. Some considerations are also given in this design description for the implementation of a multiple-detector technology in a field-deployable mobile trailer system.

Virtualization technology has attracted more and more attention. As a popular open-source virtualization tool, Xen is used increasingly often, and XSM, the Xen security module, has accordingly drawn widespread attention. XSM does not establish a security-status classification, and it treats the virtual machine as the managed object, making Dom0 a single administrative domain, which violates the principle of least privilege. To address these issues, we design a hybrid multiple-policy model named SV_HMPMD that organically integrates several single security policy models, including DTE, RBAC, and BLP. It can fulfill the confidentiality and integrity requirements of a security model and apply different granularities to different domains. To improve BLP's practicability, the model introduces multi-level security labels. To divide privileges in detail, we combine DTE with RBAC, and to avoid oversized privileges, we limit the privileges of Dom0.

We are rapidly approaching an inflection point where the conventional target of producing perfect, identical transistors that operate without upset can no longer be maintained while continuing to reduce the energy per operation. With power requirements already limiting chip performance, continuing to demand perfect, upset-free transistors would mean the end of scaling benefits. The big challenges in device variability and reliability are driven by uncommon tails in distributions, infrequent upsets, one-size-fits-all technology requirements, and a lack of information about the context of each operation. Solutions co-designed across traditional layer boundaries in our system stack can change the game, allowing architecture and software (a) to compensate for uncommon variation, environments, and events, (b) to pass down invariants and requirements for the computation, and (c) to monitor the health of collections of devices. Cross-layer codesign provides a path to continue extracting benefits from further scaled technologies despite the fact that they may be less predictable and more variable. While some limited multi-layer mitigation strategies do exist, moving forward requires redefining traditional layer abstractions and developing a framework that facilitates cross-layer collaboration.

Most reactive routing protocols in MANETs employ a random delay between rebroadcasting route requests (RREQs) in order to avoid "broadcast storms." However, this can lead to problems such as "next-hop racing" and "rebroadcast redundancy." In addition, existing routing protocols for MANETs usually take a single routing strategy for all flows, which may lead to inefficient use of resources. In this paper we propose a cross-layer route discovery framework (CRDF) to address these problems by exploiting cross-layer information. CRDF solves the above problems efficiently and enables a new technique: routing strategy automation (RoSAuto). RoSAuto refers to the technique whereby each source node automatically decides the routing strategy based on the application requirements, and each intermediate node further adapts the routing strategy so that network resource usage can be optimized. To demonstrate the effectiveness and efficiency of CRDF, we design and evaluate a macrobian route discovery strategy under CRDF.

It is well known that the evolution of 4G-based mobile multimedia network systems will contribute significantly to future mobile healthcare (m-health) applications that require high bandwidth and fast data rates. Central to the success of such emerging applications is the compatibility of broadband networks, such as mobile Worldwide Interoperability for Microwave Access (WiMAX) and High-Speed Uplink Packet Access (HSUPA), and especially their rate adaptation issues combined with acceptable real-time medical quality-of-service requirements. In this paper, we address the relevant challenges of cross-layer design requirements for real-time rate adaptation of ultrasound video streaming in mobile WiMAX and HSUPA networks. A comparative performance analysis of this approach is validated in two experimental m-health test-bed systems, for both mobile WiMAX and HSUPA networks. The experimental results show an improved performance of mobile WiMAX compared to HSUPA using the same cross-layer optimization approach.

In order to reduce the miss rate of wireless capsule endoscopy, we propose in this paper a new endoscopic capsule system with multiple cameras. A master-slave architecture, including an efficient bus architecture and a four-level clock management architecture, is applied for the Multiple Cameras Endoscopic Capsule (MCEC). To cover more area of the gastrointestinal tract wall with low power, multiple cameras with a smart image capture strategy, including movement-sensitive control and camera selection, are used in the MCEC. To reduce the data transfer bandwidth and power consumption and so prolong the MCEC's working life, a low-complexity image compressor with a PSNR of 40.7 dB and a compression rate of 86% is implemented. A chipset is designed and implemented for the MCEC, and a six-camera endoscopic capsule prototype is built using the chipset. With the smart image capture strategy, the coverage rate of the MCEC prototype reaches 98%, and its power consumption is only about 7.1 mW.

Heterogeneous multicast is an efficient communication scheme, especially for multimedia applications running over multihop networks. The term heterogeneous refers to the situation in which multicast receivers in the same session require service at different rates commensurate with their capabilities. In this paper, we address the problem of resource allocation for a set of heterogeneous multicast sessions over multihop wireless networks. We propose an iterative algorithm that achieves the optimal rates for a set of heterogeneous multicast sessions such that the aggregate utility of all sessions is maximized. We present the formulation of the multicast resource allocation problem as a nonlinear optimization model and highlight the cross-layer framework that can solve this problem in a distributed ad hoc network environment with asynchronous computations. Our simulations show that the algorithm achieves optimal resource utilization, guarantees fairness among multicast sessions, provides flexibility in allocating rates over different parts of the multicast sessions, and adapts to changing conditions such as dynamic channel capacity and node mobility. Our results show that the proposed algorithm not only provides flexibility in allocating resources across multicast sessions, but also increases the aggregate system utility and improves the overall system throughput by almost 30% compared to homogeneous multicast.
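
The iterative, utility-maximizing idea can be sketched with a minimal dual-decomposition example. Assumptions (all illustrative, not the paper's model): two sessions with logarithmic utilities share one link of capacity C; a congestion "price" is updated from the excess demand, and each session responds with its utility-optimal rate x_i = w_i/price.

```python
# Hedged sketch of distributed utility-maximising rate allocation in the
# spirit of the iterative algorithm above. Assumption: log utilities on a
# single shared link of capacity C; the multicast topology of the paper
# is omitted.

def allocate_rates(weights, C, steps=2000, gamma=0.01):
    price = 1.0
    for _ in range(steps):
        rates = [w / price for w in weights]                  # per-session best response
        price = max(1e-6, price + gamma * (sum(rates) - C))   # congestion-price update
    return rates

rates = allocate_rates(weights=[1.0, 1.0], C=10.0)
print([round(r, 1) for r in rates])  # equal weights -> the capacity is split evenly
```

The same loop runs with asynchronous, per-node updates in the distributed setting, which is what makes the framework deployable in an ad hoc network.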

An electrical "grid" is a network that carries electricity from power plants to customer premises. The Smart Grid is an assimilation of electrical and communication infrastructure, characterized by a bidirectional flow of electricity and information. The Smart Grid is a complex network with a hierarchical architecture, and realizing the complete Smart Grid architecture necessitates a diverse set of communication standards and protocols. Communication network protocols are engineered and established on the basis of a layered approach, with each layer designed to provide an explicit functionality in association with the other layers. The layered approach can be modified with a cross-layer approach for performance enhancement. The complex and heterogeneous architecture of the Smart Grid demands a deviation from this primitive approach and the working out of an innovative one. This paper describes a joint, or cross-layer, optimization of a Smart Grid home/building area network based on the IEEE 802.11 standard using the RIVERBED OPNET network design and simulation tool. Network performance can be improved by selecting various parameters pertaining to different layers. Simulation results are obtained for parameters such as WLAN throughput, delay, media access delay, and retransmission attempts. The graphical results show that various parameters have divergent effects on network performance. For example, frame aggregation decreases overall delay, but network throughput is also reduced; to overcome this effect, frame aggregation is used in combination with RTS and fragmentation mechanisms, and the results show that this combination notably improves network performance. A higher buffer size considerably increases throughput but also increases delay, so choosing an optimum buffer size is essential for network performance optimization. Parameter optimization significantly enhances the performance of a designed network. This paper is expected to serve

In order to ensure reliable data transmission on the data plane and minimize resource consumption, a novel protection strategy for the data plane is proposed for software-defined optical networks (SDONs). First, we establish an SDON architecture with a hierarchical data plane, which is divided into four layers to obtain fine-grained bandwidth resources. Then, we design cross-layer routing and resource allocation based on this network architecture; by jointly considering the bandwidth resources on all layers, the SDN controller can allocate bandwidth to the working path and backup path in an economical manner. Next, we construct auxiliary graphs and transform the shared protection problem into a graph vertex coloring problem, so that the resource consumption on backup paths can be reduced further. The simulation results demonstrate that the proposed protection strategy achieves lower protection overhead and a higher resource utilization ratio.
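
The coloring step can be illustrated with a generic greedy vertex coloring over a small conflict graph: backup paths that cannot share bandwidth become adjacent vertices, and each color class corresponds to a shareable backup resource. This is standard greedy coloring, not the paper's specific auxiliary-graph construction.

```python
# Hedged sketch of "shared protection as graph colouring": vertices are
# backup paths, edges join paths that must not share a backup resource,
# and a greedy colouring assigns each path the smallest compatible colour
# (i.e. resource). Generic algorithm, illustrative conflict graph.

def greedy_coloring(adj):
    """adj: dict vertex -> set of conflicting vertices."""
    color = {}
    for v in sorted(adj):                     # deterministic visiting order
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:                      # smallest colour not used by neighbours
            c += 1
        color[v] = c
    return color

# Backup paths A and B conflict, as do B and C; A and C may share.
conflicts = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
colors = greedy_coloring(conflicts)
print(colors, "resources used:", max(colors.values()) + 1)
```

Three backup paths end up needing only two backup resources, which is the sharing gain the abstract claims.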

A redox flow battery is provided. The redox flow battery involves multiple-membrane (at least one cation exchange membrane and at least one anion exchange membrane), multiple-electrolyte (one electrolyte in contact with the negative electrode, one electrolyte in contact with the positive electrode, and at least one electrolyte disposed between the two membranes) as the basic characteristic, such as a double-membrane, triple electrolyte (DMTE) configuration or a triple-membrane, quadruple electrolyte (TMQE) configuration. The cation exchange membrane is used to separate the negative or positive electrolyte and the middle electrolyte, and the anion exchange membrane is used to separate the middle electrolyte and the positive or negative electrolyte.

This paper discusses the conceptual design and the development of a preliminary model of a multiple parallel switching (MPS) controller. The introduction of several advanced controllers has widened and improved the control capability of nonlinear dynamical systems. However, it is not possible to uniquely define a controller that always outperforms the others; in many situations, the controller providing the best control action depends on the operating conditions and on the intrinsic properties and behavior of the controlled dynamical system. The desire to combine the control actions of several controllers, with the purpose of continuously attaining the best control action, has motivated the development of the MPS controller. The MPS controller consists of a number of single controllers acting in parallel and an artificial intelligence (AI) based selecting mechanism. The AI selecting mechanism analyzes the output of each controller and implements the one providing the best control performance. An inherent property of the MPS controller is the possibility to discard unreliable controllers while still being able to perform the control action. To demonstrate the feasibility and capability of the MPS controller, the simulation of the on-line operation control of a fast breeder reactor (FBR) evaporator is presented.
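
The parallel-selection idea can be sketched in a few lines. Assumptions (all hypothetical): the AI selector is reduced to a predicted-error score under a one-step model x' = x + u, and an unreliable controller signals failure by returning None so it can be discarded, as the abstract describes.

```python
# Hedged sketch of the MPS idea: candidate controllers run in parallel,
# a selector scores each proposed action against the setpoint, and
# unreliable controllers (here: ones returning None) are discarded.
# The one-step plant model x' = x + u is a hypothetical stand-in for the
# paper's AI-based selection.

def mps_select(controllers, state, setpoint):
    best_u, best_err = None, float("inf")
    for ctrl in controllers:
        u = ctrl(state, setpoint)
        if u is None:                       # unreliable controller: discard
            continue
        err = abs(setpoint - (state + u))   # predicted one-step tracking error
        if err < best_err:
            best_u, best_err = u, err
    return best_u

p_ctrl = lambda x, sp: 0.5 * (sp - x)           # proportional controller
bang   = lambda x, sp: 1.0 if sp > x else -1.0  # bang-bang controller
broken = lambda x, sp: None                     # failed controller
u = mps_select([p_ctrl, bang, broken], state=0.0, setpoint=2.0)
print(u)
```

Even with one controller failed outright, the selector still produces a usable control action, mirroring the fault-tolerance property claimed above.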

This paper presents a robust, dynamic cross-layer wireless communication architecture for wireless networked control systems. Each layer in the proposed protocol architecture contributes to the overall goal of reliable, energy efficient communication. The protocol stack also features a

The issue of adaptive and distributed cross-layer resource allocation for energy efficiency in uplink code-division multiple-access (CDMA) wireless data networks is addressed. The resource allocation problems are formulated as noncooperative games wherein each terminal seeks to maximize its own energy efficiency, namely, the number of reliably transmitted information symbols per unit of energy used for transmission. The focus of this paper is on the adaptive and distributed implementation of policies arising from this approach; that is, it is assumed that only readily available measurements, such as the received data, are available at the receiver in order to play the considered games. Both single-cell and multicell networks are considered. Stochastic implementations of noncooperative games for power allocation, spreading code allocation, and choice of the uplink (linear) receiver are thus proposed, and analytical results describing the convergence properties of selected stochastic algorithms are also given. Extensive simulation results show that, in many instances of practical interest, the proposed stochastic algorithms approach with satisfactory accuracy the performance of nonadaptive games, whose implementation requires much more prior information.
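
A common simplification of such energy-efficiency power games is that each terminal's best response drives its SINR to a fixed target. The sketch below iterates that best response for two terminals; the gains, noise level, and target are illustrative numbers, not the paper's model.

```python
# Hedged sketch of a distributed best-response power update. Assumption:
# in the energy-efficiency game each terminal's best response is to reach
# a target SINR gamma_star, so it rescales its power by gamma_star/SINR.
# Channel gains h, noise, and gamma_star are illustrative values.

def best_response_iteration(h, noise, gamma_star, steps=50):
    p = [1.0] * len(h)
    for _ in range(steps):
        for i in range(len(h)):
            interference = noise + sum(h[j] * p[j] for j in range(len(h)) if j != i)
            sinr = h[i] * p[i] / interference
            p[i] *= gamma_star / sinr       # jump straight to the best response
    return p

p = best_response_iteration(h=[1.0, 0.5], noise=0.1, gamma_star=0.5)
sinr0 = 1.0 * p[0] / (0.1 + 0.5 * p[1])     # verify terminal 0 hit the target
print(round(sinr0, 3))
```

Each terminal updates using only locally measurable quantities (its own SINR), which is the distributed flavor the abstract emphasizes; the stochastic versions in the paper replace these exact measurements with estimates from received data.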

The emergence of mobile and battery-operated multimedia systems and the diversity of supported applications mount new challenges for the design efficiency of these systems, which must provide a maximum application quality of service (QoS) in the presence of a dynamically varying environment. These optimization problems cannot be entirely solved at design time, and some efficiency gains can be obtained at run time by means of self-adaptivity. In this paper, we propose a new cross-layer hardware (HW)/software (SW) adaptation solution for embedded mobile systems. It supports application QoS under real-time and lifetime constraints via coordinated adaptation in the hardware, operating system (OS), and application layers. Our method relies on an original middleware solution used by both global and local managers. The global manager (GM) handles large, long-term variations, whereas the local manager (LM) is used to guarantee real-time constraints. The GM acts in all three layers, whereas the LM acts in the application and OS layers only. The main role of the GM is to select the best configuration for each application to meet the constraints of the system and respect the preferences of the user. The proposed approach has been applied to a 3D graphics application and successfully implemented on an Altera FPGA.

This paper proposes a switching anti-windup design, which aims to enlarge the domain of attraction of the closed-loop system. Multiple anti-windup gains along with an index function that orchestrates the switching among these anti-windup gains are designed based on the min function of multiple

Information flow in a telecommunication network is accomplished through the interaction of mechanisms at various design layers, with the end goal of supporting the information exchange needs of the applications...

The design of a miniature multiple-beam klystron (MBK) working in the Ku-band frequency range is presented in this article. Starting from the main design parameters, the design of the electron gun, the input and output couplers, and the radio frequency (RF) section is presented. The design methodology, using state-of-the-art commercial electromagnetic design tools, analytical formulae, as well as noncommercial design tools, is briefly described.

The complex casting machine has been designed to perform the following techniques: gravity casting, stir casting, squeeze casting, vacuum casting, compocasting and thixoforming. All these casting techniques have been integrated into this complex casting machine as different units which work with the help of automation.

Production systems enabling both cost efficiency and flexibility in terms of high product variation are explored. The study follows an explorative, longitudinal field-study approach. The database consists of three large global corporations, each consisting of several companies producing household ... -outs, worker skills, integration of distribution channels, after-sales service and degree of servitization. Three production system design principles called VXY emerge.

Wireless image transmission is critical in many applications, such as surveillance and environment monitoring. In order to make the best use of the limited energy of battery-operated cameras while satisfying application-level image quality constraints, cross-layer design is critical. In this paper, we develop an image transmission model that allows the application layer (e.g., the user) to specify an image quality constraint, and optimizes the lower-layer parameters of transmit power and packet length to minimize the energy dissipated in image transmission over a given distance. The effectiveness of this approach is evaluated by applying the proposed energy optimization to a reference ZigBee system and a WiFi system, and also by comparing it to an energy optimization study that does not consider any image quality constraint. Evaluations show that our scheme outperforms the default settings of the investigated commercial devices and saves a significant amount of energy at middle-to-large transmission distances.
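
The joint power/packet-length optimization can be sketched as a small grid search over an assumed channel model. Everything below is illustrative: the exponential BER-versus-power model, the overhead, and the data rate are stand-ins, not the paper's measured models.

```python
# Hedged sketch of the joint transmit-power / packet-length search.
# Assumptions (illustrative, not the paper's): BER = 0.5*exp(-k*Ptx),
# fixed per-packet overhead, and an ideal ARQ-free channel, so the cost
# is expected energy per successfully delivered payload bit.
import math

def energy_per_good_bit(ptx, payload, overhead=16, k=2.0, rate=250e3):
    ber = 0.5 * math.exp(-k * ptx)        # assumed BER-vs-power model
    bits = 8 * (payload + overhead)
    psucc = (1.0 - ber) ** bits           # packet success probability
    t_packet = bits / rate                # airtime of one packet (s)
    return (ptx * t_packet) / (psucc * 8 * payload)

best = min(
    ((ptx, L) for ptx in [1, 2, 3, 4, 5] for L in [16, 32, 64, 128]),
    key=lambda c: energy_per_good_bit(*c),
)
print(best)  # (power, payload) pair minimising energy per delivered bit
```

The search captures the cross-layer coupling the abstract describes: too little power wastes energy on retransmissions of lost packets, too much wastes it on the amplifier, and the best packet length shifts with the operating power.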

Although conventional duty-cycle MAC protocols for Wireless Sensor Networks (WSNs), such as RMAC, perform well in terms of saving energy and reducing end-to-end delivery latency, they were designed independently and require an extra routing protocol in the network layer to provide path information to the MAC layer. In this paper, we propose a new cross-layer duty-cycle MAC protocol with data forwarding supporting a pipeline feature (P-MAC) for WSNs. P-MAC first divides the whole network into many grades around the sink. Each node identifies its grade according to its logical hop distance to the sink and simultaneously establishes a sleep/wakeup schedule using the grade information. Nodes in the same grade keep the same schedule, which is staggered with the schedule of the nodes in the adjacent grade. A variation of the RTS/CTS handshake mechanism is then used to forward data continuously, in a pipeline fashion, from the higher-grade to the lower-grade nodes and finally to the sink. No extra routing overhead is needed, thus increasing network scalability while maintaining the superiority of duty cycling. The simulation results in OPNET show that P-MAC performs better than S-MAC and RMAC in terms of packet delivery latency and energy efficiency.
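
The grade construction is essentially a breadth-first search from the sink, with wakeup slots staggered by grade. The sketch below shows that step on a toy four-node chain; the slot arithmetic (grade modulo a small slot count) is an illustrative assumption, not P-MAC's exact schedule.

```python
# Hedged sketch of P-MAC's grade construction: BFS from the sink assigns
# each node a grade equal to its logical hop distance, and nodes stagger
# their wakeup slot by grade so data flows pipeline-fashion toward the
# sink. The modulo slot mapping is illustrative.
from collections import deque

def assign_grades(adj, sink):
    grade = {sink: 0}
    q = deque([sink])
    while q:                                  # plain breadth-first search
        u = q.popleft()
        for v in adj[u]:
            if v not in grade:
                grade[v] = grade[u] + 1
                q.append(v)
    return grade

def wakeup_slot(grade, n_slots=4):
    return grade % n_slots                    # adjacent grades wake in adjacent slots

adj = {"sink": ["a"], "a": ["sink", "b"], "b": ["a", "c"], "c": ["b"]}
grades = assign_grades(adj, "sink")
print(grades)
print([wakeup_slot(grades[n]) for n in ["sink", "a", "b", "c"]])
```

Because each node's forwarding slot immediately follows its upstream neighbour's, a packet generated at the network edge can traverse one grade per slot without waiting a full cycle at every hop, which is the pipeline gain over RMAC.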

Multihop mobile wireless networks have drawn a lot of attention in recent years thanks to their wide applicability in civil and military environments. Since the existing IEEE 802.11 distributed coordination function (DCF) standard does not provide satisfactory access to the wireless medium in multihop mobile networks, we have designed a cross-layer protocol, cross-layer noise-aware power-driven MAC (SNAPdMac), which consists of two parts. The protocol first concentrates on the flexible adjustment of the upper and lower bounds of the contention window (CW) to lower the number of collisions. In addition, it uses a power control scheme, triggered by the medium access control (MAC) layer, to limit the waste of energy and also to decrease the number of collisions. Thanks to a noticeable energy conservation and decrease in the number of collisions, it significantly prolongs the lifetime of the network and delays the death of the first node, while increasing both throughput performance and the sending bit rate/throughput fairness among contending flows.

The flexibility of cognitive and software-defined radio heralds an opportunity for researchers to reexamine how network protocol layers operate with respect to providing quality-of-service-aware transmission among wireless nodes. This opportunity is enhanced by the continued development of spectrally responsive devices, that is, ones that can detect and respond to changes in the radio frequency environment. Present wireless network protocols define reliability and other performance-related tasks narrowly within layers. For example, the frame size employed on 802.11 can substantially influence the throughput, delay, and jitter experienced by an application, but there is no simple way to adapt this parameter. Furthermore, while the data link layer of 802.11 provides error detection capabilities across a link, it does not specify additional features, such as forward error correction schemes, nor does it provide a means for throttling retransmissions at the transport layer (currently, the data link and transport layers can function counterproductively with respect to reliability). This paper presents an analysis of the interaction of physical, data link, and network layer parameters with respect to throughput, bit error rate, delay, and jitter. The goal of this analysis is to identify opportunities where system designers might exploit cross-layer interactions to improve the performance of Voice over IP (VoIP), instant messaging (IM), and file transfer applications.
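
The frame-size observation can be quantified with a textbook effective-throughput model: a larger payload amortizes the fixed header better, but a longer frame is more likely to contain a bit error and be lost. The overhead and rate constants below are illustrative, not 802.11 measurements.

```python
# Hedged sketch quantifying the frame-size trade-off described above.
# Model: effective throughput = link rate * header efficiency * frame
# success probability, with independent bit errors. Constants are
# illustrative assumptions.

def effective_throughput(payload, ber, overhead=34, rate_bps=1_000_000):
    bits = 8 * (payload + overhead)
    frame_success = (1.0 - ber) ** bits      # whole frame must be error-free
    return rate_bps * (payload / (payload + overhead)) * frame_success

for ber in (1e-6, 1e-4):
    best = max([100, 500, 1500], key=lambda L: effective_throughput(L, ber))
    print(f"BER={ber:g}: best payload {best} bytes")
```

The optimum flips with channel quality: clean channels favor large frames, noisy channels favor small ones. That the best frame size depends on a PHY-layer quantity the MAC cannot see is precisely the cross-layer opportunity the paper analyzes.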

of different mappings of tasks to processors (software or hardware) including memory usage, and effects of RTOS selection, including scheduling, synchronization and resource allocation policies. In this presentation we focus on the programmer’s view illustrated through a design space exploration of a multi...

We propose a MAC-centric cross-layer approach to address the problem of multimedia transmission over cognitive Ultra-Wideband (C-UWB) networks. Several fundamental design issues, related to the application (APP), medium access control (MAC), and physical (PHY) layers, are discussed. Although substantial research has been carried out from the PHY-layer perspective of cognitive radio systems, this paper attempts to extend the existing research paradigm to the MAC and APP layers, which can be considered premature at this time. This paper proposes a cross-layer design that is aware of (a) UWB wireless channel conditions, (b) time slot allocations at the MAC layer, and (c) MPEG-4 video at the APP layer. Two cooperative sensing mechanisms, namely AND and OR, are analyzed in terms of the probability of detection, the probability of false alarm, and the required sensing period. Then, the impact of sensing scheduling on MPEG-4 video transmission over wireless cognitive UWB networks is observed. In addition, we also propose a packet reception rate (PRR)-based resource allocation scheme that is aware of the channel condition, target PRR, and queue status.

Wireless mesh networks have seen rapid development over the last few years. However, due to properties such as distributed infrastructure and interference, which strongly affect the performance of wireless mesh networks, the developing technology has to face the challenges of architecture and protocol design. Traditional layered protocols do not function efficiently in multi-hop wireless environments. To get a deeper understanding of the interaction of the layered protocols and optimize the performance...

... time taken for solidification plays an important role in the casting. There should not ... Keywords: Design, Construction, Multiple casting machine, Compo Casting operation. 1. Introduction ... metal and pathway channel pipe with heater is used.

In this paper, we propose an approach to manage network resources for a Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network where nodes monitor scenes with varying levels of motion. It uses cross-layer optimization across the physical layer, the link layer and the application layer. Our technique simultaneously assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network based on one of two criteria that maximize the quality of video of the entire network as a whole, subject to a constraint on the total chip rate. One criterion results in the minimal average end-to-end distortion amongst all nodes, while the other criterion minimizes the maximum distortion of the network. Our approach allows one to determine the capacity of the visual sensor network based on the number of nodes and the quality of video that must be transmitted. For bandwidth-limited applications, one can also determine the minimum bandwidth needed to accommodate a number of nodes with a specific target chip rate. Video captured by a sensor node camera is encoded and decoded using the H.264 video codec by a centralized control unit at the network layer. To reduce the computational complexity of the solution, Universal Rate-Distortion Characteristics (URDCs) are obtained experimentally to relate bit error probabilities to the distortion of corrupted video. Bit error rates are found first by using Viterbi's upper bounds on the bit error probability and second, by simulating nodes transmitting data spread by Total Square Correlation (TSC) codes over a Rayleigh-faded DS-CDMA channel and receiving that data using Auxiliary Vector (AV) filtering.
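
The two allocation criteria can be contrasted with a tiny exhaustive search. Assumptions: each node has a discrete menu of (chip-rate cost, distortion) operating points, standing in for sampled URDCs; the distortion numbers and budget are made up for illustration.

```python
# Hedged sketch of the two criteria described above: over a discrete set
# of per-node operating points subject to a total chip-rate budget, one
# search minimises the average distortion and the other the maximum.
# The (cost, distortion) pairs are made-up URDC-style samples.
from itertools import product

options = [
    [(1, 40.0), (2, 25.0), (3, 18.0)],   # node 0 (high-motion scene)
    [(1, 20.0), (2, 12.0), (3, 9.0)],    # node 1 (low-motion scene)
]
BUDGET = 4  # total chip-rate units available

feasible = [c for c in product(*options)
            if sum(cost for cost, _ in c) <= BUDGET]
min_avg = min(feasible, key=lambda c: sum(d for _, d in c))   # min average distortion
min_max = min(feasible, key=lambda c: max(d for _, d in c))   # min worst-node distortion
print("min-average:", [d for _, d in min_avg])
print("min-max:   ", [d for _, d in min_max])
```

The two criteria pick different allocations: minimizing the average favors the node that converts resources into quality most efficiently, while the min-max criterion spends the budget on the worst-off (high-motion) node, which is the fairness distinction the paper draws.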

The design of multiple bolted connections in accordance with Appendix E of the National Design Specification for Wood Construction (NDS) has incorporated provisions for evaluating localized member failure modes of row and group tear-out when the connections are closely spaced. Originally based on structural glued laminated timber (glulam) members made with all L1...

The IP-over-optical transport network is a very promising networking architecture for the interconnection of geographically distributed data centers, due to its performance guarantees of low delay, huge bandwidth, and high reliability at a low cost. It can enable efficient resource utilization and support heterogeneous bandwidth demands in a highly available, cost-effective, and energy-effective manner. In the case of a cross-layer link failure, ensuring a high level of quality of service (QoS) for user requests after the failure becomes a research focus. In this paper, we propose a novel cross-layer restoration scheme for data center services with software-defined networking based on an IP-over-optical network. The cross-layer restoration scheme enables joint optimization of IP-network and optical-network resources and enhances the responsiveness of data center service restoration to dynamic end-to-end service demands. We quantitatively evaluate the feasibility and performance through simulation under a heavy-traffic-load scenario, in terms of path blocking probability and path restoration latency. Numerical results show that the cross-layer restoration scheme improves the recovery success rate and minimizes the overall recovery time.

Visual sensor networks (VSNs), comprised of battery-operated electronic devices endowed with low-resolution cameras, have expanded the applicability of a series of monitoring applications. These sensors are interconnected by ad hoc, error-prone wireless links, imposing stringent restrictions on the available bandwidth, end-to-end delay, and packet error rates. In such a context, multimedia coding is required for data compression and error resilience, also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSN applications by disrupting the conventional information flow of the protocol layers. When the inner characteristics of multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation, and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks. PMID:22163908

Full Text Available A biometric recognition system is one of the leading candidates for the current and the next generation of smart visual systems. The visual system is the engine of surveillance cameras, which are of great importance for intelligence and security purposes. These surveillance devices can be targeted by adversaries for various malicious scenarios, such as disabling the camera at critical times or preventing the recognition of a criminal. In this work, we propose a cross-layer biometric recognition system that has small computational complexity and is suitable for mobile Internet of Things (IoT) devices. Furthermore, because both hardware and software realize this system in a decussate, chained structure, it is easier to locate and provide alternative paths for the system flow in the case of an attack. For the security analysis of this system, one of its elements, the advanced encryption standard (AES), is infected by four different Hardware Trojans that target different parts of this module. The purpose of these Trojans is to sabotage the biometric data under process by the biometric recognition system. All of the software and hardware modules of this system are implemented using MATLAB and Verilog HDL, respectively. According to the performance evaluation results, the system shows acceptable performance in recognizing healthy biometric data, and it is able to detect infected data as well. With respect to its hardware results, the system may not contribute significantly to the hardware design parameters of a surveillance camera, considering all the hardware elements within the device.

Full Text Available Three different multiple-valued logic (MVL) designs using multiple-peak negative-differential-resistance (NDR) circuits are investigated. The basic NDR element, which is made of several Si-based metal-oxide-semiconductor field-effect-transistor (MOS) and SiGe-based heterojunction-bipolar-transistor (HBT) devices, can be implemented using a standard BiCMOS process. These MVL circuits are designed based on the triggering-pulse control, saw-tooth input signal, and peak-control methods, respectively. However, for the first two methods, transient states exist between the multiple stable levels, and these states might affect circuit function in practical applications. As a result, our proposed peak-control method for the MVL design can be used to overcome these transient states.

The value of digital services is increasingly recognized by owners of digital platforms. These services have a central role in building and sustaining the business of the digital platform. In order to sustain the design of digital services, owners of digital platforms encourage third-party developers ... to tap into and join the digital ecosystem. However, while there is an emerging literature on designing digital services, little empirical evidence exists about challenges faced by third-party developers while designing digital services, in particular for multiple mobile platforms. Drawing on a multiple case study of three mobile application development firms from Sweden, Denmark and Norway, we synthesize the digital service design taxonomy to understand the challenges faced by third-party developers. Our study identifies a set of challenges at four different levels: user level, platform level ...

It has been widely recognised in recent years that parallel modulation of multiple biological targets can be beneficial for the treatment of diseases with complex etiologies, such as cancer, asthma, and psychiatric disease. In this article, current strategies for the generation of ligands with a specific multi-target profile (designed multiple ligands, or DMLs) are described and a number of illustrative examples are given. Designing multiple ligands is frequently a challenging endeavour for medicinal chemists, who need to appropriately balance affinity for two or more targets whilst obtaining physicochemical and pharmacokinetic properties consistent with the administration of an oral drug. Given that the properties of DMLs are influenced to a large extent by the proteomic superfamily to which the targets belong and the lead generation strategy that is pursued, an early assessment of the feasibility of any given DML project is essential.

Full text: The analysis of active samples on a regular basis for ambient air activity and floor contamination from the radiochemical lab accounts for a major share of the operational activity among a Health Physicist's responsibilities. The requirement for daily air sample analysis, with immediate and delayed counting, from various labs, in addition to smear swipe check samples of the labs, motivated the development of a system that could handle multiple sample analysis in a time-programmed manner from a single sample loading. A multiple alpha/beta counting system was designed and fabricated. It has arrangements for loading 10 samples in slots in order, which are counted in a time-programmed manner, with results displayed and records maintained on a PC. The paper describes the design and development of the multiple sample counting setup presently in use at the facility, which has resulted in a reduction of the man-hours consumed in counting and recording the results.

Full Text Available In distributed systems, real-time optimizations need to be performed dynamically for better utilization of network resources. Real-time optimizations can be performed effectively by using Cross-Layer Optimization (CLO) within the network operating system. This paper presents a performance evaluation of Cross-Layer Optimization (CLO) in comparison with the traditional approach of Single-Layer Optimization (SLO). In the parallel implementation of the two approaches, the experimental study carried out indicates that CLO results in a significant improvement in network utilization compared to SLO. A variant of the Particle Swarm Optimization technique that utilizes Digital Pheromones (PSODP) for better performance has been used here. A significantly higher speed-up in performance was observed from the parallel implementation of CLO using PSODP on a cluster of nodes.

Wireless sensor networks deployed in coal mines could help companies provide workers in coal mines with better-qualified working conditions. With the underground information collected by sensor nodes at hand, underground working conditions can be evaluated more precisely. However, sensor nodes tend to malfunction due to their limited energy supply. In this paper, we study the cross-layer optimization problem for wireless rechargeable sensor networks implemented in coal mines, whose energy can be replenished through the recently developed wireless energy transfer technique. The main results of this article are two-fold: firstly, we obtain the optimal relay node placement according to the minimum overall energy consumption criterion through the Lagrange dual problem and KKT conditions; secondly, the optimal strategies for recharging locomotives and wireless sensor networks are acquired by solving a cross-layer optimization problem. The cyclic nature of these strategies is also demonstrated through simulations in this paper. PMID:26828500
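As a minimal illustration of the relay-placement idea (a toy model, not the paper's actual Lagrangian formulation): with transmit energy growing as distance to a path-loss exponent, the energy-minimizing position of a single relay between a sensor and the sink is found by searching the resulting convex cost, and for this symmetric model it lands at the midpoint.

```python
# Toy sketch of minimum-energy relay placement (model and numbers are
# assumptions, not the paper's formulation): a sensor sits at x = 0, the
# sink at x = D, and per-hop energy is d**alpha with alpha > 1.  The total
# cost is convex in the relay position, so a ternary search finds the
# optimum, which for this symmetric model is the midpoint.

def total_energy(x, D=100.0, alpha=2.0):
    """Energy to cover both hops when the relay sits at position x."""
    return x**alpha + (D - x)**alpha

def optimal_relay_position(D=100.0, alpha=2.0, iters=200):
    lo, hi = 0.0, D
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if total_energy(m1, D, alpha) < total_energy(m2, D, alpha):
            hi = m2          # minimum lies in [lo, m2]
        else:
            lo = m1          # minimum lies in [m1, hi]
    return (lo + hi) / 2

best = optimal_relay_position()
```

The paper instead solves the general problem analytically via the Lagrange dual and KKT conditions; the search above just makes the shape of the objective concrete.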

To maximize throughput and to satisfy users' requirements in cognitive radios, a cross-layer optimization problem combining adaptive modulation and power control at the physical layer and truncated automatic repeat request at the medium access control layer is proposed. Simulation results show the combination of power control, adaptive modulation, and truncated automatic repeat request can regulate transmitter powers and increase the total throughput effectively.
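The interaction between adaptive modulation and truncated ARQ can be sketched in a few lines (the mode table and PER values below are invented for illustration): with at most N_max retransmissions, a packet is lost only if all N_max + 1 attempts fail, so the residual PER is PER^(N_max + 1), and the cross-layer choice is the modulation level with the largest resulting goodput.

```python
# Illustrative sketch of AM + truncated ARQ mode selection (mode rates and
# PER values are made-up examples): residual PER after N_max retransmissions
# is PER**(N_max + 1); pick the mode maximizing rate * (1 - residual PER).

def residual_per(per, n_max):
    return per ** (n_max + 1)

def best_mode(modes, n_max=2):
    """modes: list of (bits_per_symbol, per_at_current_snr)."""
    def goodput(mode):
        rate, per = mode
        return rate * (1.0 - residual_per(per, n_max))
    return max(modes, key=goodput)

# (bits/symbol, PER at the current SNR) -- hypothetical channel state
modes = [(1, 0.001), (2, 0.01), (4, 0.2), (6, 0.6)]
chosen = best_mode(modes)
```

Note how ARQ relaxes the physical-layer error requirement: even the 64-QAM-like mode with PER 0.6 wins here, because three attempts drive its residual PER down to 0.216.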

Designing multiple robot work cells is a very knowledge-intensive, intricate, and time-consuming process. This paper elaborates the development process of a computer-aided design program for generating multiple robot work cells that offers a user-friendly interface. The primary purpose of this work is to provide a fast and easy platform with lower cost and human involvement and minimal trial-and-error adjustment. The automated platform is constructed based on the variant-shaped configuration concept with its mathematical model. A robot work cell layout, system components, and the construction procedure of the automated platform are discussed in this paper, where the integration of these items automatically provides the optimum robot work cell design according to the information set by the user. This system is implemented on top of CATIA V5 software and utilises its Part Design, Assembly Design, and Macro tools. The current outcomes of this work provide a basis for future investigation in developing a flexible configuration system for multiple robot work cells.

Full Text Available Assuming a wireless ad hoc network consisting of homogeneous video users with each of them also serving as a possible relay node for other users, we propose a cross-layer rate-control scheme based on an analytical study of how the effective video transmission rate is affected by the prevailing operating parameters, such as the interference environment, the number of transmission hops to a destination, and the packet loss rate. Furthermore, in order to provide error-resilient video delivery over such wireless ad hoc networks, a cross-layer joint source-channel coding (JSCC approach, to be used in conjunction with rate-control, is proposed and investigated. This approach attempts to optimally apply the appropriate channel coding rate given the constraints imposed by the effective transmission rate obtained from the proposed rate-control scheme, the allowable real-time video play-out delay, and the prevailing channel conditions. Simulation results are provided which demonstrate the effectiveness of the proposed cross-layer combined rate-control and JSCC approach.
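The core JSCC trade-off described above can be made concrete with a small sketch (the distortion and loss models are assumptions for illustration, not the paper's models): given an effective transmission rate R from the rate-control scheme, a lower channel code rate r leaves fewer source bits (r·R) but reduces residual packet loss, so the decoder-side quality is optimized by scanning candidate code rates.

```python
# Hedged sketch of the JSCC rate split (both component models are invented
# stand-ins): minimize an end-to-end distortion proxy
#   D(r) = source_distortion(r * R) + loss_penalty * residual_loss(r)
# over a small set of candidate channel code rates r.
import math

def source_distortion(source_rate):
    # Classic exponential rate-distortion shape, D ~ 2^(-2R), as a stand-in.
    return 2.0 ** (-2.0 * source_rate)

def residual_loss(code_rate):
    # Assumed channel model: stronger codes (smaller r) lose fewer packets.
    return min(1.0, 0.5 * math.exp(-8.0 * (1.0 - code_rate)))

def pick_code_rate(total_rate, candidates=(0.5, 0.66, 0.75, 0.9, 1.0),
                   loss_penalty=2.0):
    def distortion(r):
        return source_distortion(r * total_rate) + loss_penalty * residual_loss(r)
    return min(candidates, key=distortion)

r_star = pick_code_rate(total_rate=2.0)
```

Under these assumed curves the optimum is an intermediate code rate: no protection (r = 1.0) loses too many packets, while maximum protection (r = 0.5) starves the source coder.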

Full Text Available In this paper, the problem of cognitive radar (CR) waveform optimization design for target detection and estimation in multiple-extended-target situations is investigated. This problem is analyzed in signal-dependent interference, as well as additive channel noise, for extended targets with unknown target impulse response (TIR). To address this problem, an improved algorithm is employed for target detection by maximizing the detection probability of the received echo on the premise of ensuring the TIR estimation precision. In this algorithm, an additional weight vector is introduced to achieve a trade-off among different targets. Both the TIR estimate and the transmit waveform can be updated at each step based on the previous step. Under the same constraints on waveform energy and bandwidth, the information-theoretic approach is also considered. In addition, the relationship between the waveforms designed under the two criteria is discussed. Unlike most existing works that consider only a single target with temporally correlated characteristics, waveform design for multiple extended targets is considered in this method. Simulation results demonstrate that, compared with a linear frequency modulated (LFM) signal, waveforms designed based on the maximum detection probability and maximum mutual information (MI) criteria can make radar echoes contain more multiple-target information and thus improve radar performance.
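For the maximum-MI criterion mentioned above, the textbook Gaussian-model solution allocates transmit energy over frequency by water-filling on the target-to-noise spectral ratio; the sketch below (with invented spectra) shows that allocation via bisection on the water level. This is the generic water-filling recipe, not the authors' specific multi-target algorithm.

```python
# Hedged sketch of MI-driven energy allocation (spectra are illustrative):
# for a Gaussian model, per-bin energy is e_i = max(0, mu - 1/g_i), where
# g_i is the target-variance-to-noise ratio in bin i and the water level mu
# is set by the total-energy constraint.  We find mu by bisection.

def water_fill(gains, total_energy, iters=100):
    inv = [1.0 / g for g in gains]
    lo, hi = 0.0, max(inv) + total_energy   # bracket the water level
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - v) for v in inv)
        if used > total_energy:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2
    return [max(0.0, mu - v) for v in inv]

# Three bins with decreasing target-to-noise ratio; weak bins get less (or
# zero) energy.
energies = water_fill([2.0, 1.0, 0.25], total_energy=3.0)
```

With these numbers the water level settles at 2.25, so the weakest bin (1/g = 4) receives no energy at all, which is the characteristic water-filling behavior.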

In order to design and realize an efficient building refurbishment, it is necessary to carry out an exhaustive investigation of all solutions that form it. The efficiency level of the considered building refurbishment depends on a great many factors, including: cost of refurbishment, annual fuel economy after refurbishment, tentative pay-back time, harmfulness to health of the materials used, aesthetics, maintenance properties, functionality, comfort, sound insulation and longevity, etc. Solutions of an alternative character allow for a more rational and realistic assessment of economic, ecological, legislative, climatic, social and political conditions and traditions, for better satisfaction of customer requirements, and for cutting down on refurbishment costs. In carrying out the multivariant design and multiple-criteria analysis of a building refurbishment, much data must be processed and evaluated; feasible alternatives may number as many as 100,000. How to perform a multivariant design and multiple-criteria analysis of the alternatives based on this enormous amount of information became the problem. Methods for the multivariant design and multiple-criteria analysis of a building refurbishment were developed by the authors to solve the above problems. In order to demonstrate the developed method, a practical example is presented in this paper. (author)
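A weighted-sum score is one of the simplest multiple-criteria ranking schemes and conveys the flavor of comparing refurbishment alternatives; the criteria, weights and scores below are invented for illustration and are not the authors' method or data.

```python
# Minimal multiple-criteria comparison sketch (all numbers are hypothetical):
# each alternative is scored on criteria normalized to [0, 1] (larger is
# better) and ranked by a weighted sum; weights sum to 1.

def weighted_score(scores, weights):
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights))

# criteria order: (cost economy, fuel economy, aesthetics, longevity)
weights = (0.4, 0.3, 0.1, 0.2)
alternatives = {
    "mineral wool + render": (0.7, 0.8, 0.5, 0.6),
    "ventilated facade":     (0.5, 0.9, 0.9, 0.9),
    "do minimum":            (1.0, 0.2, 0.3, 0.3),
}
ranked = sorted(alternatives,
                key=lambda a: weighted_score(alternatives[a], weights),
                reverse=True)
best = ranked[0]
```

Real multivariant design multiplies such evaluations across tens of thousands of feasible combinations, which is exactly why the authors needed systematic methods rather than manual comparison.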

The rapid development of metasurfaces has enabled numerous intriguing applications with acoustically thin sheets. Here we report the theory and experimental realization of a nonresonant sound-absorbing strategy using metasurfaces by harnessing multiple internal reflections. We theoretically and numerically show that the higher-order diffraction of thin gradient-index metasurfaces is tied to multiple internal reflections inside the unit cells. Highly absorbing acoustic metasurfaces can be realized by enforcing multiple internal reflections together with a small amount of loss. A reflective gradient-index acoustic metasurface is designed based on the theory, and we further experimentally verify the performance using a three-dimensional printed prototype. Measurements show over 99% energy absorption at the peak frequency and a 95% energy absorption bandwidth of around 600 Hz. The proposed mechanism provides an alternative route for sound absorption without the necessity of high absorption of the individual unit cells.

The conceptual design of heavy ion fusion drivers has now reached a state where the overall approach has become fairly clear. One design features an RF linac plus current- and beam-multiplication rings. The present remarks concern the assignment of multiturn injection, beam storage and bunching to an optimized number of rings and transport lines, as well as some criteria for their design. The main parameter constraints are discussed, showing how they can be met, although there is little flexibility at the present stage of understanding and technology. A shortened version of this report is scheduled for presentation at the ''INS International Symposium on Heavy Ion Accelerators and Their Application to Inertial Fusion'', Tokyo, January 23-27, 1984. (author)

Coordinated flight allows the replacement of a single monolithic spacecraft with multiple smaller ones, based on the principle of distributed systems. According to the mission objectives, and to ensure safe relative motion, constraints on the relative distances need to be satisfied. Initially, differential perturbations are limited by proper orbit design. Then, the induced differential drifts can be properly handled through corrective maneuvers. In this work, several designs are surveyed, defining the initial configuration of a group of spacecraft while counteracting the differential perturbations. For each of the investigated designs, focus is placed upon the number of deployable spacecraft and on the possibility of ensuring safe relative motion through station keeping of the initial configuration, with particular attention to the required ΔV budget and the constraint violations.

Atmospheric probes have been successfully flown to planets and moons in the solar system to conduct in situ measurements. They include the Pioneer Venus multi-probes, the Galileo Jupiter probe, and the Huygens probe. Probe mission concepts to five destinations, including Venus, Jupiter, Saturn, Uranus, and Neptune, have all utilized similarly shaped aeroshells and concepts of operations, namely a 45-degree sphere-cone shape with high-density heatshield material and a parachute system for extracting the descent vehicle from the aeroshell. Each concept designed its probe to meet specific mission requirements and to optimize mass, volume, and cost. At the 2017 International Planetary Probe Workshop (IPPW), NASA Headquarters postulated that a common aeroshell design could be used successfully for multiple destinations and missions. This "common probe" design could even be assembled in multiple copies, properly stored, and made available for future NASA missions, potentially realizing savings in cost and schedule and reducing the risk of losing technologies and skills difficult to sustain over decades. Thus the NASA Planetary Science Division funded a study to investigate whether a common probe design could meet most, if not all, mission needs at the five planetary destinations with extreme entry environments. The Common Probe study involved four NASA Centers and addressed these issues, including the constraints and inefficiencies that occur in specifying a common design. Study methodology: First, a notional payload of instruments for each destination was defined based on priority measurements from the Planetary Science Decadal Survey. Steep and shallow entry flight path angles (EFPA) were defined for each planet based on qualification and operational g-load limits for current, state-of-the-art instruments. Interplanetary trajectories were then identified for a bounding range of EFPA. Next, 3-degrees-of-freedom simulations for entry trajectories were run using the entry state

The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination. PMID:29565313
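The content-selection part of the explicit caching problem has the shape of a 0/1 knapsack: pick contents to store at a BS, under a cache-capacity budget, to maximize expected QoE. The sketch below (with invented sizes and QoE values) shows that reduced problem; the paper's full formulation additionally couples recommendation and delivery.

```python
# Knapsack-style caching sketch (sizes, popularities and QoE values are
# illustrative assumptions, not from the paper): maximize the popularity-
# weighted QoE gain of locally cached contents under a capacity budget,
# via standard 0/1 knapsack dynamic programming.

def cache_selection(items, capacity):
    """items: list of (size, expected_qoe). Returns max total expected QoE."""
    best = [0.0] * (capacity + 1)
    for size, qoe in items:
        # iterate capacity downwards so each item is used at most once
        for c in range(capacity, size - 1, -1):
            best[c] = max(best[c], best[c - size] + qoe)
    return best[capacity]

# (size in cache units, popularity * per-hit QoE gain)
catalog = [(4, 10.0), (3, 7.0), (2, 5.0), (5, 9.0)]
total_qoe = cache_selection(catalog, capacity=7)
```

With capacity 7 the best choice is the first two contents (sizes 4 + 3, QoE 17), beating the higher-value-per-item alternative of sizes 5 + 2 (QoE 14).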

Full Text Available We focus on wireless multimedia communication and investigate how cross-layer information can be used to improve performance at the application layer, using JPEG2000 as an example. The cross-layer information is in the form of soft information from the physical layer. The soft information, which is supplied by a soft decision demodulator, yields reliability measures for the received bits and is fed into two soft input iterative JPEG2000 image decoders. When errors are detected with the error detecting mechanisms in JPEG2000, the decoders utilize the soft information to point out likely transmission errors. Hence, the decoders can correct errors and increase the image quality without making time-consuming retransmissions. We believe that the proposed decoding method utilizing soft information is suitable for a general IP-based network and that it keeps the principles of a layered structure of the protocol stack intact. Further, experimental results with images transmitted over a simulated wireless channel show that a simple decoding algorithm that utilizes soft information can give high gains in image quality compared to the standard hard-decision decoding.
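A Chase-style sketch conveys how soft information can repair a detected error without retransmission (the codeword and parity check below are toy constructs, not JPEG2000's actual error-detection mechanisms): when a check fails, flip the least-reliable bits first, in order of |LLR|, until the check passes.

```python
# Toy soft-decision repair sketch (even parity stands in for JPEG2000's
# error-detecting mechanisms; LLR values are invented): the demodulator's
# reliability measures identify which received bits to flip first.

def parity_ok(bits):
    return sum(bits) % 2 == 0   # toy even-parity "error detection"

def soft_repair(bits, llrs, max_flips=2):
    bits = list(bits)
    if parity_ok(bits):
        return bits
    # candidate indices sorted by reliability, least reliable first
    order = sorted(range(len(bits)), key=lambda i: abs(llrs[i]))
    for i in order[:max_flips]:
        bits[i] ^= 1
        if parity_ok(bits):
            return bits
        bits[i] ^= 1            # undo and try the next candidate
    return bits                  # give up; a real system might retransmit

received = [1, 0, 1, 1]             # parity fails (three ones)
llrs     = [5.2, -4.1, 0.3, 6.0]    # bit 2 is the least reliable
decoded = soft_repair(received, llrs)
```

The iterative JPEG2000 decoders in the paper use the same principle at a much finer granularity, pointing soft information at likely error positions inside code-blocks.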

A new design method for constructing optimum-weight composite laminates for multiple loads is proposed in this paper. A netting analysis approach is used to develop an optimization procedure. Three ply orientations permit development of an optimum laminate design without using stress-strain relations. It is proved that stresses in the minimum-weight laminate reach allowable values in each ply under the given load. The optimum ply thickness is defined by the maximum value among the tensile and compressive loads. Two examples are given to obtain optimum ply orientations, thicknesses and materials. For comparison purposes, stresses are calculated for the orthotropic material using classical lamination theory. Based upon these calculations, the matrix degrades at 30 to 50% of the ultimate load. There is no fiber failure, and the laminates therefore withstand all applied loads in both examples.

Building on a preliminary case study of the Danish educational publisher Systime A/S and its flagship product, the web-based ‘iBog’/‘iBook’, this article explores how digital textbooks can be understood as design. The shaping of digital books is seen as being intertwined in a wider circuit ... the reorganization of the publishing company, web-based user interfaces, and ultimately the branding that markets these new digital objects are building powerful discourses around the product. Thus it is suggested that the design process of the iBog case can be understood in a model of database-based publishing with multiple levels. In the final analysis, the iBog is much more than a product and a technology. It is a brand that goes beyond what can be studied by looking at the digital textbook as a singular artefact.

modified energy per good bit (MEPG) metric, with respect to the spectrum sharing user’s transmission power and media access frame length. The cellular users, legacy users, are protected by an outage probability constraint. To optimize the non

.... They present an analytical formulation of the energy expenditure associated with the communication overhead of key management, and highlight its dependence on network topology and key distribution method...

Borehole microseismic monitoring of hydraulic fracture treatments of unconventional reservoirs is a widely used method in the oil and gas industry. Sometimes, the quality of the acquired microseismic data is poor, and one of the reasons is poor survey design. We attempt to provide a comprehensive and thorough workflow, using multiple criteria decision analysis (MCDA), to optimize the planning of microseismic monitoring. So far, microseismic monitoring has been used extensively as a powerful tool for determining the fracture parameters that affect the influx of formation fluids into the wellbore. The factors that affect the quality of microseismic data and their final results include the average distance between microseismic events and receivers, the complexity of the recorded wavefield, the signal-to-noise ratio, the data aperture, etc. These criteria often conflict with each other. In a typical microseismic monitoring campaign, those factors should be considered to choose the best monitoring well(s), the optimum number of required geophones, and their depths. We use MCDA to address these design challenges and develop a method that offers an optimized design out of all possible combinations to produce the best data acquisition results. We believe that this is the first research to include the above-mentioned factors in a 3D model. Such a tool would assist companies and practicing engineers in choosing the best design parameters for future microseismic projects.

A method for analyzing shock conservation in test specifications that have been tailored to qualify a structure for multiple design environments is discussed. Shock test conservation is quantified for shock response spectra, shock intensity spectra and ranked peak acceleration data in terms of an Index of Conservation (IOC) and an Overtest Factor (OTF). The multi-environment conservation analysis addresses the issue of both absolute and average conservation. The method is demonstrated in a case where four laboratory tests have been specified to qualify a component which must survive seven different field environments. Final judgment of the tailored test specification is shown to require an understanding of the predominant failure modes of the test item.

Full Text Available This paper addresses the problem of the evaluation of the delay distribution via analytical means in IEEE 802.11 wireless ad hoc networks. We show that the asymptotic delay distribution can be expressed as a power law. Based on the latter result, we present a cross-layer delay estimation protocol and we derive new delay-distribution-based routing algorithms, which are well adapted to the QoS requirements of real-time multimedia applications. In fact, multimedia services are not sensitive to average delays, but rather to the asymptotic delay distributions. Indeed, video streaming applications drop frames when they are received beyond a delay threshold, determined by the buffer size. Although delay-distribution-based routing is an NP-hard problem, we show that it can be solved in polynomial time when the delay threshold is large, because of the asymptotic power law distribution of the link delays.
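The delay-distribution-based routing idea can be sketched under the article's asymptotic power-law model P(D > t) ≈ c·t^(−α) (the per-link constants below are invented): for a large play-out threshold t, independent link delays in series give a route tail probability well approximated by the sum of per-link tails, so the best route minimizes that sum rather than the mean delay.

```python
# Sketch of delay-distribution-based route selection (per-link tail
# constants c and exponents alpha are hypothetical): for large t the
# probability of exceeding the play-out deadline over a multi-hop route is
# asymptotically the sum of the per-link power-law tails.

def tail_prob(c, alpha, t):
    return c * t ** (-alpha)

def route_violation_prob(links, t):
    """links: list of (c, alpha) per hop; asymptotic series-route tail."""
    return sum(tail_prob(c, alpha, t) for c, alpha in links)

routes = {
    "short-but-heavy-tail": [(2.0, 1.2)],              # one hop, fat tail
    "longer-light-tail":    [(1.0, 2.0), (1.0, 2.0)],  # two hops, thin tails
}
t = 100.0   # play-out deadline (buffer-determined threshold)
best_route = min(routes, key=lambda r: route_violation_prob(routes[r], t))
```

This illustrates the article's point that a route with more hops (and likely a larger mean delay) can still be preferable for video streaming if its delay tails decay faster.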

Vertical handover decision algorithms based on user preferences, coupled with Media Independent Handover (MIH) local triggers, have not been explored much in the literature. We have developed a comprehensive cross-layer solution, called the Vertical Handover Decision (VHOD) approach, which consists of three parts: a mechanism for collecting and storing user preferences, the Vertical Handover Decision (VHOD) algorithm, and the MIH Function (MIHF). The MIHF triggers the VHOD algorithm, which operates on user preferences to issue handover commands to the mobility management protocol. The VHOD algorithm is an MIH User and therefore needs to subscribe to events and configure thresholds for receiving triggers from the MIHF. In this regard, we have performed experiments in WLAN to suggest thresholds for the Link Going Down trigger. We have also critically evaluated the handover decision process, proposed a just-in-time interface activation technique, compared our proposed approach with prominent user-centric approaches, and analyzed our approach from different aspects.

Full Text Available Abstract We investigate the resource allocation problem for the downlink of a multiple-input multiple-output orthogonal frequency division multiple access (MIMO-OFDMA) system. Sum rate maximization by itself cannot ensure fairness among users. Hence, we address this problem in the context of the utility-based resource allocation presented in earlier papers. This resource allocation method makes it possible to enhance efficiency and guarantee fairness among users by exploiting multiuser diversity, frequency diversity, and time diversity. In this paper, we treat the overall utility as the quality of service indicator and design utility functions with respect to the average transmission rate in order to simultaneously provide two services, real-time and best-effort. Since the optimal solutions are extremely computationally complex to obtain, we propose a suboptimal joint subchannel and power control algorithm that converges very fast and reduces the MIMO resource allocation problem to a single-input single-output resource allocation problem. Simulation results indicate that the proposed method achieves near-optimum solutions, and the available resources are distributed more fairly among users.

Full Text Available Real-time streaming media over wireless networks is a challenging proposition due to the characteristics of video data and wireless channels. In this paper, we propose a set of cross-layer techniques for adaptive real-time video streaming over wireless networks. The adaptation is done with respect to both channel and data. The proposed novel packetization scheme constructs the application layer packet in such a way that it is decomposed exactly into an integer number of equal-sized radio link protocol (RLP packets. FEC codes are applied within an application packet at the RLP packet level rather than across different application packets and thus reduce delay at the receiver. A priority-based ARQ, together with a scheduling algorithm, is applied at the application layer to retransmit only the corrupted RLP packets within an application layer packet. Our approach combines the flexibility and programmability of application layer adaptations, with low delay and bandwidth efficiency of link layer techniques. Socket-level simulations are presented to verify the effectiveness of our approach.
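The packetization scheme's central arithmetic is simple enough to sketch (header and RLP sizes below are example values, not the paper's): pad the application layer packet so that, including its header, it splits into an exact integer number of equal-sized RLP packets; a corrupted RLP packet then damages only one application packet and can be retransmitted on its own.

```python
# Packetization sketch (header and RLP sizes are hypothetical examples):
# round the application packet up to a whole number of RLP packets so that
# application and link-layer boundaries align exactly.

def padded_app_packet_size(payload, header=12, rlp_size=96):
    total = payload + header
    n_rlp = -(-total // rlp_size)        # ceiling division
    return n_rlp * rlp_size, n_rlp       # (padded size, RLP packet count)

size, n = padded_app_packet_size(payload=500)
```

This alignment is what lets the priority-based ARQ retransmit only the corrupted RLP packets within an application packet, and lets RLP-level FEC stay confined to one application packet, avoiding the delay of interleaving across packets.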

In the active queue management (AQM) scheme, core routers cannot manage and constrain user datagram protocol (UDP) data flows through the sliding-window control mechanism of the transport layer, due to the nonresponsive nature of such traffic. However, UDP traffic occupies a large part of network services nowadays, which poses a great challenge to the stability of increasingly complex networks. To solve this uncontrollability problem, this paper proposes a cross-layer random early detection (CLRED) scheme, which can effectively control the rate of nonresponsive UDP-like flows when congestion occurs at the access point (AP). CLRED makes use of the MAC frame acknowledgement (ACK) to transmit congestion information to the source nodes, and utilizes the back-off windows of the MAC layer to throttle the data rate. Consequently, the data rate of UDP-like flows can be restrained in time by the source nodes, alleviating congestion in complex networks. The proposed CLRED can effectively constrain nonresponsive flows and keep communication flowing smoothly, so that the network remains stable. Simulation results from network simulator-2 (NS2) verify the proposed CLRED scheme.
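The queue-level mechanism CLRED builds on is classic RED, which is compact enough to sketch (the thresholds and weights below are conventional example values, not the paper's): the drop/marking probability grows linearly between a minimum and a maximum threshold of an exponentially weighted average queue length.

```python
# Minimal RED sketch (parameter values are conventional examples): the
# average queue length is an EWMA of instantaneous samples, and the early
# drop probability ramps linearly from 0 at min_th to max_p at max_th.

def ewma(avg, sample, w=0.002):
    """Exponentially weighted moving average of the queue length."""
    return (1 - w) * avg + w * sample

def red_drop_prob(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0               # forced drop above the maximum threshold
    return max_p * (avg_q - min_th) / (max_th - min_th)

p_mid = red_drop_prob(10.0)      # halfway between the thresholds
```

CLRED reuses this early-detection signal but, instead of silently dropping, feeds it back through MAC-layer ACKs and back-off windows so that even nonresponsive UDP-like sources slow down.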

Multi-flow carrier aggregation (CA) has recently been considered to meet the increasing demand for high data rates. In this paper, we investigate the cross-layer performance of multi-flow CA for macro user equipments (MUEs) in the expanded range (ER) of small cells. We develop a fork/join (F/J) queuing analytical model that takes into account the time-varying channels, the channel scheduling algorithm, partial CQI feedback, and the number of component carriers deployed at each tier. Our model also accounts for stochastic packet arrivals and the packet scheduling mechanism. The analytical model developed in this paper can be used to gauge various packet-level performance parameters, e.g., packet loss probability (PLP) and queuing delay. For the queuing delay, our model takes out-of-sequence packet delivery into consideration. The developed model can also be used to find the amount of CQI feedback and the packet scheduling of a particular MUE needed to offload as much traffic as possible from the macrocells to the small cells while maintaining the MUE's quality of service (QoS) requirements.

Full Text Available This paper focuses on optimizing the aggregate throughput of the distributed coordination function (DCF) employing the basic access mechanism at the data link layer of IEEE 802.11 protocols. We consider general operating conditions accounting for both nonsaturated and saturated traffic in the presence of transmission channel errors, as captured by the packet error rate. The key idea of this work stems from the relation that links the aggregate throughput of the network to the packet rate of the contending stations. In particular, we show that the aggregate throughput presents two clearly distinct operating regions, depending on the actual value of the packet rate with respect to a critical value theoretically derived in this work. This behavior paves the way to a cross-layer optimization algorithm, which proves effective for maximizing the aggregate throughput in a variety of network operating conditions. A nice consequence of the proposed optimization framework is that the aggregate throughput can be predicted quite accurately with a simple, yet effective, closed-form expression. Finally, theoretical and simulation results are presented in order to unveil, as well as verify, the key ideas.
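The two operating regions can be illustrated with a deliberately simplified toy model (not the paper's derivation): below a critical per-station packet rate the network is nonsaturated and aggregate throughput grows linearly with offered load; above it, throughput clamps at the saturation value. The saturation throughput, payload size, and error handling below are illustrative assumptions.

```python
def aggregate_throughput(pkt_rate, n_stations, payload_bits=8000,
                         sat_throughput_bps=5e6, per=0.0):
    """Toy aggregate throughput: offered load, clamped at saturation."""
    offered = n_stations * pkt_rate * payload_bits * (1 - per)
    return min(offered, sat_throughput_bps)

def critical_rate(n_stations, payload_bits=8000,
                  sat_throughput_bps=5e6, per=0.0):
    """Packet rate at which the toy model transitions to saturation."""
    return sat_throughput_bps / (n_stations * payload_bits * (1 - per))
```

A cross-layer controller in this spirit would keep the per-station packet rate near the critical value, where throughput is maximal without the collision penalty of deep saturation.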

Full Text Available In this study, we developed network and throughput formulation models and proposed a new routing protocol algorithm with a cross-layer scheme based on signal-to-noise ratio (SNR). This method is an enhancement of the ad hoc on-demand distance vector (AODV) routing protocol. The proposed scheme selects routes based on an SNR threshold in the reverse-route mechanism. We developed AODV SNR-selective route (AODV SNR-SR) as a mechanism better than AODV SNR, i.e., the routing protocol that uses the average or sum of path SNR, and also better than hop-count-based AODV. We also used a selective reverse route based on the SNR mechanism, replacing the earlier method, to avoid routing overhead. The simulation results show that AODV SNR-SR outperforms AODV SNR and AODV in terms of throughput, end-to-end delay, and routing overhead. The proposed method is expected to support Device-to-Device (D2D) communications, which are concerned with channel-quality awareness, in the development of the future Fifth Generation (5G) networks.
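One plausible reading of the SNR-selective rule can be sketched as follows (an illustrative interpretation, not the authors' code): a candidate reverse route qualifies only if every link's SNR exceeds the threshold, and among qualifying routes the one with the best bottleneck (minimum-link) SNR wins. The 15 dB threshold is an assumed default.

```python
def select_route(routes, snr_threshold_db=15.0):
    """routes: list of routes, each a list of per-link SNR values in dB.
    Returns the index of the best qualifying route, or None if no route
    clears the threshold on every link."""
    best, best_bottleneck = None, float("-inf")
    for i, link_snrs in enumerate(routes):
        bottleneck = min(link_snrs)          # weakest link dominates
        if bottleneck >= snr_threshold_db and bottleneck > best_bottleneck:
            best, best_bottleneck = i, bottleneck
    return best
```

Using the bottleneck rather than the average avoids the failure mode the abstract attributes to AODV SNR, where one very weak link can hide behind otherwise strong links.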

Full Text Available In wireless sensor networks (WSNs), there are numerous factors that may cause network congestion problems, such as many-to-one communication modes, mutual interference of wireless links, dynamic changes of network topology, and the memory-constrained nature of nodes. All these factors make a network more vulnerable to congestion. In this paper, a cross-layer active predictive congestion control scheme (CL-APCC) for improving network performance is proposed. Queuing theory is applied in CL-APCC to analyze the data flows of a single node according to its memory status, combined with an analysis of the average occupied memory size of the local network. The scheme also analyzes current data-change trends of the local network to forecast and proactively adjust the node's sending rate for the next period. To ensure fairness and timeliness, the IEEE 802.11 protocol is revised based on waiting time, the number of a node's neighbors, and the original priority of data packets, dynamically adjusting the node's sending priority. The performance of CL-APCC, evaluated by extensive simulation experiments, is more efficient in relieving congestion in WSNs. Furthermore, the proposed scheme has an outstanding advantage in improving the fairness and lifetime of networks.
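The predictive rate-adjustment step of such a scheme can be sketched minimally as follows (a CL-APCC-style sketch under assumed parameters, not the published algorithm): extrapolate buffer occupancy one period ahead from recent samples, and scale the next-period sending rate so the predicted occupancy stays below a target fraction of the buffer.

```python
def predict_occupancy(samples):
    """Linear extrapolation one period ahead from the last two samples."""
    if len(samples) < 2:
        return samples[-1]
    return samples[-1] + (samples[-1] - samples[-2])

def next_rate(cur_rate, samples, buffer_size, target=0.7):
    """Proactively back off before the buffer overflows (target is assumed)."""
    predicted = predict_occupancy(samples)
    if predicted > target * buffer_size:
        # back off proportionally to the predicted overshoot
        return cur_rate * target * buffer_size / predicted
    return cur_rate
```

Acting on the predicted rather than the current occupancy is what makes the control "active": the rate is reduced before packets are actually dropped.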

Full Text Available 3G Long Term Evolution (LTE) introduces stringent requirements for providing different kinds of traffic with quality of service (QoS) guarantees. A major problem is that LTE does not mandate a single scheduling algorithm that would ideally control the assignment of resources and thereby improve user satisfaction. This remains an open topic, and the various scheduling algorithms that have been proposed are quite challenging and complex. To address this issue, in this paper we investigate how our proposed algorithm improves user satisfaction for heterogeneous traffic, that is, best-effort traffic such as file transfer protocol (FTP) and real-time traffic such as voice over internet protocol (VoIP). Our proposed algorithm is formulated using a cross-layer technique. Its goal is to maximize the expected total user satisfaction (total utility) under different constraints. We compared our proposed algorithm with proportional fair (PF), exponential proportional fair (EXP-PF), and U-delay. In simulations, our proposed algorithm improved the performance of real-time traffic in terms of throughput, VoIP delay, and VoIP packet loss ratio, while PF improved the performance of best-effort traffic in terms of FTP traffic received, FTP packet loss ratio, and FTP throughput.
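A utility-maximizing cross-layer scheduler in this spirit can be sketched as follows. This is a hedged illustration, not the authors' formulation: each resource block goes to the flow with the largest marginal utility, where real-time (VoIP-like) flows use a delay-sensitive utility and best-effort (FTP-like) flows use a proportional-fair-style throughput utility. The utility shapes and the urgency weight are illustrative assumptions.

```python
import math

def marginal_utility(flow):
    """Marginal utility of granting the next resource block to this flow."""
    if flow["type"] == "realtime":
        # utility rises steeply as head-of-line delay nears the deadline
        urgency = flow["hol_delay"] / flow["deadline"]
        return flow["rate"] * math.exp(3.0 * urgency)  # weight 3.0 assumed
    # best effort: proportional-fair style (instant rate over average)
    return flow["rate"] / max(flow["avg_throughput"], 1e-9)

def schedule_rb(flows):
    """Return the index of the flow that gets the next resource block."""
    return max(range(len(flows)), key=lambda i: marginal_utility(flows[i]))
```

With this shape, a VoIP flow whose packets approach their deadline preempts best-effort traffic, while slack VoIP flows let FTP flows with good channels proceed.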

The theoretical work presented in this manuscript addresses two complementary issues in coherent atom optics. The first part addresses the perspectives offered by coherent atomic sources through the design of two experiments involving the levitation of a cold atomic sample in a periodic series of light pulses, for which coherent atomic clouds are particularly well-suited. These systems act as multiple-wave atom interferometers. A striking feature of these experiments is that a single system performs both the trapping and the interrogation of the sample. To obtain transverse confinement, a novel atomic lens is proposed, relying on the interaction of an atomic wave with a spherical light wave. The sensitivity of the sample trapping to the gravitational acceleration and to the pulse frequencies is exploited to perform the desired measurement. These devices constitute atomic-wave resonators in momentum space, a novel concept in atom optics. The second part develops new theoretical tools, most of them inspired from optics, well-suited to describe the propagation of coherent atomic sources. A phase-space approach to propagation, relying on the evolution of moments, is developed and applied to study the low-energy dynamics of Bose-Einstein condensates. The ABCD method of propagation for atomic waves is extended beyond the linear regime to account perturbatively for mean-field atomic interactions in the atom-optical aberration-less approximation. A treatment of atom-laser extraction enabling one to describe aberrations in the atomic beam, developed in collaboration with the Atom Optics group at the Institute of Optics, is presented. Last, a quality factor suitable for the characterization of dilute matter waves in a general propagation regime is proposed. (author)

Recent developments in electronics and wireless communications have enabled the development of low-power, low-cost wireless sensor networks (WSNs). One of the most important challenges in WSNs is to increase the network lifetime due to the limited energy capacity of the network nodes. Another major challenge in WSNs is the hot spots that emerge as locations under heavy traffic load. Nodes in such areas quickly drain energy resources, leading to disconnection in network services. In such an environment, cross-layer cluster-based energy-efficient algorithms (CCBE) can prolong the network lifetime and energy efficiency. CCBE is based on clustering the nodes into hexagonal structures. A hexagonal cluster consists of cluster members (CMs) and a cluster head (CH). The CHs are selected from the CMs based on nodes near the optimal CH distance and the residual energy of the nodes. Additionally, the optimal CH distance that corresponds to optimal energy consumption is derived. To balance the energy consumption and the traffic load in the network, the CHs are rotated among all CMs. In WSNs, energy is mostly consumed during transmission and reception. Transmission collisions can further decrease the energy efficiency. These collisions can be avoided by using a contention-free protocol during the transmission period. Additionally, the CH allocates slots to the CMs based on their residual energy to increase sleep time. Furthermore, the energy consumption of the CH can be further reduced by data aggregation. In this paper, we propose a data aggregation level based on the residual energy of the CH and a cost-aware decision scheme for the fusion of data. Performance results show that the CCBE scheme performs better in terms of network lifetime, energy consumption and throughput compared to low-energy adaptive clustering hierarchy (LEACH) and hybrid energy-efficient distributed clustering (HEED).
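The CH-selection rule described above can be sketched as a simple weighted score (an illustrative reading; the 60/40 weighting and normalization are assumptions, not the paper's derivation): candidates are ranked by residual energy and by how close they sit to the derived optimal CH distance.

```python
def ch_score(residual_energy, dist_to_opt, e_max, d_max, w_energy=0.6):
    """Score a CH candidate: high residual energy, close to optimal distance."""
    energy_term = residual_energy / e_max
    distance_term = 1.0 - min(dist_to_opt / d_max, 1.0)
    return w_energy * energy_term + (1 - w_energy) * distance_term

def elect_ch(candidates, e_max, d_max):
    """candidates: list of (node_id, residual_energy, dist_to_opt) tuples.
    Returns the id of the highest-scoring candidate."""
    return max(candidates,
               key=lambda c: ch_score(c[1], c[2], e_max, d_max))[0]
```

Rotating the role by re-running the election each round, as the abstract describes, spreads the CH energy burden across the cluster members.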

Operations management should be more concerned with its relationship to design and with how the interplay between design processes and operations can be managed. The design of products and services has huge implications for operations in different ways: design can increase the value of products; influence and lead to innovation of manufacturing processes; have implications for supply-chain processes; and affect the product life cycle and sustainability. To fully exploit these opportunities, we claim that it is useful for managers to be aware of the different ways that design processes might be perceived and managed. Illustrated with examples.

It was recently shown that delta-sigma quantization (DSQ) can be used for optimal multiple description (MD) coding of Gaussian sources. The DSQ scheme combined oversampling, prediction, and noise-shaping in order to trade off side distortion for central distortion in MD coding. It was shown that ...

This case study explores how a Chinese-American novice teacher acted as mediator in a telecollaboration with student teacher (ST) peers in the USA who designed tasks for his English as a foreign language (EFL) learners in China. The novice teacher was instrumental in mediating the student teachers' task design process by providing feedback…

In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs to improve reliability over error-prone and unreliable wireless links, where the transmission power is assumed to be identical across the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss- and delay-sensitive WSNs. The main contribution of the COOR scheme is making full use of the remaining energy in the network to increase the transmission power of most nodes, which provides higher communication reliability or a longer transmission distance. Two optimization strategies of the COOR scheme, referred to as COOR(R) and COOR(P), are proposed to improve network performance. When increasing the transmission power, the COOR(R) strategy chooses a node that has higher communication reliability at the same distance, compared with traditional opportunistic routing, when selecting the next-hop candidate node. Since the reliability of data transmission is improved, the delay of data reaching the sink is reduced by shortening the communication time between candidate nodes. On the other hand, the COOR(P) strategy prefers a node that has the same communication reliability at a longer distance. As a result, network performance can be improved for the following reasons: (a) the delay is reduced, as fewer hops are needed for a packet to reach the sink under longer transmission distances; (b) the reliability can be improved, since it is the product of the reliability of every hop of the routing path.
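The two candidate-selection strategies can be sketched as follows, assuming each candidate next hop is characterized by its per-hop delivery reliability and its forward progress toward the sink. This is an illustrative reading of the scheme, not the published algorithm.

```python
def coor_r(candidates, distance):
    """COOR(R): among candidates at (approximately) the given distance,
    pick the one with the highest link reliability."""
    same_dist = [c for c in candidates if abs(c["dist"] - distance) < 1e-9]
    return max(same_dist, key=lambda c: c["rel"])["id"] if same_dist else None

def coor_p(candidates, reliability):
    """COOR(P): among candidates meeting the reliability target,
    pick the one with the longest forward progress (fewer hops to sink)."""
    ok = [c for c in candidates if c["rel"] >= reliability]
    return max(ok, key=lambda c: c["dist"])["id"] if ok else None

def path_reliability(hop_reliabilities):
    """End-to-end reliability is the product of per-hop reliabilities,
    which is why improving each hop compounds along the path."""
    p = 1.0
    for r in hop_reliabilities:
        p *= r
    return p
```

`path_reliability` makes reason (b) concrete: raising every hop from 0.9 to 0.95 over a 10-hop path lifts end-to-end reliability from roughly 0.35 to roughly 0.60.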

The IEEE 802.11 MAC standard for wireless ad hoc networks adopts the Binary Exponential Back-off (BEB) mechanism to resolve bandwidth contention between stations. BEB controls the bandwidth allocation for each station by choosing a back-off value uniformly at random from one to CW, where CW is the contention window size. However, in asymmetric multi-hop networks, some stations are disadvantaged in their opportunity to access the shared channel and may suffer severe throughput degradation when the traffic load is large, degrading network throughput and fairness. In this paper, we propose a new cross-layer scheme aiming to solve the per-flow unfairness problem and achieve good throughput performance in IEEE 802.11 multi-hop ad hoc networks. Our cross-layer scheme collects useful information from the physical, MAC, and link layers of its own station. This information is used to determine the optimal CW size for per-station fairness, and to adjust the CW size for each flow in the station in order to achieve per-flow fairness. The performance of our cross-layer scheme is examined on various asymmetric multi-hop network topologies using Network Simulator 2 (NS-2).
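One way such a CW adaptation could work is sketched below (a hedged illustration under assumed constants, not the authors' scheme): a station that observes itself above its fair throughput share enlarges its CW (backing off more), a disadvantaged station shrinks it, and the station-level CW is then split among flows by weight.

```python
def station_cw(cw_base, my_share, fair_share, cw_min=16, cw_max=1024):
    """Grow CW for stations above their fair throughput share, shrink it
    for disadvantaged ones (proportional rule is an assumption)."""
    cw = cw_base * (my_share / fair_share) if fair_share > 0 else cw_base
    return max(cw_min, min(int(cw), cw_max))

def per_flow_cw(station_cw_value, flow_weights):
    """Split access aggressiveness among a station's flows: heavier flows
    get a smaller CW and hence more frequent channel access."""
    total = sum(flow_weights)
    n = len(flow_weights)
    return [max(1, int(station_cw_value * total / (w * n)))
            for w in flow_weights]
```

The clamping to [cw_min, cw_max] mirrors the standard's bounded contention window; the proportional scaling is the cross-layer feedback the abstract describes.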

This design study investigated the use of multiplication and division problems to help 5-year-old children develop an early understanding of multiplication and division. One teacher and her class of 15 5-year-old children were involved in a collaborative partnership with the researchers. The design study was conducted over two 4-week periods in…

In a controlled multiple-case design study, the development of a therapeutic relationship and its role in affect regulation were studied in 6 children with visual disabilities, severe intellectual disabilities, severe challenging behavior, and prolonged social deprivation. In the 1st phase,

Two systems for seismic base isolation are presented. The main feature of these systems is that, instead of only one isolation frequency as in conventional isolation systems, they are designed to have two distinct isolation frequencies. When the responses during an earthquake exceed the design value(s), the system will automatically and passively shift to the second isolation frequency. Responses of these two systems to different ground motions, including a harmonic motion with frequency equal to the primary isolation frequency, show that no excessive amplification will occur. Adoption of these new systems will greatly enhance the safety and reliability of an isolated superstructure against future strong earthquakes. 3 refs

We illustrate the utility of some recently derived transfer matrix methods for electromagnetic scattering calculations in systems composed of coated spherical scatterers. Any of the spherical coatings, cores, or host media may be composed of absorbing materials. Our formulae permit the calculation of local absorption in either orientation fixed or orientation averaged situations. We introduce methods for estimating the macroscopic transport properties of such media, and show how our scattering calculations can permit 'design' optimization of macroscopic properties

The present invention includes a radiation-hardened sequential circuit, such as a bistable circuit, flip-flop, or other suitable design, that presents substantial immunity to ionizing radiation while maintaining a low operating voltage. In one embodiment, the circuit includes a plurality of logic elements that operate on a relatively low voltage, and master and slave latches, each having storage elements that operate on a relatively high voltage.

A method is developed to design a tank reactor in which a network of reactions is carried out. The network is a combination of parallel and consecutive reactions. The method ensures unique operation. Dimensionless groups are used which are either representative of properties of the reaction system

Full Text Available A multi-goal layout problem may be formulated as a quadratic assignment model, considering multiple goals (or factors), both qualitative and quantitative, in the objective function. The facilities layout problem in general ranges from the location and layout of facilities in a manufacturing plant to the location and layout of textual and graphical user-interface components in the human-computer interface. In this paper, we propose two alternative mathematical approaches to the single-objective layout model. The first presents a multi-goal user-interface component layout problem, considering the distance-weighted sum of congruent objectives of closeness relationships and interactions. The second considers the distance-weighted sum of congruent objectives of normalized weighted closeness relationships and normalized weighted interactions. The results of the first approach are compared with those of an existing single-objective model for the example task under consideration. Then, the results of the first and second approaches of the proposed model are compared for the same example task.
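The distance-weighted multi-goal objective can be made concrete with a small sketch (an illustrative reading of the model; the equal weighting of the two goals is an assumption): for an assignment of components to locations, sum distance times the weighted combination of closeness and interaction over all component pairs.

```python
def layout_cost(assign, dist, closeness, interaction, w1=0.5, w2=0.5):
    """Quadratic-assignment-style multi-goal layout cost.
    assign[i] = location index of component i;
    dist[a][b] = distance between locations a and b;
    closeness/interaction = component-pair weight matrices."""
    n = len(assign)
    cost = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                cost += dist[assign[i]][assign[j]] * (
                    w1 * closeness[i][j] + w2 * interaction[i][j])
    return cost
```

Minimizing this cost over permutations of `assign` is the (NP-hard) quadratic assignment problem; in practice heuristics such as pairwise exchange or simulated annealing are used.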

Recently there has been a lot of interest in developing network communication schemes for carrying digital data between locally distributed computing stations. Many of these schemes have focused on distributed networking techniques for data processing applications. These applications suggest the use of a serial, multipoint bus, where a number of remote intelligent units act as slaves to a central or host computer. Each slave would be serially addressable from the host and would perform required operations upon being addressed by the host. Based on an MK3873 single-chip microcomputer, the SCU 20 is designed to be such a remote slave device. The capabilities of the SCU 20 and its use in systems applications are examined.

Software architecture design is challenging, especially for junior software designers. Lacking practice and experience, junior designers need process support in order to make rational architecture decisions. In this paper, we present the results of a comparative multiple-case study conducted to find out if decision viewpoints from van Heesch et al. (2012, in press) can provide such support.

A finite element based programming system for minimum weight design of a truss-type structure subjected to displacement, stress, and lower and upper bounds on design variables is presented. The programming system consists of a number of independent processors, each performing a specific task. These processors, however, are interfaced through a well-organized data base, thus making the tasks of modifying, updating, or expanding the programming system much easier in a friendly environment provided by many inexpensive personal computers. The proposed software can be viewed as an important step in achieving a 'dummy' finite element for optimization. The programming system has been implemented on both large and small computers (such as VAX, CYBER, IBM-PC, and APPLE) although the focus is on the latter. Examples are presented to demonstrate the capabilities of the code. The present programming system can be used stand-alone or as part of the multilevel decomposition procedure to obtain optimum design for very large scale structural systems. Furthermore, other related research areas such as developing optimization algorithms (or in the larger level: a structural synthesis program) for future trends in using parallel computers may also benefit from this study.

Abstract Different communication schemes for Wireless Body Area Networks (WBAN) aim to achieve a fair tradeoff between efficient energy consumption and the accomplishment of performance metrics. Among these schemes are cross-layer protocols, which constitute a good choice for achieving this tradeoff by introducing novel protocol techniques that depart from the traditional communication model. In this work we assessed the performance of a WBAN cross-layer protocol stack by...

This study examined the effects of a graphing task analysis using the Microsoft[R] Office Excel 2007 program on the single-subject multiple baseline graphing skills of three university graduate students. Using a multiple probe across participants design, the study demonstrated a functional relationship between the number of correct graphing…

People with multiple chronic conditions often struggle with managing their health. The purpose of this research was to identify specific challenges of patients with multiple chronic conditions and to use the findings to form design principles for a telemonitoring system tailored for these patients. Semi-structured interviews with 15 patients with multiple chronic conditions and 10 clinicians were conducted to gain an understanding of their needs and preferences for a smartphone-based telemonitoring system. The interviews were analyzed using a conventional content analysis technique, resulting in six themes. Design principles developed from the themes included that the system must be modular to accommodate various combinations of conditions, reinforce a routine, consolidate record keeping, as well as provide actionable feedback to the patients. Designing an application for multiple chronic conditions is complex due to variability in patient conditions, and therefore, design principles developed in this study can help with future innovations aimed to help manage this population.

Full Text Available This paper deals with the performance of wireless local area networks supporting video streaming applications, based on the MPEG-2 video codec, in the presence of interference. In particular, IEEE 802.11g wireless networks that do not support QoS in accordance with the IEEE 802.11e standard are considered, and Bluetooth signals, additive white Gaussian noise, and competitive data traffic are taken as sources of interference. The goal is twofold: on one side, experimentally assessing and correlating the values that some performance metrics assume simultaneously at different layers of an IEEE 802.11g WLAN delivering video streaming in the presence of in-channel interference; on the other side, deducing helpful and practical hints for designers and technicians, in order to efficiently assess and enhance the performance of an IEEE 802.11g WLAN supporting video streaming under suitable setup conditions and in the presence of interference. To this purpose, an experimental analysis is planned following a cross-layer measurement approach, and a proper testbed within a semi-anechoic chamber is used. Valuable results are obtained in terms of signal-to-interference ratio, packet loss ratio, jitter, video quality, and interference data rate; helpful hints for designers and technicians are finally given.

Most studies on offshore wind farm design assume a uniform wind farm, which consists of an identical type of wind turbines. In order to further reduce the cost of energy, we investigate the design of non-uniform offshore wind farms, i.e., wind farms with multiple types of wind turbines and hub-he...

A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.

To support 5G communication, the cloud radio access network is a paradigm introduced by operators that aggregates the computational resources of all base stations into a cloud BBU pool. The interactions between RRHs and BBUs, and resource scheduling among BBUs in the cloud, have become more frequent and complex with the growth of system scale and user requirements. This promotes the networking demand among RRHs and BBUs and drives the formation of elastic optical fiber switching and networking. In such a network, the multiple stratum resources of radio, optical and BBU processing units are interwoven with each other. In this paper, we propose a novel multiple stratum optimization (MSO) architecture for cloud-based radio over optical fiber networks (C-RoFN) with software defined networking. Additionally, a global evaluation strategy (GES) is introduced in the proposed architecture. MSO can enhance the responsiveness to end-to-end user demands and globally optimize radio frequency, optical spectrum and BBU processing resources effectively to maximize radio coverage. The feasibility and efficiency of the proposed architecture with the GES strategy are experimentally verified on an OpenFlow-enabled testbed in terms of resource occupation and path provisioning latency.

Full Text Available Optimization of energy consumption in Wireless Sensor Network (WSN) nodes has become a critical factor constraining the engineering application of the smart grid, since the smart grid is characterized by long-distance transmission in a special environment. The paper proposes a linear hierarchical network topological structure specific to WSN energy conservation in environmental monitoring of the long-distance electric transmission lines in the smart grid. Based on the topological structural characteristics and optimization of network layers, the paper also proposes a Topological Structure by Layered Configurations (TSLC) routing algorithm to improve the quality of WSN data transmission performance. Coprocessing of the network layer and the media access control (MAC) layer is achieved by using the cross-layer design method, giving the network layer access to node status and obtaining the status of the network nodes at the MAC layer. The approach efficiently saves the energy of the whole network, improves the quality of the network service performance, and prolongs the life cycle of the network.

Full Text Available In IEEE 802.11 wireless LANs, bandwidth is not fairly shared among stations because of the distributed coordination function (DCF) mechanism in the IEEE 802.11 MAC protocol. This introduces per-flow and per-station unfairness problems between uplink and downlink flows, as the uplink flows usually dominate the downlink flows. In addition, some users may run greedy applications such as video streaming, which may prevent other applications from connecting to the Internet. In this article, we propose an adaptive cross-layer bandwidth allocation mechanism to provide per-station and per-flow fairness. To verify its effectiveness and scalability, our scheme is implemented on a wireless access router, and numerous experiments in a typical wireless environment with both TCP and UDP traffic are conducted to evaluate the performance of the proposed scheme.

Multiple load cases and the consideration of strength is a reality that most structural designs are exposed to. Improved possibilities to produce specific materials, say by fiber lay-up, put focus on research on free material optimization. A formulation for such design problems, together with a practical recursive design procedure, is presented and illustrated with examples. The presented finite element analyses involve many elements as well as many load cases. Separating the local amount of material from a description with unit trace for the local anisotropy gives the free material formulation...

Full Text Available The new consistent scheme for the designation of objects in binary and multiple systems, BSDB, is described. It was developed within the framework of the Binary star DataBase, BDB (http://www.inasan.ru), owing to the necessity of a unified and consistent system for the designation of objects in the database; the name of the designation scheme was derived from that of the database. The BSDB scheme covers all types of observational data. Three classes of objects introduced within the BSDB nomenclature provide correct links between objects and data, which is especially important for complex multiple stellar systems. The final stage of establishing the BSDB scheme is the compilation of the Identification List of Binaries, ILB, where all known objects in binary and multiple stars are presented with their BSDB identifiers along with identifiers according to major catalogues and lists.

the optimal boundary points in closed form to choose the AMC transmission modes by taking into account the channel state information from the secondary transmitter to both the primary receiver and the secondary receiver. Moreover, numerical results

works addressed the link layer performance of AM with truncated ARQ but without packet combining. In addition, previously proposed AM algorithms are not optimal and can provide poor performance when packet combining is implemented. Herein, we first show

The general design plan for the implementation of a common user interface to multiple remote information systems within a microcomputer-based environment is presented. The intent is to provide a framework for the development of detailed specifications which will be used as guidelines for the actual development of the system.

The design plan for the personal computer multiple information system interface (PC/MISI) project is discussed. The document is intended to be used as a blueprint for the implementation of the system. Each component is described in the detail necessary to allow programmers to implement the system. A description of the system data flow and system file structures is given.

The guinea pig maximization test (GPMT) is usually performed with one moderately irritant induction dose of the allergen and gives a qualitative assessment-hazard identification-of the allergenicity of the chemical. We refined the GPMT by applying a multiple dose design and used 30 guinea pigs in...

In this paper, we present a design of a smart textile pressure mat to study the pressure distribution with multiple foam material configurations for neonatal monitoring at Neonatal Intensive Care Units (NICU). A smart textile mat with 64 pressure sensors has been developed including software at the

data from time domain simulations. Then a coordinated approach for multiple PSS selection and parameter design based on residue method is proposed and realized in MATLAB m-files. Particle swarm optimization (PSO) is adopted in the coordination process. The IEEE 39-bus New England system model...

This paper treats the selection of controller gains of a servo system as a multiple-criteria decision problem. In contrast to the usual optimization-based approaches to computer-aided design, inequality constraints are included in the problem as unconstrained objectives. This considerably simplifies

Intrusion detection system (IDS) design for mobile ad hoc networks (MANET) is a crucial component for maintaining the integrity of the network. The need for rapid deployment of IDS capability with minimal data availability for training and testing is an important requirement of such systems, especially for MANETs deployed in highly dynamic scenarios, such as battlefields. This work proposes a two-level detection scheme for detecting malicious nodes in MANETs. The first level deploys dedicated sniffers working in promiscuous mode. Each sniffer utilizes a decision-tree-based classifier that generates quantities which we refer to as correctly classified instances (CCIs) every reporting time. In the second level, the CCIs are sent to an algorithmically run supernode that calculates quantities, which we refer to as the accumulated measure of fluctuation (AMoF) of the received CCIs for each node under test (NUT). A key concept used in this work is that the variance of the smaller population, which represents the malicious nodes in the network, is greater than the variance of the larger population, which represents the normal nodes. A linear regression process is then performed in parallel with the calculation of the AMoF for fitting purposes and to set a proper threshold based on the slope of the fitted lines. As a result, the malicious nodes are efficiently and effectively separated from the normal nodes. The proposed scheme is tested for various node velocities and power levels and shows promising detection performance even at low power levels. The results presented also apply to wireless sensor networks (WSN) and represent a novel IDS scheme for such networks.
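
The second-level computation above lends itself to a compact sketch. The following is a hedged illustration, not the paper's implementation: the CCI streams are synthetic, the AMoF is taken here as a running sum of absolute deviations of each report from the running mean, and the thresholding idea is reduced to comparing the slopes of lines fitted to the AMoF curves.

```python
import random
import statistics

def amof(cci_series):
    """Accumulated measure of fluctuation (assumed form): running sum of
    absolute deviations of each CCI report from the mean observed so far."""
    acc, out, total = 0.0, [], 0.0
    for t, c in enumerate(cci_series, 1):
        total += c
        acc += abs(c - total / t)
        out.append(acc)
    return out

def slope(ys):
    """Least-squares slope of ys against the reporting index 0..n-1."""
    n = len(ys)
    xbar, ybar = (n - 1) / 2, statistics.fmean(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

random.seed(1)
# Normal nodes: CCIs cluster tightly; malicious nodes fluctuate widely.
normal = [random.gauss(90, 2) for _ in range(200)]
malicious = [random.gauss(70, 15) for _ in range(200)]

s_norm = slope(amof(normal))
s_mal = slope(amof(malicious))
# The malicious node's AMoF grows faster, so its fitted slope is larger,
# which is what allows a slope-based threshold to separate the populations.
```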

Multiplication on traditional electronic computers suffers from limited calculation accuracy and long computation delays. To overcome these problems, a modified signed digit (MSD) multiplication routine is established based on the MSD number system and a carry-free adder, and its parallel algorithm and optimization techniques are studied in detail. Exploiting the characteristics of a ternary optical computer, a structured data processor is designed especially for the multiplication routine. Several ternary optical operators are constructed to perform M transformations and summations in parallel, which accelerates the iterative process of multiplication. In particular, the routine allocates the data bits of the ternary optical processor according to the digits of the multiplication inputs, so the accuracy of the calculation results can always satisfy the users. Finally, the routine is verified by simulation experiments, and the results are in full compliance with expectations. Compared with an electronic computer, the MSD multiplication routine is not only good at dealing with large-value data and high-precision arithmetic, but also maintains lower power consumption and shorter computation delays.
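
As a rough software analogue of the routine described above, the sketch below represents numbers as signed digits in {-1, 0, 1} and multiplies by summing shifted partial products. The digit set and the shift-and-add structure follow the MSD idea; the carry-normalizing adder here is a sequential stand-in for the optical carry-free T/W transforms, which this sketch does not model.

```python
def to_msd(n):
    """Integer -> signed-digit (MSD) digits in {-1, 0, 1}, least significant
    first, using the non-adjacent-form recurrence."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)      # choose +1 or -1 so (n - d) is divisible by 4
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits or [0]

def msd_to_int(digits):
    return sum(d << i for i, d in enumerate(digits))

def msd_add(a, b):
    """Digit-wise signed-digit addition. A ternary optical processor would
    apply the carry-free T/W transforms here; this software stand-in
    normalizes each digit sum back into {-1, 0, 1} with an ordinary carry."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        t = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        if t >= 2:
            carry, d = 1, t - 2
        elif t <= -2:
            carry, d = -1, t + 2
        else:
            carry, d = 0, t
        out.append(d)
    if carry:
        out.append(carry)
    return out

def msd_mul(a, b):
    """Shift-and-add multiplication over MSD partial products."""
    acc = [0]
    for i, d in enumerate(b):
        if d:
            acc = msd_add(acc, [0] * i + [d * x for x in a])
    return acc

product = msd_to_int(msd_mul(to_msd(29), to_msd(-13)))   # 29 * -13
```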

In this paper we present an architecture for designing web pages that uses multiple mobile and stationary devices to present web content. The architecture extends standard web technology with a number of functions for expressing how web content might migrate and use multiple displays. The architecture was developed to support desktop applications, but in this paper we describe how it can be extended to mobile devices by using AJAX technology. The paper also presents an implementation and a number of applications for mobile devices developed with this framework.

Full Text Available Decision-makers focus on representing biodiversity pattern, maintaining connectivity, and strengthening resilience to global warming when designing marine protected area (MPA) systems, especially in coral reef ecosystems. The achievement of these broad conservation objectives will likely require large areas, and stretch limited funds for MPA implementation. We undertook a spatial prioritisation of Brazilian coral reefs that considered two types of conservation zones (i.e., no-take and multiple-use areas) and integrated multiple conservation objectives into MPA planning, while assessing the potential impact of different sets of objectives on implementation costs. We devised objectives for biodiversity, connectivity, and resilience to global warming, determined the extent to which existing MPAs achieved them, and designed complementary zoning to achieve all objectives combined in expanded MPA systems. In doing so, we explored interactions between different sets of objectives, determined whether refinements to the existing spatial arrangement of MPAs were necessary, and tested the utility of existing MPAs by comparing their cost effectiveness with an MPA system designed from scratch. We found that MPAs in Brazil protect some aspects of coral reef biodiversity pattern (e.g., threatened fauna and ecosystem types) more effectively than connectivity or resilience to global warming. Expanding the existing MPA system was as cost-effective as designing one from scratch only when multiple objectives were considered and management costs were accounted for. Our approach provides a comprehensive assessment of the benefits of integrating multiple objectives in the initial stages of conservation planning, and yields insights for planners of MPAs tackling multiple objectives in other regions.

Full Text Available Most existing works have evaluated the performance of 802.11 multihop networks by considering the MAC layer or the network layer separately. Given the nature of multi-hop ad hoc networks, many factors in different layers are crucial for studying the performance of MANETs. In this paper we present a new analytic model for evaluating average end-to-end throughput in IEEE 802.11e multihop wireless networks. In particular, we investigate the intricate interaction among the PHY, MAC and network layers. For instance, we incorporate carrier sense threshold, transmission power, contention window size, retransmission retry limit, multiple rates, routing protocols and network topology together. We build a general cross-layered framework to represent multi-hop ad hoc networks with asymmetric topology and asymmetric traffic. We develop an analytical model to predict the throughput of each connection as well as the stability of forwarding queues at intermediate nodes in saturated networks. To the best of our knowledge, our work is the first in which general topology and asymmetric parameter setups are considered in the PHY/MAC/network layers. The performance of such a system is also evaluated through simulation. We show that performance measures of the MAC layer are affected by the traffic intensity of flows to be forwarded. More precisely, attempt rate and collision probability depend on traffic flows, topology and routing.
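
Although the model above covers asymmetric topologies and traffic, the basic coupling between attempt rate and collision probability can be illustrated with the classical symmetric, saturated fixed point in the Bianchi style; the window and backoff-stage parameters below are illustrative assumptions, not the paper's values.

```python
def bianchi_fixed_point(n, w=32, m=5, iters=2000, damp=0.5):
    """Solve the coupled equations
        tau = 2(1-2p) / ((1-2p)(W+1) + p W (1-(2p)^m))
        p   = 1 - (1 - tau)^(n-1)
    for n saturated stations, minimum contention window W and m backoff
    stages, by damped fixed-point iteration."""
    p = 0.3
    for _ in range(iters):
        tau = (2 * (1 - 2 * p)) / ((1 - 2 * p) * (w + 1)
                                   + p * w * (1 - (2 * p) ** m))
        p = (1 - damp) * p + damp * (1 - (1 - tau) ** (n - 1))
    return tau, p

tau, p = bianchi_fixed_point(n=10)
# How far the pair (tau, p) is from exactly satisfying the second equation.
residual = abs(p - (1 - (1 - tau) ** 9))
```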

This paper presents an Energy Efficient Medium Access Control (MAC) protocol for clustered wireless sensor networks that aims to improve energy efficiency and delay performance. The proposed protocol employs an adaptive cross-layer intra-cluster scheduling and an inter-cluster relay selection diversity. The scheduling is based on the available data packets and the remaining energy level of the source node (SN). This helps to minimize idle listening on nodes without data to transmit as well as reducing control packet overhead. The relay selection diversity is carried out between clusters, by the cluster head (CH), and the base station (BS). The diversity helps to improve network reliability and prolong the network lifetime. Relay selection is determined based on the communication distance, the remaining energy and the channel quality indicator (CQI) for the relay cluster head (RCH). An analytical framework for energy consumption and transmission delay for the proposed MAC protocol is presented in this work. The performance of the proposed MAC protocol is evaluated based on transmission delay, energy consumption, and network lifetime. The results obtained indicate that the proposed MAC protocol provides improved performance compared with traditional cluster-based MAC protocols.
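
The RCH selection rule described above can be sketched as a weighted score over residual energy, CQI and distance. The weights, min-max normalization and candidate values below are illustrative assumptions, not the protocol's actual metric.

```python
def select_relay(candidates, w_energy=0.4, w_cqi=0.4, w_dist=0.2):
    """Each candidate RCH is (name, residual_energy_J, cqi, distance_m).
    Higher residual energy and CQI are favoured; longer distance is
    penalized. Min-max normalization keeps the three terms comparable."""
    def norm(vals):
        lo, hi = min(vals), max(vals)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in vals]

    e = norm([c[1] for c in candidates])
    q = norm([c[2] for c in candidates])
    d = norm([c[3] for c in candidates])
    scores = [w_energy * e[i] + w_cqi * q[i] - w_dist * d[i]
              for i in range(len(candidates))]
    best = max(range(len(candidates)), key=scores.__getitem__)
    return candidates[best][0]

# Hypothetical candidate relay cluster heads: (name, energy, CQI, distance).
rchs = [("CH-A", 0.8, 12, 150), ("CH-B", 1.9, 20, 90), ("CH-C", 1.2, 9, 60)]
chosen = select_relay(rchs)
```

Here CH-B wins because it leads on both energy and CQI while paying only a moderate distance penalty; changing the weights shifts the trade-off.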

Great strides have been made exploring and exploiting new and different sources of disease surveillance data and developing robust statistical methods for analyzing the collected data. However, there has been less research in the area of dissemination. Proper dissemination of surveillance data can facilitate the end user's taking of appropriate actions, thus maximizing the utility of effort taken from upstream of the surveillance-to-action loop. The aims of the study were to develop a generic framework for a digital dashboard incorporating features of efficient dashboard design and to demonstrate this framework by specific application to influenza surveillance in Hong Kong. Based on the merits of the national websites and principles of efficient dashboard design, we designed an automated influenza surveillance digital dashboard as a demonstration of efficient dissemination of surveillance data. We developed the system to synthesize and display multiple sources of influenza surveillance data streams in the dashboard. Different algorithms can be implemented in the dashboard for incorporating all surveillance data streams to describe the overall influenza activity. We designed and implemented an influenza surveillance dashboard that utilized self-explanatory figures to display multiple surveillance data streams in panels. Indicators for individual data streams as well as for overall influenza activity were summarized in the main page, which can be read at a glance. Data retrieval function was also incorporated to allow data sharing in standard format. The influenza surveillance dashboard serves as a template to illustrate the efficient synthesization and dissemination of multiple-source surveillance data, which may also be applied to other diseases. Surveillance data from multiple sources can be disseminated efficiently using a dashboard design that facilitates the translation of surveillance information to public health actions.

In this letter, a scheduling scheme based on Dynamic Frequency Clocking (DFC) and multiple voltages is proposed for low-power designs under timing and resource constraints. Unlike conventional methods in high-level synthesis, where only the voltages of nodes are considered, the scheme, based on a gain function, considers both voltage and frequency simultaneously to reduce energy consumption. Experiments with a number of DSP benchmarks show that the proposed scheme achieves effective energy reductions.
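
A minimal sketch of a gain-function scheduler in this spirit is shown below, assuming energy scales with V^2 per cycle and delay with cycles/f; the operation set, voltage/frequency modes and deadline are invented for illustration and are not the letter's benchmarks.

```python
def schedule(ops, deadline):
    """Each op maps to a list of (voltage_V, freq_MHz) modes, fastest and
    most power-hungry first. Energy ~ V^2 per cycle; delay = cycles / f.
    Greedily apply the downgrade with the best energy-saved-per-extra-delay
    gain while the operation chain still meets the deadline."""
    choice = {name: 0 for name in ops}          # start at the fastest mode

    def energy(name, i):
        v, _ = ops[name]["modes"][i]
        return ops[name]["cycles"] * v * v

    def delay(name, i):
        _, f = ops[name]["modes"][i]
        return ops[name]["cycles"] / f

    def total_delay():
        return sum(delay(n, i) for n, i in choice.items())

    while True:
        best, best_gain = None, 0.0
        for n, i in choice.items():
            if i + 1 < len(ops[n]["modes"]):
                de = energy(n, i) - energy(n, i + 1)
                dt = delay(n, i + 1) - delay(n, i)
                if total_delay() + dt <= deadline and de / dt > best_gain:
                    best, best_gain = n, de / dt
        if best is None:
            return choice
        choice[best] += 1

# Hypothetical operation chain; time unit is arbitrary (e.g. microseconds).
ops = {
    "mac":  {"cycles": 400, "modes": [(3.3, 200), (2.4, 100)]},
    "add":  {"cycles": 100, "modes": [(3.3, 200), (1.8, 66)]},
    "mult": {"cycles": 600, "modes": [(3.3, 200), (2.4, 100), (1.8, 66)]},
}
plan = schedule(ops, deadline=12.0)
```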

Collaborative problem solving in e-learning can take the form of discussion among learners, creating a highly social learning environment characterized by participation and interactivity. This paper designs a collaborative learning environment in which an agent acts as a co-learner and can play different roles during interaction. Since different roles are assigned to the agent, the learner will assume that multiple co-learners exist to help and guide him all throughout the ...

An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.

More and more image informatics researchers and engineers are considering re-constructing their imaging and informatics infrastructure, or building a new framework, to enable multiple disciplines of medical researchers, clinical physicians and biomedical engineers to work together in a secured, efficient, and transparent cooperative environment. In this presentation, we show an outline and our preliminary design work in building an e-Science platform for biomedical imaging and informatics research and application in Shanghai. We present our considerations and strategy in designing this platform, together with preliminary results. We also discuss some challenges and solutions in building this platform.

The PHENIX experiment at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory is being constructed to investigate a phase of matter termed the quark-gluon plasma. The plasma will be produced through the collision of two heavy ions. The multiplicity and vertex detector (MVD) located in the center of PHENIX will characterize the events, determine the collision point, and act as a central trigger. This report presents the final mechanical designs of the cooling systems for the Multiplicity and Vertex Detector (MVD). In particular, the design procedure and layouts are discussed for two different air cooling systems for the multichip modules and MVD enclosure, and a liquid cooling system for the low dropout voltage regulators. First of all, experimental prototype cooling system test results used to drive the final mechanical designs are summarized and discussed. Next, the cooling system requirements and design calculation for the various subsystem components are presented along with detailed lists of supply vendors, components, and costs. Finally, safety measures incorporated in the final mechanical design and operation procedures for each of the subsystems are detailed

The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
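
The sorted k-mer list data structure mentioned above can be sketched in a few lines. The sequence and k are toy values; a real implementation along these lines would pack k-mers into machine words and distribute the sorted lists across BG/P compute nodes.

```python
import bisect

def sorted_kmer_list(seq, k):
    """All (k-mer, position) pairs of seq, sorted lexicographically, so
    shared seeds between genomes can be found by binary search or merging."""
    return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

def find_kmer(index, kmer):
    """Positions of kmer, located by binary search on the sorted list."""
    lo = bisect.bisect_left(index, (kmer, -1))
    hits = []
    while lo < len(index) and index[lo][0] == kmer:
        hits.append(index[lo][1])
        lo += 1
    return hits

idx = sorted_kmer_list("GATTACAGATTA", 4)
hits = find_kmer(idx, "GATT")   # the seed "GATT" occurs at positions 0 and 7
```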

A multiple-objective optimal design is useful for dose-response studies because it can incorporate several objectives at the design stage. Objectives can be of varying interests and a properly constructed multiple-objective optimal design can provide user-specified efficiencies, delivering higher efficiencies for the more important objectives. In this work, we introduce the VNM package written in R for finding 3-objective locally optimal designs for the 4-parameter logistic (4PL) model widely...

Full Text Available Many multiple testing procedures (MTPs) have been developed in recent years. Among these new procedures, the graphical approach is flexible and easy to communicate to non-statisticians. A hypothetical Phase III clinical trial design is introduced in this manuscript to demonstrate how the graphical approach can be applied in clinical product development. In this design, an active comparator is used. It is thought that the test drug under development could potentially be superior to this comparator. For the comparison of efficacy, the primary endpoint is well established and widely accepted by regulatory agencies. However, an important secondary endpoint based on Phase II findings looks very promising. The target dose may have a good opportunity to deliver superiority to the comparator. Furthermore, a lower dose is included in case the target dose demonstrates potential safety concerns. This Phase III study is designed as a non-inferiority trial with two doses and two endpoints. This manuscript illustrates how the graphical approach is applied to this design to handle multiple testing issues.

Full Text Available This paper presents a methodology that has been applied to the design process of anthropomorphic hands with multiple fingers. Biomechanical characteristics of the human hand have been analysed so that ergonomic and anthropometric aspects can be used as fundamental references for obtaining grasping mechanisms. A kinematic analysis has been proposed to define the requirements for designing grasping functions. The selection of materials and actuators is discussed too. This topic is based on previous experiences with prototypes that have been developed at the Laboratory of Robotics and Mechatronics (LARM) of the University of Cassino. An example of the application of the proposed method is presented for the design of a first prototype of the LARM Hand.

In the application of distributed laser gas analysis and measurement of carbon monoxide, the installation and debugging of the optical circuit structure pose difficult key technical problems. Based on the three-component beam expander theory, multi-magnification expander structures with expansion ratios of 4, 5, 6 and 7 are adopted in the absorption chamber to enhance the adaptability of the gas analysis and measurement device to its installation environment. According to the basic theory of aberration, the optimal design of the multi-magnification beam expander structure is carried out. Using an image quality evaluation method, the differences in image quality under different magnifications are analyzed. The results show that the optical quality of the optical system with the expanded beam structure is best when the expansion ratio is 5-7.

Seismic design of nuclear power plants as currently practiced requires that time history analyses be performed to generate floor response spectra for seismic qualification of piping, equipment, and components. Since design response spectra are normally prescribed in the form of smooth spectra, the generation of synthetic time histories whose response spectra closely match the "target" design spectra for multiple damping values is often required for seismic time history analysis. Various methods for generating synthetic time histories compatible with target response spectra have been proposed in the literature. Since the mathematical problem of determining a time history from a given set of response spectral values is not unique, an exact solution is not possible, and all the proposed methods resort to some form of approximate solution. In this paper, a new iteration scheme is described that effectively removes the difficulties encountered by the existing methods. This new iteration scheme can not only improve the accuracy of spectrum matching for a single-damping target spectrum, but also automate the spectrum matching for multiple-damping target spectra. The applicability and limitations, as well as the method adopted to improve the numerical stability of this new iteration scheme, are presented. The effectiveness of this new iteration scheme is illustrated by two example applications.
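
To illustrate the general flavor of spectrum-compatible iteration (this is not the paper's new scheme), the sketch below models the motion as a sum of sinusoids, computes pseudo-acceleration spectral ordinates with a Newmark average-acceleration SDOF solver, and rescales the component amplitudes toward an assumed single-damping target spectrum with a damped update. Target values, periods and durations are invented for illustration.

```python
import math

def sdof_peak_disp(accel, dt, period, zeta=0.05):
    """Peak displacement of a damped unit-mass SDOF oscillator under base
    acceleration, by Newmark average-acceleration integration."""
    w = 2 * math.pi / period
    k, c = w * w, 2 * zeta * w
    a0, a1, a2 = 4 / dt**2, 2 / dt, 4 / dt
    khat = k + a0 + a1 * c
    u = v = a = 0.0
    peak = 0.0
    for ag in accel[1:]:
        p = -ag + (a0 * u + a2 * v + a) + c * (a1 * u + v)
        un = p / khat
        vn = a1 * (un - u) - v
        an = a0 * (un - u) - a2 * v - a
        u, v, a = un, vn, an
        peak = max(peak, abs(u))
    return peak

# Assumed target pseudo-acceleration spectrum (m/s^2) at three periods (s).
periods = [0.2, 0.5, 1.0]
target = [3.0, 2.5, 2.0]
dt, nsteps = 0.01, 2000
amps = [1.0, 1.0, 1.0]          # one sinusoidal component per target period

def spectrum(amps):
    t = [i * dt for i in range(nsteps)]
    accel = [sum(A * math.sin(2 * math.pi * ti / Tj)
                 for A, Tj in zip(amps, periods)) for ti in t]
    return [(2 * math.pi / T) ** 2 * sdof_peak_disp(accel, dt, T)
            for T in periods]

errs = []
for _ in range(6):
    sa = spectrum(amps)
    errs.append(max(abs(s / g - 1) for s, g in zip(sa, target)))
    # Damped amplitude update: scale each component toward its target ordinate.
    amps = [A * (g / s) ** 0.7 for A, s, g in zip(amps, sa, target)]
```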

Accelerated life testing (ALT) is used to obtain failure time data quickly under high stress levels in order to predict product life performance under design stress conditions. Most of the previous work on designing ALT plans has focused on the application of a single stress. However, as components or products become more reliable due to technological advances, it becomes more difficult to obtain a significant amount of failure data within a reasonable amount of time using a single stress only. Multiple-stress-type ALTs have been employed as a means of overcoming such difficulties. In this paper, we design optimum multiple-stress-type ALT plans based on the proportional hazards model. The optimum combinations of stresses and their levels are determined such that the variance of the reliability estimate of the product over a specified period of time is minimized. The use of the model is illustrated with a numerical example, and sensitivity analysis shows that the resulting optimum ALT plan is robust to deviations in the model parameters.

Full Text Available We present a framework for cross-layer optimized real time multiuser encoding of video using a single layer H.264/AVC and transmission over MIMO wireless channels. In the proposed cross-layer adaptation, the channel of every user is characterized by the probability density function of its channel mutual information and the performance of the H.264/AVC encoder is modeled by a rate distortion model that takes into account the channel errors. These models are used during the resource allocation of the available slots in a TDMA MIMO communication system with capacity achieving channel codes. This framework allows for adaptation to the statistics of the wireless channel and to the available resources in the system and utilization of the multiuser diversity of the transmitted video sequences. We show the effectiveness of the proposed framework for video transmission over Rayleigh MIMO block fading channels, when channel distribution information is available at the transmitter.
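
The resource-allocation step in a framework like the one above can be illustrated with a greedy marginal allocation under an assumed parametric rate-distortion model; the model form, parameters and slot counts below are invented for illustration and do not come from the paper.

```python
def allocate_slots(users, total_slots):
    """users: name -> (d0, theta, r0) in the assumed rate-distortion model
        D(R) = d0 + theta / (R + r0),
    with rate R proportional to allocated TDMA slots. Greedy marginal
    allocation: each slot goes to the user whose distortion drops the most."""
    slots = {u: 0 for u in users}

    def dist(u, s):
        d0, theta, r0 = users[u]
        return d0 + theta / (s + r0)

    for _ in range(total_slots):
        best = max(users,
                   key=lambda u: dist(u, slots[u]) - dist(u, slots[u] + 1))
        slots[best] += 1
    return slots

# Two hypothetical video users: a high-motion sequence has a steeper
# rate-distortion curve (larger theta) and so earns more slots.
users = {
    "news": (30.0, 3000.0, 5.0),
    "talk": (30.0, 850.0, 5.0),
}
slots = allocate_slots(users, 20)
```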

In this paper, the authors report on work performed to date on JIGSAW - a self-contained data acquisition, display and analysis system designed to collect data from multiple gamma-ray detectors. The data acquisition system utilizes commercially available VMEbus and NIM hardware modules and the VMEexec real-time operating system. A Unix-based software package, written in ANSI standard C and with the X11 graphics routines, allows the user to view the acquired spectra. Analysis of the histograms can be performed in the background during the run with the ROBFIT suite of curve-fitting routines.

In systems engineering, the design and operation of systems are two main problems that always attract researchers' attention. The accomplishment of activities in these problems often requires proper decisions to be made so that the desired goal can be achieved; thus, decision making needs to be carefully carried out in the design and operation of systems. Design is a decision making process that permeates throughout the design process and is at the core of all design activities. In modern aircraft design, more and more attention is paid to the conceptual and preliminary design phases so as to increase the odds of choosing a design that will ultimately be successful at the completion of the design process; therefore, decisions made during these early design stages play a critical role in determining the success of a design. Since aerospace systems are complex systems with interacting disciplines and technologies, the Decision Makers (DMs) dealing with such design problems are involved in balancing multiple, potentially conflicting attributes/criteria, transforming a large amount of customer-supplied guidelines into a solidly defined set of requirement definitions. Thus, one could state with confidence that modern aerospace system design is a Multiple Criteria Decision Making (MCDM) process. A variety of existing decision making methods are available to deal with this type of decision problem. The selection of the most appropriate decision making method is of particular importance, since inappropriate decision methods are likely causes of misleading engineering design decisions. Without sufficient knowledge about each of the methods, it is usually difficult for the DMs to find an appropriate analytical model capable of solving their problems. In addition, with the complexity of the decision problems and the demand for more capable methods increasing, new decision making methods are emerging with time. These various methods exacerbate the difficulty of the selection

Full Text Available This paper presents the design, dynamic modelling, and workspace analysis of cooperative cable parallel manipulators for multiple mobile cranes (CPMMCs). The CPMMCs can handle complex tasks that are more difficult, or even impossible, for a single mobile crane. The kinematics and dynamics of the CPMMCs are studied on the basis of geometric methodology and d'Alembert's principle, and a mathematical model of the CPMMCs is developed and presented with a dynamic simulation. A constant-orientation workspace analysis of the CPMMCs is also carried out. As an example, a cooperative cable parallel manipulator for triple mobile cranes with 6 degrees of freedom is investigated on the basis of the above design objectives.

The peptidic anti-HIV drug T20 (Fuzeon) and its analog C34 share a common heptad repeat (HR) sequence, but they have different functional domains, i.e., pocket- and lipid-binding domains (PBD and LBD, respectively). We hypothesized that novel anti-HIV peptides might be designed using artificial sequences containing multiple copies of HR motifs plus zero, one, or two functional domains. Surprisingly, we found that peptides containing only the non-natural HR sequences could significantly inhibit HIV-1 infection, while addition of the PBD and/or LBD to the peptides significantly improved anti-HIV-1 activity. These results suggest that these artificial HR sequences, which may serve as structural domains, could be used as templates for the design of novel antiviral peptides against HIV and other viruses with class I fusion proteins.

Staphylococcus aureus is a major cause of healthcare-associated infections and is responsible for a substantial burden of disease in hospitalized patients. Despite increasingly rigorous infection control guidelines, the prevalence and corresponding negative impact of S. aureus infections remain considerable. Difficulties in controlling S. aureus infections, as well as the associated treatment costs, are exacerbated by increasing rates of resistance to available antibiotics. Despite ongoing efforts over the past 20 years, no licensed S. aureus vaccine is currently available. However, lessons learned from past clinical failures of vaccine candidates and a better understanding of the immunopathology of S. aureus colonization and infection have aided the design of new vaccine candidates based on multiple important bacterial pathogenesis mechanisms. This review outlines important considerations in designing a vaccine for the prevention of S. aureus disease in healthcare settings. PMID:22922765

Full Text Available The paper deals with the seismic retrofit of a multiple building structure belonging to the Hospital Centre of Avellino (Italy). First, the paper presents the preliminary investigations, the in situ measurements and laboratory tests, and the seismic assessment of the existing fixed-base structures. Having studied different strategies, base isolation proved to be the most appropriate, also because the geometry of the building made it easy to create an isolation interface at ground level. The paper presents the design project, the construction process, and the details of the isolation intervention. Some specific issues of base isolation for the seismic retrofitting of multiple building structures are highlighted. Finally, the seismic assessment of the base-isolated building was carried out. The seismic response was evaluated through nonlinear time-history analysis, using the well-known Bouc-Wen model as the constitutive law of the isolation bearings. For reliable dynamic analyses, a suite of natural accelerograms compatible with the acceleration spectra of the Italian Code was first selected and then applied along both horizontal directions. The results were finally used to address some critical issues in the seismic response of the base-isolated multiple building structure: accidental torsional effects and potential pounding during strong earthquakes.
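The Bouc-Wen model mentioned above can be sketched in a few lines. This is a minimal explicit-Euler integration of the standard Bouc-Wen hysteresis law with made-up parameters; it is not the bearing model calibrated for the Avellino retrofit, only an illustration of the constitutive law's form:

```python
import numpy as np

# Minimal Bouc-Wen hysteresis sketch for an isolation bearing
# (illustrative parameters, not those of the retrofit project).
# Restoring force: F = alpha*k*x + (1-alpha)*k*z
# Evolution:       dz/dt = A*dx - beta*|dx|*|z|**(n-1)*z - gamma*dx*|z|**n

def bouc_wen_force(x_hist, dt, k=1.0, alpha=0.1, A=1.0,
                   beta=0.5, gamma=0.5, n=1):
    z, forces = 0.0, []
    x_prev = x_hist[0]
    for x in x_hist:
        dx = (x - x_prev) / dt
        z += dt * (A * dx - beta * abs(dx) * abs(z)**(n - 1) * z
                   - gamma * dx * abs(z)**n)
        forces.append(alpha * k * x + (1 - alpha) * k * z)
        x_prev = x
    return np.array(forces)

# A sinusoidal displacement cycle traces a hysteretic force-displacement loop
t = np.linspace(0, 4 * np.pi, 2000)
x = 2.0 * np.sin(t)
F = bouc_wen_force(x, t[1] - t[0])
print(F.min(), F.max())
```

For these parameters the hysteretic variable z saturates near (A/(beta+gamma))**(1/n) = 1, so the force stays bounded even for large displacement, which is what gives the isolator its energy-dissipating loop.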

Full Text Available The interaction between humans and robots is undergoing an evolution. Progress in this evolution means that humans are close to robustly deploying multiple robots. Urban search and rescue (USAR) can benefit greatly from such a capability. The review shows that with state-of-the-art artificial intelligence, robots can work autonomously but still require human supervision. It also shows that multiple robot deployment (MRD) is more economical, shortens mission durations, and adds reliability, and that it addresses missions that are impossible for one robot because of payload constraints. By combining robot autonomy and human supervision, the benefits of MRD can be applied to USAR while minimizing human exposure to danger. This is achieved with a single-human multiple-robot system (SHMRS). However, designers of the SHMRS must consider key attributes such as the size, composition, and organizational structure of the robot collective. Variations in these attributes also induce fluctuations in issues within SHMRS deployment, such as robot communication and computational load as well as human cognitive workload and situation awareness (SA). Research is essential to determine how these attributes can be manipulated to mitigate these issues while meeting the requirements of the USAR mission.

This paper describes an optical direct-detection multiple-access communications system for free-space satellite networks utilizing code-division multiple access (CDMA) and forward error correction (FEC) coding. System performance is characterized by how many simultaneous users operating at data rate R can be accommodated in a signaling bandwidth W. The performance of two CDMA schemes, optical orthogonal codes (OOC) with FEC and orthogonal convolutional codes (OCC), is calculated and compared to information-theoretic capacity bounds. The calculations include the effects of background and detector noise as well as nonzero transmitter extinction ratio and power imbalance among users. A system design for 10 kbps multiple-access communications between low-earth-orbit satellites is given. With near-term receiver technology and representative system losses, a 15 W peak-power transmitter provides 10⁻⁶ BER performance with seven interfering users and full-moon background in the receiver FOV. The receiver employs an array of discrete wide-area avalanche photodiodes (APDs) for wide field-of-view coverage. Issues of user acquisition and synchronization, implementation technology, and system scalability are also discussed.
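The defining property of the optical orthogonal codes mentioned above is a bound on cyclic correlation. A small sketch can verify it for one known example: the mark set {0, 1, 4} over length 13 is a standard (13, 3, 1)-OOC codeword (this particular codeword is an illustrative textbook example, not necessarily the one used in the paper):

```python
# Checking the autocorrelation constraint of a small optical orthogonal
# code (OOC). A (13,3,1)-OOC codeword has 3 "mark" chips out of 13 and
# cyclic autocorrelation sidelobes of at most 1, which is what lets an
# incoherent direct-detection CDMA receiver reject shifted copies of codes.
marks = {0, 1, 4}          # chip positions of the optical pulses
L = 13                     # code length in chips

def cyclic_autocorr(marks, shift, L):
    """Number of mark chips that coincide after a cyclic shift."""
    shifted = {(m + shift) % L for m in marks}
    return len(marks & shifted)

peak = cyclic_autocorr(marks, 0, L)
sidelobes = [cyclic_autocorr(marks, s, L) for s in range(1, L)]
print(peak, max(sidelobes))   # -> 3 1
```

The peak-to-sidelobe gap (3 versus 1) is what the threshold detector exploits; FEC then cleans up the residual multi-access interference.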

Lansoprazole (LSP), a proton-pump inhibitor, is a class II drug. It is especially unstable to heat, light, and acidic media, which makes fabricating a formulation that stabilizes the drug difficult. The addition of an alkaline stabilizer is the most powerful method of protecting the drug in solid formulations under detrimental environments. The purpose of this study was to characterize the designed multiple-coating pellets of LSP containing an alkaline stabilizer (sodium carbonate) and to assess the effect of the stabilizer on the physicochemical properties of the drug. The coated pellets were prepared by layer-by-layer film coating with a fluid-bed coater. In vitro release and acid-resistance studies were carried out in simulated gastric fluid and simulated intestinal fluid, respectively. Furthermore, a moisture-uptake test was performed to evaluate the influence of sodium carbonate on drug stability. The results indicate that the drug exists in the amorphous state or as small (nanometer-size) particles without crystallization even after storage at 40°C/75% RH for 5 months. The addition of sodium carbonate to the pellet protects the drug from degradation in simulated gastric fluid in a dose-dependent manner. Moisture absorbed into the pellets has a detrimental effect on drug stability, and the extent of drug degradation is directly correlated with the amount of moisture absorbed. In conclusion, these results suggest that the presence of sodium carbonate influences the physicochemical properties of LSP, and that the designed multiple-coating pellets enhance the drug's stability.

Background: Multiple sclerosis (MS) is one of the world's most common neurologic disorders. Fatigue is one of the most common symptoms that persons with MS experience, having a significant impact on their quality of life and limiting their activity levels. Self-management strategies are used to support them in the care of their health. Mobile health (mHealth) solutions are a way to offer persons with chronic conditions tools to successfully manage their symptoms and problems. Gamification is a current trend among mHealth apps used to create engaging user experiences and is suggested to be effective for behavioral change. To be effective, mHealth solutions need to be designed to specifically meet the intended audience's needs. User-centered design (UCD) is a design philosophy that proposes placing end users' needs and characteristics at the center of design and development, involving users early in the different phases of the software life cycle. There is a current gap in mHealth apps for persons with MS, which presents an interesting area to explore. Objective: The purpose of this study was to describe the design and evaluation process of a gamified mHealth solution for behavioral change in persons with MS using UCD. Methods: Building on previous work by our team in which we identified needs, barriers, and facilitators for mHealth apps for persons with MS, we followed UCD to design and evaluate a mobile app prototype aimed at helping persons with MS self-manage their fatigue. Design decisions were evidence-driven and guided by behavioral change models (BCM). Usability was assessed through inspection methods using Nielsen's heuristic evaluation. Results: The mHealth solution More Stamina was designed. It is a task organization tool intended to help persons with MS manage their energy to minimize the impact of fatigue in their day-to-day life. The tool acts as a to-do list where users can input tasks in a simple manner and assign Stamina Credits, a representation of perceived effort, to each task to help with energy management.

This research project consists of two phases: Phase 1, which culminates with this report, investigated the use of multiple-criteria decision making in the design process of lock approach walls to consider barge impact and earthquake loads...

A paper published by Maniya and Bhatt (2011) ("An alternative multiple attribute decision making methodology for solving optimal facility layout design selection problems," Computers & Industrial Engineering, 61, 542-549) proposed an alternative multiple attribute decision making method, named the "Preference Selection Index (PSI) method," for the selection of an optimal facility layout design. The authors claimed that the method was logical and more appropriate, and that it gives directly the o...
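The PSI method named in this record has a commonly published formulation, which can be sketched briefly. The layout-alternative data below are made up for illustration, and this follows the usual textbook steps rather than reproducing the cited paper's own example:

```python
import numpy as np

# Preference Selection Index (PSI) sketch, following the commonly
# published steps; the alternative/criterion data are hypothetical.
X = np.array([[80., 60., 7.],     # alternative 1
              [75., 70., 6.],     # alternative 2
              [90., 55., 8.]])    # alternative 3
benefit = np.array([True, True, False])   # higher-is-better flags

# 1) normalize (benefit criterion: x/max, cost criterion: min/x)
N = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
# 2-3) preference variation of each criterion around its column mean
PV = ((N - N.mean(axis=0)) ** 2).sum(axis=0)
# 4-5) criterion weights derived from the deviation 1 - PV
phi = 1.0 - PV
w = phi / phi.sum()
# 6) preference selection index per alternative (higher = better)
psi = N @ w
print(psi, psi.argmax())
```

Note that PSI derives the criterion weights from the data themselves (criteria on which alternatives disagree little get higher weight), which is the "no subjective weighting" feature the original authors advertised.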

Non-orthogonal multiple access (NOMA) is promoted as a key component of 5G cellular networks. As the name implies, NOMA operation introduces intracell interference (i.e., interference arising within the cell) into the cellular operation. The intracell interference is managed by careful NOMA design (e.g., user clustering and resource allocation) along with successive interference cancellation. However, most proposed NOMA designs are agnostic to intercell interference (i.e., interference from outside the cell), which is a major performance-limiting factor in 5G networks. This article sheds light on the drastic negative impact of intercell interference on NOMA performance and advocates an interference-aware NOMA design that jointly accounts for both intracell and intercell interference. To this end, a case study for fair NOMA operation is presented, and intercell interference mitigation techniques for NOMA networks are discussed. This article also investigates the potential of integrating NOMA with two important 5G transmission schemes, namely full duplex and device-to-device communication. This is important since the ambitious performance targets defined by the 3rd Generation Partnership Project (3GPP) for 5G are foreseen to be realized via the seamless integration of several new technologies and transmission techniques.
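The intracell-interference mechanics described above can be made concrete with the standard two-user downlink power-domain NOMA rate expressions. The power split, channel gains, and noise level below are illustrative numbers, and intercell interference (the article's main concern) would appear as an extra term in each denominator:

```python
import math

# Two-user power-domain NOMA sketch with successive interference
# cancellation (SIC); all channel gains and powers are illustrative.
P, N0 = 1.0, 1e-2          # total transmit power, noise power
g_near, g_far = 1.0, 0.1   # channel power gains |h|^2
a_far = 0.8                # power fraction given to the weak (far) user
a_near = 1.0 - a_far

# Far user decodes its own signal, treating the near user's as noise
# (this is the intracell interference NOMA deliberately introduces).
R_far = math.log2(1 + a_far * P * g_far / (a_near * P * g_far + N0))
# Near user first removes the far user's signal via SIC, then decodes.
R_near = math.log2(1 + a_near * P * g_near / N0)
# OMA baseline: orthogonal time sharing, half the time each at full power.
R_far_oma = 0.5 * math.log2(1 + P * g_far / N0)
print(R_far, R_near, R_far_oma)
```

With these numbers the far user does better under NOMA than under the orthogonal split while the near user keeps a high SIC-aided rate, which is the basic fairness argument for NOMA; adding an intercell interference power to the denominators erodes exactly this gain.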

Full Text Available The optimal design of a hydraulic excavator working device is often characterized by computationally expensive analysis methods such as finite element analysis. Significant difficulties also exist when using a sensitivity-based decomposition approach for such practical engineering problems, because explicit mathematical formulas relating the objective function to the design variables are impossible to formulate. An effective alternative is the surrogate model. The purpose of this article is to provide a comparative study of multiple surrogate models, including the response surface methodology, Kriging, radial basis functions, and support vector machines, and to select the one that best fits the optimization of the working device. In this article, a new modeling strategy based on the combination of the dimension variables between hinge joints and the forces loaded on the hinge joints of the working device is proposed. In addition, the extent to which the accuracy of the surrogate models depends on different design variables is presented. A bionic intelligent optimization algorithm is then used to obtain the optimal results, which demonstrate that the maximum stresses calculated by the prediction method and by finite element analysis are quite similar, but the efficiency of the former is much higher than that of the latter.
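Two of the surrogate families the record compares can be contrasted on a toy problem. This sketch fits a quadratic response surface (least squares) and a Gaussian radial-basis-function interpolant to samples of a stand-in function; the test function, sample plan, and kernel width are all assumptions for illustration, standing in for the expensive FEA evaluations of the working device:

```python
import numpy as np

# Toy surrogate comparison: quadratic response surface vs. Gaussian RBF
# interpolant on a 1-D stand-in for an expensive analysis. Illustrative
# only -- the paper's surrogates model FEA stresses, not this function.
rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * x          # "expensive" analysis stand-in
x_train = np.linspace(0, 3, 12)
y_train = f(x_train)
x_test = np.linspace(0.1, 2.9, 50)

# Response surface: fit y ~ c0*x^2 + c1*x + c2 by least squares
A = np.vander(x_train, 3)
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
y_rsm = np.vander(x_test, 3) @ coef

# RBF interpolant with Gaussian kernel exp(-(r/eps)^2)
eps = 0.5
K = np.exp(-((x_train[:, None] - x_train[None, :]) / eps) ** 2)
w = np.linalg.solve(K, y_train)
y_rbf = np.exp(-((x_test[:, None] - x_train[None, :]) / eps) ** 2) @ w

rmse = lambda y: float(np.sqrt(np.mean((y - f(x_test)) ** 2)))
print(rmse(y_rsm), rmse(y_rbf))
```

On this oscillatory function the interpolating RBF is far more accurate than the low-order response surface, which mirrors the paper's point that surrogate accuracy depends strongly on how the response varies with the design variables.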

Full Text Available An autonomous approach and landing (A&L) guidance law is presented in this paper for landing an unpowered reusable launch vehicle (RLV) at the designated runway touchdown point. Considering the full nonlinear point-mass dynamics, a guidance scheme is developed in three-dimensional space. In order to guarantee a successful A&L maneuver, the multiple sliding surfaces guidance (MSSG) technique is applied to derive the closed-loop guidance law; this technique stems from higher-order sliding mode control theory and has the advantage of a finite-time reaching property. The global stability of the proposed guidance approach is proved by a Lyapunov-based method. The designed guidance law can generate new trajectories on-line without any specific off-line analysis beyond the boundary conditions of the A&L phase and the instantaneous states of the RLV. Therefore, the designed guidance law is flexible enough to target different touchdown points on the runway and is capable of dealing with large initial condition errors resulting from the previous flight phase. Finally, simulation results show the effectiveness of the proposed guidance law in different scenarios.
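The finite-time reaching property that MSSG relies on can be shown with the simplest reaching law. For a sliding variable obeying s' = -k*sign(s), s reaches zero no later than t = |s(0)|/k; the gain, initial value, and step size here are made-up illustration values, far simpler than the full guidance law:

```python
# Finite-time reaching sketch: with the discontinuous reaching law
# s' = -k*sign(s), the sliding variable hits zero by t = |s(0)|/k.
# Plain Euler integration; gains are illustrative only.
k, dt = 2.0, 1e-3
s, t = 5.0, 0.0
while abs(s) > 1e-3 and t < 10.0:
    sgn = 1.0 if s > 0 else -1.0
    s -= dt * k * sgn
    t += dt
print(t)   # close to |s0|/k = 2.5
```

An asymptotic law such as s' = -k*s would only decay exponentially and never reach zero exactly; the discontinuous law's hard deadline is what lets MSSG-style guidance meet the A&L boundary conditions in fixed time.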

Full Text Available The topic of practical implementation of multiple antenna systems for mobile communications has recently gained a lot of attention. Because of the area constraint on a mobile device, how to design such a system to achieve the greatest benefit is still a huge challenge. In this paper, a genetic algorithm (GA) is used to find the optimal antenna positions on a mobile device. Two cases, 3×3 and 4×4 MIMO systems, are undertaken. The effect of mutual coupling, modeled via Z-parameters, is the main factor in determining the MIMO capacity that serves as the objective function of the GA search. The results confirm the success of the proposed method for designing MIMO antenna positions on a mobile device. Moreover, this paper introduces a method to design the antenna positions under nondeterministic channel conditions. Channel variation has been included in the process of finding optimal MIMO antenna positions. The results suggest that averaging the positions over all GA solutions across all channel conditions provides the most acceptable benefit.
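The search loop described above can be sketched in miniature. This toy GA-style evolutionary search places three antennas on a one-wavelength edge to maximize ergodic capacity, with coupling reduced to a simple spacing-dependent correlation (a sinc model); the correlation model, SNR, population sizes, and mutation scale are all assumptions, much simpler than the paper's Z-parameter coupling:

```python
import numpy as np

# Toy evolutionary search (GA-style) placing 3 antennas on a 1-D edge of
# length 1 wavelength to maximize ergodic MIMO capacity under a simple
# spacing-dependent correlation model. All parameters are illustrative.
rng = np.random.default_rng(1)
SNR = 10.0

def capacity(pos, trials=40):
    # Correlation decays with element spacing (sin(kd)/kd toy model)
    d = np.abs(pos[:, None] - pos[None, :])
    R = np.sinc(2 * d)                     # spacings in wavelengths
    Rh = np.linalg.cholesky(R + 1e-6 * np.eye(3))
    c = 0.0
    for _ in range(trials):
        Hw = (rng.standard_normal((3, 3))
              + 1j * rng.standard_normal((3, 3))) / np.sqrt(2)
        H = Rh @ Hw                        # receive-correlated channel
        M = np.eye(3) + (SNR / 3) * H @ H.conj().T
        c += np.log2(np.linalg.det(M).real)
    return c / trials

# Real-coded search: keep the best half, mutate it with Gaussian noise
pop = rng.uniform(0, 1, (20, 3))
for gen in range(15):
    fit = np.array([capacity(np.sort(p)) for p in pop])
    elite = pop[np.argsort(fit)[-10:]]
    children = elite + rng.normal(0, 0.05, elite.shape)
    pop = np.clip(np.vstack([elite, children]), 0, 1)
best = np.sort(pop[np.argmax([capacity(np.sort(p)) for p in pop])])
print(best)
```

The search drives the elements apart because low correlation raises capacity, which is the same pressure the Z-parameter coupling model exerts in the paper's full-scale GA.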

Full Text Available The analysis and design of a multiple residential building, seismically protected by a base isolation system incorporating double friction pendulum sliders as protective devices, are presented in this paper. The building, situated in the suburban area of Florence, is composed of four independent reinforced concrete framed structures, mutually separated by three thermal expansion joints. The plan is L-shaped, with dimensions of about 75 m in the longitudinal direction and about 30 m along the longest side of the transversal direction. These characteristics identify the structure as the largest example of a base-isolated "artificial ground" ever built in Italy. The base isolation solution guarantees lower costs, much greater performance, and a finer architectural look compared to a conventional fixed-base antiseismic design. The characteristics of the building and the isolators, the mechanical properties of the latter along with the experimental characterization campaign and preliminary sizing carried out on them, and the nonlinear time-history design and performance assessment analyses developed for the base-isolated building are reported in this paper, together with details of the installation of the isolators and the plants and highlights of the construction works.

This paper examines the effects of connecting multiple shunt circuits composed of inductors and resistors to piezoelectric transducers so as to improve the robustness of a piezoelectric vibration absorber (PVA). PVAs are well known to be effective at suppressing the vibration of an adaptive structure; their weakness is low robustness to changes in the dynamic parameters of the system, including the main structure and the absorber. In applications to space structures, the temperature dependency of the capacitance of piezoelectric ceramics is the factor that causes performance reduction. To improve robustness against this temperature dependency, this paper proposes a multiple-PVA system composed of distributed piezoelectric transducers and several shunt circuits. The optimization problems that determine both the frequencies and the damping ratios of the PVAs are multi-objective problems, which are solved here using a real-coded genetic algorithm. A clamped aluminum beam with four groups of piezoelectric ceramics attached was considered in simulations and experiments. Numerical simulations revealed that the PVA systems designed using the proposed method tolerate changes in the capacitances. Furthermore, experiments using a thermostatic bath were conducted to confirm the effectiveness and robustness of the PVA systems. The maximum peaks of the transfer functions of the beam with the open circuit, the single-PVA system, the double-PVA system, and the quadruple-PVA system at 20 °C were 14.3 dB, −6.91 dB, −7.47 dB, and −8.51 dB, respectively. The experimental results also showed that the multiple-PVA system is more robust than a single PVA in a variable-temperature environment from −10 °C to 50 °C. In conclusion, the use of multiple PVAs results in an effective, robust vibration control method for adaptive structures.

Hydraulic dampers are used to decrease the vibration of a vehicle, with the vibration energy dissipated as heat. Besides wasting energy, hydraulic dampers have a damping coefficient that cannot be changed during operation. In this paper, an energy-harvesting vehicle damper is proposed to replace traditional hydraulic dampers. The goal is not only to recover kinetic energy from suspension vibration but also to change the damping coefficient during operation according to road conditions. The energy-harvesting damper consists of multiple generators that are independently controlled by switches. One of these generators connects to a tunable resistor for fine-tuning the damping coefficient, while the other generators are connected to a control and rectifying circuit, each of which both regenerates electricity and provides a constant damping coefficient. A mathematical model was built to investigate the performance of the energy-harvesting damper. By controlling the number of switched-on generators and adjusting the value of the external tunable resistor, the damping can be fine-tuned as required. In addition to the capability of damping tuning, the multiple controlled generators can output a significant amount of electricity. A prototype was built to test the energy-harvesting damper design. Experiments on an MTS testing system were conducted, with results that validated the theoretical analysis. The experiments show that changing the number of switched-on generators clearly tunes the damping coefficient of the damper while simultaneously producing considerable electricity.
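The coarse/fine tuning idea above follows from the standard electromagnetic damping relation c = kt*ke/(R_coil + R_load): switching generators in and out changes the damping in steps, while the tunable resistor moves it continuously. The constants below are made-up illustration values, not the prototype's parameters:

```python
# Electromagnetic damping sketch for the multi-generator damper: each
# switched-on generator on a fixed load contributes a fixed damping
# term, while one generator sees a tunable resistor for fine adjustment.
# All constants are illustrative, not taken from the paper's prototype.
kt = 0.5       # torque constant, N*m/A
ke = 0.5       # back-EMF constant, V*s/rad
R_coil = 2.0   # internal coil resistance, ohm
R_fixed = 3.0  # load resistance of each rectifier branch, ohm

def damping(n_on, R_tune):
    """Total damping: n_on generators on fixed loads + 1 tunable branch."""
    c_fixed = n_on * kt * ke / (R_coil + R_fixed)
    c_tuned = kt * ke / (R_coil + R_tune)
    return c_fixed + c_tuned

coarse = [round(damping(n, 10.0), 3) for n in range(4)]
print(coarse)                        # coarse steps from switching generators
print(round(damping(2, 1.0), 3))     # fine adjustment by lowering R_tune
```

Lowering the tunable resistance raises the damping of that branch continuously, so any value between two coarse steps is reachable, which is the combined coarse-plus-fine scheme the abstract describes.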

Full Text Available The objective of this work is to present a new design of a Flexible Hardware Interface (FHI) based on PID control techniques for use in a virtual laboratory. This flexible hardware interface allows the easy implementation of multiple different remote electronic practical experiments for undergraduate engineering classes. The interface can be viewed as an open hardware architecture for easily developing simple or complex remote experiments in the electronics domain. The philosophy behind the interface can also be extended to many other domains, for instance optics experiments. It is also demonstrated that software can be developed to enable remote measurements of electronic circuits or systems using only a Web interface. Using standard browsers (such as Internet Explorer, Firefox, Chrome, or Safari), different students can have remote access to different practical experiments at the same time.

A high-pass-filter (HPF) structure generates the power reference according to the fluctuating power and provides a stabilization effect. The power and energy supplied by the energy storage system (ESS) are mainly determined by the cut-off frequency and gain of the HPF. Considering the operational limits on the ESS state-of-charge (SoC), this paper proposes an adaptive cut-off frequency design method to realize communication-less and autonomous operation of a system with multiple distributed ESS units. The experimental results demonstrate that the SoCs of all ESS units are kept within safe margins, while the SoC levels and powers of the paralleled units converge to the final state, providing a natural plug-and-play function.
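The HPF power split described above can be sketched with a discrete first-order high-pass filter: the ESS takes the high-frequency component and the grid sees the smoothed remainder. The cut-off frequency, sample time, and power profile below are illustrative, and this sketch omits the paper's SoC-adaptive cut-off adjustment:

```python
import math

# Discrete first-order high-pass filter splitting a fluctuating power
# profile between the grid (low-frequency part) and the ESS
# (high-frequency part). Cut-off and signal are illustrative; the paper
# additionally adapts the cut-off frequency from the measured SoC.
dt, fc = 0.1, 0.05                    # sample time (s), cut-off (Hz)
a = 1.0 / (1.0 + 2 * math.pi * fc * dt)

def split(p_in):
    p_ess, out = 0.0, []
    prev = p_in[0]
    for p in p_in:
        p_ess = a * (p_ess + p - prev)    # HPF output -> ESS reference
        out.append((p - p_ess, p_ess))    # (grid power, ESS power)
        prev = p
    return out

# Step change in generation: the ESS absorbs the transient, then its
# power decays to zero while the grid settles to the new steady value.
profile = [0.0] * 10 + [1.0] * 200
grid, ess = zip(*split(profile))
print(round(ess[10], 3), round(ess[-1], 3))
```

A lower cut-off frequency makes the ESS hold the transient longer (more energy throughput), which is exactly why the paper ties the cut-off to the remaining SoC margin.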

Handling of multiple simultaneous faults is a complex issue in fault-tolerant control. The design task is made particularly difficult by the numerous different cases that need to be analyzed. Aiming at safe fault handling, this paper shows how structural analysis can be applied to find the analytical redundancy relations for all relevant combinations of faults, and can cope with the complexity and size of a real system. Fault isolation, which is essential for fault-tolerant control schemes that must handle particular cases of faults/failures, is also addressed. The paper introduces an extension to structural analysis to disclose which faults could be isolated from a structural point of view using active fault isolation. Results from application to a marine control system illustrate the concepts.

A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large-scale systems comprised of numerous independent systems -- each capable of independent operation in its own right -- that, when brought together, offer capabilities and performance beyond those of the individual constituent systems. The variable resource allocation problem is a type of SoS problem that includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach seeking to simultaneously design a new aircraft and allocate this new aircraft, along with existing aircraft, to meet passenger demand at minimum fleet-level operating cost for a single airline. The result of this process describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine the characteristics of a new aircraft that maximizes the profit of multiple airlines, recognizing that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (nonlinear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005. The decomposition strategy serves

Trinary signed-digit (TSD) symbolic-substitution-based (SS-based) optical adders, which were recently proposed, are used as the basic modules for designing highly parallel optical multipliers that use cascaded optical correlators. The proposed multipliers perform carry-free generation of the multiplication partial products of two words in constant time. Three different multiplication designs are presented, and new joint spatial encodings for the TSD numbers are introduced. The proposed joint spatial encodings make it possible to reduce the number of SS computation rules involved in optical multiplication. In addition, they increase the space-bandwidth product of the spatial light modulators of the optical system. This increase is achieved by reducing the number of pixels in the joint spatial encodings of the input TSD operands as well as the number of pixels used in the proposed matched spatial filters for the optical multipliers.

Technologies like electronic health records or telemedicine devices support the rapid transfer of health information and clinical data, independent of time and location, between patients and their physicians as well as among health care professionals. Today, every part of the treatment process, from diagnosis, treatment selection, and application to patient education and long-term care, may be enhanced by a quality-assured implementation of health information technology (HIT) that also takes data security standards and concerns into account. To increase the level of effectively realized benefits of eHealth services, a user-driven needs assessment should ensure that health care professionals' perspectives are included in the technology development process, as we did in developing the Multiple Sclerosis Documentation System 3D. After analyzing the use of information technology by patients suffering from multiple sclerosis, we focused on the needs of neurological health care professionals and their handling of health information technology. We therefore researched the status quo of eHealth adoption in neurological practices and clinics, as well as health care professionals' opinions about the potential benefits and requirements of eHealth services in the field of multiple sclerosis. We conducted a paper-and-pencil-based mail survey in 2013 by sending our questionnaire to 600 randomly chosen neurological practices in Germany. The questionnaire consisted of 24 items covering the characteristics of the participating neurological practices (4 items), the current use of network technology and the Internet in these practices (5 items), physicians' attitudes toward the general and MS-related usefulness of eHealth systems (8 items) and toward clinical documentation via electronic health records (4 items), and physicians' knowledge about the Multiple Sclerosis Documentation System (3 items). Of the 600 mailed surveys, 74 completed surveys were returned

The barrier potential design criteria for multiple-quantum-well (MQW)-based solar-cell structures are reported, with the aim of achieving maximum efficiency. The time-dependent short-circuit current density at the collector side of various MQW solar-cell structures under resonant conditions was numerically calculated using the time-dependent Schrödinger equation. The energy efficiencies of solar cells based on InAs/Ga(y)In(1-y)As and GaAs/Al(x)Ga(1-x)As MQW structures were compared for carriers excited in a particular solar-energy band. For InAs/Ga(y)In(1-y)As MQW structures, it is found that maximum energy efficiency can be achieved if the structure is designed with a barrier potential of about 450 meV. For GaAs/Al(x)Ga(1-x)As MQW-structure-based solar cells, the efficiency is found to decline linearly as the barrier potential increases.

This paper proposes a method to identify opportunities for increasing the efficiency of raw material allocation decisions for products that are simultaneously targeted at multiple user populations around the world. The values of 24 body measures at certain key percentiles were used to estimate the best-fitting anthropometric distributions for female and male adults in nine national populations, which were selected to represent the diverse target markets multinational companies must design for. These distributions were then used to synthesize body measure data for combined populations with a 1:1 female:male ratio. An anthropometric range metric (ARM) was proposed for assessing the variation of these body measures across the populations. At any percentile, ARM values were calculated as the percentage difference between the highest and lowest anthropometric values across the considered user populations. Based on their magnitudes, plots of ARM values computed between the 1st and 99th percentiles for each body measure were grouped into low, medium, and high categories. This classification of body measures was proposed as a means of selecting the most suitable strategies for designing raw-material-efficient products. The findings in this study and the contributions of subsequent work along these lines are expected to help achieve greater efficiencies in resource allocation in global product development.
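
The core arithmetic of the ARM is a one-line percentage spread across populations at a given percentile. A minimal sketch; the paper does not specify the denominator convention, so the lowest value is assumed here, and the stature figures are invented for illustration:

```python
def arm(values):
    """Anthropometric range metric at one percentile: the percentage
    difference between the highest and lowest values of a body measure
    across the considered populations (relative to the lowest value --
    the denominator convention is an assumption)."""
    lo, hi = min(values), max(values)
    return 100.0 * (hi - lo) / lo

# Invented 50th-percentile stature values (mm) for three populations.
spread = arm([1620.0, 1700.0, 1755.0])   # about 8.3% variation
```

Plotting such values at every percentile from the 1st to the 99th yields the curves that the paper bins into low, medium, and high categories.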

In a mining complex, the mine is a source of supply of valuable material (ore) to a number of processes that convert the raw ore to a saleable product or a metal concentrate for production of the refined metal. In this context, expected variation in metal content throughout the extent of the orebody defines the inherent uncertainty in the supply of ore, which impacts the subsequent ore and metal production targets. Traditional optimization methods for designing production phases and ultimate pit limit of an open pit mine not only ignore the uncertainty in metal content, but, in addition, commonly assume that the mine delivers ore to a single processing facility. A stochastic network flow approach is proposed that jointly integrates uncertainty in supply of ore and multiple ore destinations into the development of production phase design and ultimate pit limit. An application at a copper mine demonstrates the intricacies of the new approach. The case study shows a 14% higher discounted cash flow when compared to the traditional approach.

We report the implementation of molecular modeling approaches developed as a part of the 2016 Grand Challenge 2, the blinded competition of computer aided drug design technologies held by the D3R Drug Design Data Resource (https://drugdesigndata.org/). The challenge was focused on the ligands of the farnesoid X receptor (FXR), a highly flexible nuclear receptor of the cholesterol derivative chenodeoxycholic acid. FXR is considered an important therapeutic target for metabolic, inflammatory, bowel and obesity related diseases (Expert Opin Drug Metab Toxicol 4:523-532, 2015), but in the context of this competition it is also interesting due to the significant ligand-induced conformational changes displayed by the protein. To deal with these conformational changes we employed multiple simulations of molecular dynamics (MD). Our MD-based protocols were top-ranked in estimating the free energy of binding of the ligands and FXR protein. Our approach was ranked second in the prediction of the binding poses where we also combined MD with molecular docking and artificial neural networks. Our approach showed mediocre results for high-throughput scoring of interactions.

Multiple-Aperture Optical Telescopes (MAOTs) are a promising solution for very high resolution imaging. In the Michelson configuration, the instrument is made of sub-telescopes distributed in the pupil and combined by a common telescope via folding periscopes. The phasing conditions of the sub-pupils lead to specific optical constraints in these subsystems. The amplitude of main contributors to the wavefront error (WFE) is given as a function of high level requirements (such as field or resolution) and free parameters, mainly the sub-telescope type, magnification and diameter. It is shown that for the periscopes, the field-to-resolution ratio is the main design driver and can lead to severe specifications. The effect of sub-telescopes aberrations on the global WFE can be minimized by reducing their diameter. An analytical tool for the MAOT design has been derived from this analysis, illustrated and validated in three different cases: LEO or GEO Earth observation and astronomy with extremely large telescopes. The last two cases show that a field larger than 10 000 resolution elements can be covered with a very simple MAOT based on Mersenne paraboloid-paraboloid sub-telescopes. Michelson MAOTs are thus a solution to be considered for high resolution wide-field imaging, from space or ground.

.14 and 23.99% (±15.40), respectively. Comparison of the variances of qualitative and quantitative indices indicated significant differences in the difficulty index, differentiation index, reliability of the total test, and percentage of taxonomy II among faculties (P_value<0.001), but these differences were not observed in taxonomies I and III. The results of the Tukey multiple comparison test revealed a statistically significant increase in the reliability of the medical faculty tests (P_value=0.001) and a statistically significant decrease in the difficulty index of paramedical faculty tests compared to other faculties (P_value=0.041). Due to the lower differentiation index, the percentage of taxonomies II and III, and the percentage of questions with no structural problems compared to the standard criterion in some faculties, it is necessary to provide qualitative and quantitative feedback to the faculty members, as mentioned in previous studies (5), to promote their knowledge of designing multiple-choice questions as a tool for assessing students.
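
The difficulty and differentiation (discrimination) indices discussed above are conventionally computed from an upper and a lower scoring group. A sketch using the standard classical-test-theory formulas; the study's exact definitions and group sizes (commonly the top and bottom 27%) may differ:

```python
def item_indices(upper_correct, lower_correct, group_size):
    """Classical item-analysis indices for one multiple-choice question:
    difficulty = overall proportion correct across the two extreme
    groups; discrimination (differentiation) = difference between the
    upper- and lower-group proportions correct."""
    difficulty = (upper_correct + lower_correct) / (2 * group_size)
    discrimination = (upper_correct - lower_correct) / group_size
    return difficulty, discrimination

# 24/30 correct in the top group, 12/30 in the bottom group.
p, d = item_indices(upper_correct=24, lower_correct=12, group_size=30)
```

A question answered correctly by most of the upper group but few of the lower group gets a high discrimination index, which is the property found lacking in some faculties' tests.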

X-ray Computed Tomography (CT) allows visualisation of the physical structures in the interior of an object without physically opening or cutting it. This technology supports a wide range of applications in the non-destructive testing, failure analysis or performance evaluation of industrial products and components. Of the numerous factors that influence the performance characteristics of an X-ray CT system, the energy level in the X-ray spectrum to be used is one of the most significant. The ability of the X-ray beam to penetrate a given thickness of a specific material is directly related to the maximum available energy level in the beam. Higher energy levels allow penetration of thicker components made of denser materials. In response to local industry demand, and in support of on-going research activity in the area of 3D X-ray imaging for industrial inspection, the Singapore Institute of Manufacturing Technology (SIMTech) engaged in the design, development and integration of a large-scale multiple-source X-ray computed tomography system based on X-ray sources operating at higher energies than previously available in the Institute. The system consists of a large-area direct digital X-ray detector (410 x 410 mm), a multiple-axis manipulator system, a 225 kV open tube microfocus X-ray source and a 450 kV closed tube millifocus X-ray source. The 225 kV X-ray source can be operated in either transmission or reflection mode. The body of the 6-axis manipulator system is fabricated from heavy-duty steel onto which high-precision linear and rotary motors have been mounted in order to achieve high accuracy, stability and repeatability. A source-detector distance of up to 2.5 m can be achieved. The system is controlled by a proprietary X-ray CT operating system developed by SIMTech. The system currently can accommodate samples up to 0.5 x 0.5 x 0.5 m in size with weight up to 50 kg. These specifications will be increased to 1.0 x 1.0 x 1.0 m and 100 kg in future

One of the fundamental operations in sensor networks is convergecast which refers to the communication pattern in which data is collected from a set of sensor nodes and forwarded to a common end-point gateway, namely sink node, in the network. In case of multiple sinks within the network, the total

Highlights: • Thermodynamic model of a solar-dish Stirling engine was presented. • Thermal efficiency and output power of the engine were simultaneously maximized. • A final optimal solution was selected using several decision-making methods. • An optimal solution with least deviation from the ideal design was obtained. • Optimal solutions showed high sensitivity against variation of system parameters. - Abstract: A solar-powered high temperature differential Stirling engine was considered for optimization using multiple criteria. A thermal model was developed so that the output power and thermal efficiency of the solar Stirling system with finite rate of heat transfer, regenerative heat loss, conductive thermal bridging loss, finite regeneration process time and imperfect performance of the dish collector could be obtained. The output power and overall thermal efficiency were considered for simultaneous maximization. Multi-objective evolutionary algorithms (MOEAs) based on the NSGA-II algorithm were employed, while the solar absorber temperature and the highest and lowest temperatures of the working fluid were considered as the decision variables. The Pareto optimal frontier was obtained and a final optimal solution was selected using various decision-making methods, including the fuzzy Bellman–Zadeh, LINMAP and TOPSIS approaches. It was found that multi-objective optimization could yield results with a relatively low deviation from the ideal solution in comparison to the conventional single-objective approach. Furthermore, it was shown that, if the weight of thermal efficiency as one of the objective functions is considered to be greater than the weight of the power objective, a lower absorber temperature and a lower temperature ratio should be considered in the design of the Stirling engine.
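
Of the decision-making methods named above, TOPSIS is the simplest to sketch: it normalizes the Pareto front, forms ideal and anti-ideal points, and picks the solution with the highest relative closeness to the ideal. The weights and trade-off points below are illustrative, not taken from the paper:

```python
import math

def topsis(points, weights):
    """Pick the index of the Pareto-front point closest (in relative
    terms) to the ideal solution. Each point is (output power, thermal
    efficiency), both to be maximized. A sketch of the standard TOPSIS
    closeness index, not the paper's exact implementation."""
    # Vector-normalize each objective column, then apply the weights.
    norms = [math.sqrt(sum(p[j] ** 2 for p in points)) for j in range(2)]
    v = [[weights[j] * p[j] / norms[j] for j in range(2)] for p in points]
    ideal = [max(col) for col in zip(*v)]
    worst = [min(col) for col in zip(*v)]
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    closeness = [dist(p, worst) / (dist(p, ideal) + dist(p, worst)) for p in v]
    return max(range(len(points)), key=closeness.__getitem__)

# Hypothetical Pareto-optimal (power [kW], efficiency) trade-off points.
front = [(20.0, 0.38), (26.0, 0.34), (31.0, 0.27)]
best = topsis(front, weights=(0.5, 0.5))   # the compromise solution
```

With equal weights, neither extreme of the front wins; the middle point has the best balance of distance to the ideal and distance from the anti-ideal.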

The design of an ultra-miniaturized camera using 3D-printing technology, printed directly onto a complementary metal-oxide semiconductor (CMOS) imaging sensor, is presented in this paper. The 3D-printed micro-optics is manufactured using femtosecond two-photon direct laser writing, and the figure error, which can reach submicron accuracy, is suitable for the optical system. Because the size of the micro-level camera is approximately several hundred micrometers, the resolution is greatly reduced and limited by the Nyquist frequency of the pixel pitch. To improve the reduced resolution, a single lens can be replaced by multiple-aperture lenses with dissimilar fields of view (FOV); stitching sub-images with different FOVs can then achieve high resolution within the central region of the image. The reason is that the angular resolution of a lens with a smaller FOV is higher than that of one with a larger FOV, so after stitching, the angular resolution of the central area can be several times that of the outer area. For the same image circle, the image quality of the central area of the multi-lens system is significantly superior to that of a single lens. The foveated image obtained by stitching FOVs breaks the resolution limitation of the ultra-miniaturized imaging system, enabling applications such as biomedical endoscopy, optical sensing, and machine vision. In this study, an ultra-miniaturized camera with multi-aperture optics is designed and simulated for optimum optical performance.
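
The resolution argument above reduces to per-pixel angular sampling. A toy calculation under assumed numbers (none taken from the paper):

```python
def angular_resolution(fov_deg, pixels_across):
    """Per-pixel angular sampling of one aperture: with the same pixel
    count, a smaller-FOV lens samples finer angles."""
    return fov_deg / pixels_across

# Invented numbers: over the same 100-pixel sub-image, a 20-degree lens
# samples three times finer than a 60-degree lens, which is why the
# stitched central (foveated) region gains resolution.
ratio = angular_resolution(60.0, 100) / angular_resolution(20.0, 100)
```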

Full Text Available Abstract Background The cohort multiple randomised controlled trial (cmRCT) design provides an opportunity to incorporate the benefits of randomisation within clinical practice, thus reducing costs, integrating electronic healthcare records, and improving external validity. This study aims to address a key concern of the cmRCT design: refusal of treatment is only present in the intervention arm, and this may lead to bias and reduce statistical power. Methods We used simulation studies to assess the effect of this refusal, both random and related to event risk, on the bias of the effect estimator and on statistical power. A series of simulations were undertaken representing a cmRCT trial with a time-to-event endpoint. Intention-to-treat (ITT), per protocol (PP), and instrumental variable (IV) analysis methods, two stage predictor substitution and two stage residual inclusion, were compared for various refusal scenarios. Results We found the IV methods provide a less biased estimator for the causal effect when refusal is present in the intervention arm, with the two stage residual inclusion method performing best with regard to minimum bias and sufficient power. We demonstrate that sample sizes should be adapted based on expected and actual refusal rates in order to be sufficiently powered for IV analysis. Conclusion We recommend running both IV and ITT analyses in an individually randomised cmRCT, as it is expected that the effect size of interest, or the effect we would observe in clinical practice, would lie somewhere between those estimated with ITT and IV analyses. The optimum (in terms of bias and power) instrumental variable method was the two stage residual inclusion method. We recommend using adaptive power calculations, updating them as refusal rates are collected during the trial recruitment phase, in order to be sufficiently powered for IV analysis.
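
An adaptive power calculation of the kind recommended above can be approximated with the standard noncompliance inflation rule: diluting the effect by the compliance rate inflates the required sample size by roughly 1/compliance². This is a generic textbook approximation, not the formula used in the paper:

```python
import math

def adapted_sample_size(n_full_compliance, refusal_rate):
    """Inflate a trial's required sample size for refusal
    (non-compliance) in the intervention arm: the ITT effect is diluted
    roughly in proportion to the compliance rate, so the required n
    grows by about 1 / compliance**2. A crude approximation only."""
    compliance = 1.0 - refusal_rate
    return math.ceil(n_full_compliance / compliance ** 2)

# With 20% refusal, a 400-participant design must be noticeably larger;
# the rate would be re-estimated as refusals accrue during recruitment.
n = adapted_sample_size(400, 0.2)
```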

In order to solve the problems that chaos degenerates under limited computer precision and that the Cat map has a small key space, this paper presents a chaotic map based on topological conjugacy, whose chaotic characteristics are proved by the Devaney definition. To produce a large key space, a Cat map named the block Cat map is also designed for the permutation process, based on multiple-dimensional chaotic maps. The image encryption algorithm is based on permutation-substitution, and each key is controlled by different chaotic maps. Entropy analysis, differential analysis, weak-key analysis, statistical analysis, cipher randomness analysis, and cipher sensitivity analysis depending on the key and plaintext are introduced to test the security of the new image encryption scheme. Through comparison of the proposed scheme with the AES, DES and Logistic encryption methods, we conclude that the image encryption method solves the problem of the low precision of one-dimensional chaotic functions and offers higher speed and higher security.
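
The substitution stage of such permutation-substitution schemes is often a chaotic keystream XORed with the pixel bytes. A minimal sketch using the logistic map; the paper's actual scheme is based on topological conjugacy and a block Cat map permutation, both of which are omitted here:

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Byte keystream from the logistic map x -> r*x*(1-x), with r near
    4 for chaotic behaviour. Sketches only the substitution stage; the
    pixel-position permutation (block Cat map) is not shown."""
    x = x0
    for _ in range(burn_in):        # discard the transient
        x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_bytes(data, key):
    return bytes(b ^ k for b, k in zip(data, key))

pixels = bytes(range(16))           # toy "image" row
ks = logistic_keystream(x0=0.3567, r=3.99, n=len(pixels))
cipher = xor_bytes(pixels, ks)
recovered = xor_bytes(cipher, ks)   # XOR substitution is self-inverse
```

Decryption works because XOR with the same keystream is its own inverse; key sensitivity comes from the chaotic dependence of the keystream on (x0, r).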

In this work, an integrated contactless multiple hand feature acquisition system is designed. The system can capture palmprint, palm vein, and palm dorsal vein images simultaneously. Moreover, the images are captured in a contactless manner; that is, users need not touch any part of the device during capture. Palmprint is imaged under visible illumination, while palm vein and palm dorsal vein are imaged under near-infrared (NIR) illumination. The capture is computer-controlled and the whole process takes less than 1 second, which is sufficient for online biometric systems. Based on this device, this paper also implements a contactless hand-based multimodal biometric system. Palmprint, palm vein, palm dorsal vein, finger vein, and hand geometry features are extracted from the captured images. After similarity measurement, the matching scores are fused using a weighted sum fusion rule. Experimental results show that although the verification accuracy of each single modality is not as high as the state of the art, the fusion result is superior to most existing hand-based biometric systems. This result indicates that the proposed device is competent for the application of contactless multimodal hand-based biometrics.

Full Text Available The major focus of this work is to examine the dynamics of velocity amplification through pair-wise collisions between multiple masses in a chain, in order to develop useful machines. For instance, low-cost machines based on this principle could be used for detailed, very-high-acceleration shock testing of MEMS devices. A theoretical basis for determining the number and mass of intermediate stages in such a velocity amplifier, based on simple rigid body mechanics, is proposed. The influence of mass ratios and the coefficient of restitution on the optimisation of the system is identified and investigated. In particular, two cases are examined: in the first, the velocity of the final mass in the chain (that would have the object under test mounted on it) is maximised by defining the ratio of adjacent masses according to a power law relationship; in the second, the energy transfer efficiency of the system is maximised by choosing the mass ratios such that all masses except the final mass come to rest following impact. Comparisons are drawn between both cases and the results are used in proposing design guidelines for optimal shock amplifiers. It is shown that for most practical systems, a shock amplifier with mass ratios based on a power law relationship is optimal and can easily yield velocity amplifications of a factor of 5–8. A prototype shock testing machine built using the above principles is briefly introduced.
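
The pair-wise impact mechanics above follow from 1-D rigid-body collision: a mass m1 moving at speed v striking a resting mass m2 leaves m2 moving at v·m1(1+e)/(m1+m2), where e is the coefficient of restitution. A sketch with an assumed power-law mass chain (the specific masses are invented):

```python
def chain_final_velocity(masses, v0, e):
    """Velocity of the last mass after sequential pair-wise impacts
    along a chain. Each 1-D rigid-body impact with coefficient of
    restitution e sends a resting mass m2, struck by m1 at speed v,
    off at v * m1 * (1 + e) / (m1 + m2)."""
    v = v0
    for m1, m2 in zip(masses, masses[1:]):
        v *= m1 * (1.0 + e) / (m1 + m2)
    return v

# Power-law chain (each stage 1/4 the previous mass), perfectly elastic:
# every impact multiplies the velocity by (4/5) * 2 = 1.6, so the final
# mass moves at 1.6**3 ~ 4.1 times the input velocity after 3 impacts.
amplification = chain_final_velocity([64.0, 16.0, 4.0, 1.0], v0=1.0, e=1.0)
```

Adding more intermediate stages with the same mass ratio compounds the per-impact gain, which is how the 5–8x amplification quoted above is reached.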

This research aims to design e-learning resources with multiple representations based on a contextual approach for the Basic Physics Course. The research uses research and development methods following the Dick & Carey strategy. The development was carried out in the digital laboratory of the Physics Education Department, Mathematics and Science Faculty, Universitas Negeri Jakarta. The product development process with the Dick & Carey strategy produced an e-learning design for the Basic Physics Course presented in multiple representations within a contextual learning syntax. The representations used in the design of basic physics learning include: concept maps, video, figures, data tables of experiment results, charts of data tables, verbal explanations, mathematical equations, example problems and solutions, and exercises. The multiple representations are presented through the contextual learning stages: relating, experiencing, applying, transferring, and cooperating.

Control of multiple autonomous aircraft for search and exploration is a topic of current research interest for applications such as weather monitoring, geographical surveys, search and rescue, tactical reconnaissance, and extra-terrestrial exploration, and the need to distribute sensing is driven by considerations of efficiency, reliability, cost and scalability. Hence, this problem has been extensively studied in the fields of controls and artificial intelligence. The task of persistent surveillance differs from a coverage/exploration problem in that all areas need to be continuously searched, minimizing the time between visits to each region in the target space. This distinction does not allow a straightforward application of most exploration techniques to the problem, although ideas from these methods can still be used. The use of aerial vehicles is motivated by their ability to cover larger spaces and their relative insensitivity to terrain. However, the dynamics of Unmanned Air Vehicles (UAVs) adds complexity to the control problem. Most of the work in the literature decouples the vehicle dynamics and control policies, but their interaction is particularly interesting for a surveillance mission. Stochastic environments and UAV failures further enrich the problem by requiring the control policies to be robust, and this aspect is particularly important for hardware implementations. For a persistent mission, it becomes imperative to consider the range/endurance constraints of the vehicles. The coupling of the control policy with the endurance constraints of the vehicles is an aspect that has not been sufficiently explored. Design of UAVs for desirable mission performance is also an issue of considerable significance. The use of a single monolithic optimization for such a problem has practical limitations, and decomposition-based design is a potential alternative. In this research high-level control policies are devised, that are scalable, reliable

Incorporating the concept of TCP end-to-end congestion control into wireless networks is one of the primary concerns in designing ad hoc networks, since TCP was primarily designed and optimized based on assumptions for wired networks. In this study, our interest lies in tackling the TCP instability problem, and in particular the intra-flow instability problem, since due to the nature of applications in multihop ad hoc networks, connection instability or starvation even for a short period of time can have...

Full Text Available Video streaming is gaining importance, with the wide popularity of multimedia rich applications in the Internet. Video streams are delay sensitive and require seamless flow for continuous visualization. Properly designed buffers offer a solution to queuing delay. The diagonally opposite QoS metrics associated with video traffic poses an optimization problem, in the design of buffers. This paper is a continuation of our previous work [1] and deals with the design of buffers. It aims at finding the optimum buffer size for enhancing QoS offered to video traffic. Network-centric QoS provisioning approach, along with hybrid transport layer protocol approach is adopted, to arrive at an optimum size which is independent of RTT. In this combinational approach, buffers of routers and end devices are designed to satisfy the various QoS parameters at the transport layer. OPNET Modeler is used to simulate environments for testing the design. Based on the results of simulation it is evident that the hybrid transport layer protocol approach is best suited for transmitting video traffic as it supports the economical design.

Background For over two decades occupational therapists have been encouraged to enhance their roles within primary care and focus on health promotion and prevention activities. While there is a clear fit between occupational therapy and primary care, there have been few practice examples, despite a growing body of evidence to support the role. In 2010, the province of Ontario, Canada provided funding to include occupational therapists as members of Family Health Teams, an interprofessional model of primary care. The integration of occupational therapists into this model of primary care is one of the first large scale initiatives of its kind in North America. The objective of the study was to examine how occupational therapy services are being integrated into primary care teams and understand the structures supporting the integration. Methods A multiple case study design was used to provide an in-depth description of the integration of occupational therapy. Four Family Health Teams with occupational therapists as part of the team were identified. Data collection included in-depth interviews, document analyses, and questionnaires. Results Each Family Health Team had a unique organizational structure that contributed to the integration of occupational therapy. Communication, trust and understanding of occupational therapy were key elements in the integration of occupational therapy into Family Health Teams, and were supported by a number of strategies including co-location, electronic medical records and team meetings. An understanding of occupational therapy was critical for integration into the team and physicians were less likely to understand the occupational therapy role than other health providers. Conclusion With an increased emphasis on interprofessional primary care, new professions will be integrated into primary healthcare teams. The study found that explicit strategies and structures are required to facilitate the integration of a new professional group

Increasing student engagement through Electronic Response Systems (clickers) has been widely researched. Its success largely depends on the quality of multiple-choice questions used by instructors. This paper describes a pilot project that focused on the implementation of online collaborative multiple-choice question repository, PeerWise, in a…

Given the emergence of service-oriented architecture, IS students need to be knowledgeable of multiple server-side computer programming languages to be able to meet the needs of the job market. This paper outlines the pedagogy of an innovative course of multiple server-side computer languages for the undergraduate IS majors. The paper discusses…

The high-granularity photon multiplicity detector (PMD) is scheduled to take data at the Relativistic Heavy Ion Collider (RHIC) this year. A detailed scheme has been designed and implemented in an object-oriented programming framework using C++ for the monitoring and reconstruction of PMD data.

Full Text Available This paper deals with the development of an integration framework, and its implementation, for the connexion of CAx systems and multiple-view product modelling. The integration framework is presented at its conceptual level, and the implementation level is described with the current connexion of a functional modeller, a multiple-view product modeller, an optimisation module and a CAD system. The integration between the multiple-view product modeller and CATIA V5, based on the STEP standard, is described in detail. Finally, the presented work is discussed and future research developments are suggested.

This paper studies the energy efficient transmission and the power allocation problem for multiple two-way relay networks equipped with multi-input multi-output antennas where each relay employs an amplify-and-forward strategy. The goal

Full Text Available This paper investigates optimal reinsurance strategies for an insurer which cedes the insured risk to multiple reinsurers. Assume that the insurer and every reinsurer apply the coherent risk measures. Then, we find out the necessary and sufficient conditions for the reinsurance market to achieve Pareto optimum; that is, every ceded-loss function and the retention function are in the form of “multiple layers reinsurance.”

Full Text Available An approximate analytical formulation of the resource allocation problem for handling variable-bit-rate multiclass services in a cellular round-robin carrier-hopping multirate multicarrier direct-sequence code-division multiple-access (MC-DS-CDMA) system is presented. In this paper, all grade-of-service (GoS) or quality-of-service (QoS) requirements at the connection level, packet level, and link layer are satisfied simultaneously in the system, instead of being satisfied at the connection level or at the link layer only. The analytical formulation shows how the GoS/QoS requirements in the different layers are intertwined across the layers. A novelty of this paper is that outages in the subcarriers are minimized by spreading the subcarriers' signal-to-interference ratio evenly among all the subcarriers using a dynamic round-robin carrier-hopping allocation scheme. A complete sharing (CS) scheme with guard capacity is used as the resource sharing policy at the connection level, based on the mean rates of the connections. Numerical results illustrate that a significant gain in system utilization is achieved through the joint coupling of the connection/packet levels and the link layer.
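
A complete-sharing policy with guard capacity amounts to a one-line admission test on mean rates. A sketch; the paper does not say which traffic class the guard band protects, so priority (e.g., handoff) connections are assumed here:

```python
def admit(new_mean_rate, is_priority, used, capacity, guard):
    """Complete-sharing (CS) admission with guard capacity, applied to
    mean connection rates: ordinary new connections may only fill the
    system up to (capacity - guard), while priority connections (an
    assumption about what the guard band protects) may use the full
    capacity."""
    limit = capacity if is_priority else capacity - guard
    return used + new_mean_rate <= limit

# 10 units of capacity, 2 reserved: at a load of 7, a 2-unit ordinary
# request is blocked, but a priority request of the same size passes.
blocked = not admit(2.0, False, 7.0, 10.0, 2.0)
```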

Visible Light Communications (VLC) has become an emerging area of research since it can provide higher data transmission speed and wider bandwidth. White LEDs are very important components of a VLC system because of their higher brightness, lower power consumption, and longer lifetime; more importantly, their intensity and color are modulatable. Besides the light source, the optical antenna system also plays a very important role in a VLC system, since it determines the optical gain, effective working area and transmission rate of the system. In this paper, we propose the design of an ultra-thin, multiple-channel optical antenna system made by tiling multiple off-axis lenses, each of which consists of two reflective and two refractive freeform surfaces. Tiling multiple systems and detectors with different band filters makes it possible to design a wavelength-division-multiplexing VLC system to greatly improve the system capacity. The field of view of the designed antenna system is 30°, the entrance pupil diameter is 1.5 mm, and the thickness of the system is under 4 mm. The design methods are presented and the results are discussed in the last section of this paper; in addition, the optical gain is analyzed and calculated. The antenna system can be tiled with up to four channels without any increase in thickness.

Full Text Available In Very Large Scale Integrated Circuit (VLSI) design, existing Design-for-Test (DFT)-based watermarking techniques usually insert the watermark by reordering scan cells, which causes large resource overhead, low security and a low coverage rate of watermark detection. A novel scheme is proposed to watermark multiple scan chains in DFT to solve these problems. The proposed scheme adopts the DFT scan test model of VLSI design and uses a Linear Feedback Shift Register (LFSR) for pseudo-random test vector generation. All of the test vectors are shifted in at the scan input for the construction of multiple scan chains with minimum correlation. Specific registers in the multiple scan chains are changed by the watermark circuit to watermark the design. The watermark can be effectively detected without interfering with the normal function of the circuit, even after the chip is packaged. The experimental results on several ISCAS benchmarks show that the proposed scheme has lower resource overhead and probability of coincidence, and a higher coverage rate of watermark detection, compared with existing methods.
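
An LFSR of the kind used above for pseudo-random test vector generation can be sketched in a few lines. The feedback polynomial below (x^16 + x^14 + x^13 + x^11 + 1, maximal length) is a textbook choice, not necessarily the one used in the paper:

```python
def lfsr_vectors(seed, count, width=16):
    """Fibonacci LFSR generating pseudo-random scan-test vectors.
    The feedback bit XORs taps corresponding to the polynomial
    x^16 + x^14 + x^13 + x^11 + 1, which is maximal length
    (period 2**16 - 1), so no state repeats within a period."""
    state = seed
    vectors = []
    for _ in range(count):
        vectors.append(state)
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << (width - 1))
    return vectors

# Five 16-bit test vectors from a non-zero seed; in a scan-test flow
# these would be shifted in at the scan input.
vecs = lfsr_vectors(seed=0xACE1, count=5)
```

A non-zero seed is required: the all-zero state is a fixed point of any LFSR.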

A new multiple access scheme, Waveform Division Multiple Access (WDMA), based on orthogonal wavelet functions, is presented. After studying the correlation properties of different categories of single wavelet functions, the one with the best correlation property is chosen as the foundation for a combined waveform. In the communication system, each user is assigned a different combined orthogonal waveform. As demonstrated by simulation, a combined waveform is more suitable than a single wavelet function as a communication medium in a WDMA system. Due to the excellent orthogonality, the bit error rate (BER) of multiple users with combined waveforms is very close to that of a single user in a synchronous system; that is to say, the multiple access interference (MAI) is almost eliminated. Furthermore, even in an asynchronous system without multiuser detection after the matched filters, the results are still satisfactory when using the third combination mode described in the study.
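
The orthogonality property underlying the MAI claim can be checked with a zero-lag cross-correlation (inner product) between waveforms. A toy example with Haar-like chips standing in for the paper's combined wavelet waveforms:

```python
def xcorr0(a, b):
    """Zero-lag cross-correlation (inner product) of two discrete
    waveforms; zero means the waveforms are orthogonal."""
    return sum(x * y for x, y in zip(a, b))

# Two orthogonal Haar-like chips (illustrative stand-ins for combined
# wavelet waveforms): in a synchronous system, one user's matched
# filter rejects the other user entirely, eliminating the MAI.
w1 = [1, 1, -1, -1]
w2 = [1, -1, 1, -1]
mai = xcorr0(w1, w2)        # cross-correlation: 0, no interference
peak = xcorr0(w1, w1)       # autocorrelation peak used for detection
```

In an asynchronous system the correlations at non-zero lags also matter, which is why the choice of combination mode affects the BER.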

We describe the first simulations of stratospheric sulfate aerosol geoengineering using multiple injection locations to meet multiple simultaneous surface temperature objectives. Simulations were performed using CESM1(WACCM), a coupled atmosphere-ocean general circulation model with fully interactive stratospheric chemistry, dynamics (including an internally generated quasi-biennial oscillation), and a sophisticated treatment of sulfate aerosol formation, microphysical growth, and deposition. The objectives are defined as maintaining three temperature features at their 2020 levels against a background of the RCP8.5 scenario over the period 2020-2099. These objectives are met using a feedback mechanism in which the rate of sulfur dioxide injection at each of the four locations is adjusted independently every year of simulation. Even in the presence of uncertainties, nonlinearities, and variability, the objectives are met, predominantly by SO2 injection at 30°N and 30°S. By the last year of simulation, the feedback algorithm calls for a total injection rate of 51 Tg SO2 per year. The injections are not in the tropics, which results in a greater degree of linearity of the surface climate response with injection amount than has been found in many previous studies using injection at the equator. Because the objectives are defined in terms of annual mean temperature, the required geoengineering results in "overcooling" during summer and "undercooling" during winter. The hydrological cycle is also suppressed as compared to the reference values corresponding to the year 2020. The demonstration we describe in this study is an important step toward understanding what geoengineering can do and what it cannot do.
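
The yearly feedback adjustment of injection rates can be illustrated with a simple proportional-integral law. The controller actually used in the study is more sophisticated, and the gains here are invented:

```python
def update_injection(rate, temp_error, kp, ki, integral):
    """One yearly update of an SO2 injection rate [Tg/yr] under a
    simple proportional-integral feedback law. temp_error is the
    observed temperature minus its 2020-level target (K); a positive
    error (too warm) raises the injection rate. Gains kp, ki are
    illustrative, not the study's; rates cannot go negative."""
    integral += temp_error
    new_rate = max(0.0, rate + kp * temp_error + ki * integral)
    return new_rate, integral

# Warmer than target by 0.5 K: the feedback raises the injection rate.
rate, integ = update_injection(10.0, 0.5, kp=2.0, ki=1.0, integral=0.0)
```

Running one such controller per injection latitude, each against its own temperature objective, mirrors the independent per-location adjustment described above.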

This paper demonstrates how to better utilize hammer capacity by modifying the die design so that a forging hammer can manufacture more than one connecting rod in a given forging cycle time. To modify the die design, a study was carried out to understand the parameters required for forging die design. Considering these parameters, the forging die was designed using the solid modelling tool Solid Edge. The new design can produce two connecting rods on a hammer of the same capacity. The new design must be validated by verifying complete filling of metal in the die cavities without any defects. For this verification, the analysis tool DEFORM 3D is used in this project. Before validation, the generated 3D models must be converted to STL format for import into DEFORM 3D. After import, the designs are analysed for material flow into the cavities and for the energy required to produce two connecting rods with the new forging die design. It is found that the forging die design fills properly without any defects, and the energy graph shows that the forging energy required to produce two connecting rods is within the limit of the hammer capacity. Implementation of this project increases the production of connecting rods by 200% in less than the previous cycle time.

Satellite solar observatories have always been of central importance to heliophysics; while there have been numerous such missions, the solar poles have been extremely under-observed. This paper proposes using low-thrust propulsion as well as multiple gravity assists to reach the enormous energies required

Full Text Available It has been shown that multiple trellis codes can perform better than conventional trellis codes over AWGN channels, at the cost of additional computations per trellis branch. Multiple trellis coded multi-h CPM schemes have been shown in the literature to have attractive power-bandwidth performance at the expense of increased receiver complexity. In this method, the multi-h format is associated with a specific pattern and repeated, rather than cyclically changed in time over successive symbol intervals, resulting in a longer effective error-event length and better performance. It is well known that rate (n-1)/n multiple trellis codes combined with 2^n-level CPM have good power-bandwidth performance. In this paper, a scheme combining rate 1/2 and 2/3 multiple trellis codes with 4- and 8-level multi-h CPM is shown to achieve a better power-bandwidth upper bound than the corresponding single-h scheme.

Flexible pipes for production of oil and gas typically present a corrugated inner surface. This has been identified as the cause of "singing risers": Flow-Induced Pulsations due to the interaction of sound waves with the shear layers at the small cavities present at each of the multiple

Brand experience is an important concept in marketing because it can affect brand loyalty, brand recall, and brand attitude. Brand experience design is therefore an important practice for companies to create favourable and meaningful experiences, through the design of various touchpoints that are in

This paper aims at presenting a multiple objective model to evaluate the attractiveness of the use of demand resources (through load management control actions) by different stakeholders and in diverse structure scenarios in electricity systems. For the sake of model flexibility, the multiple (and conflicting) objective functions of technical, economical and quality of service nature are able to capture distinct market scenarios and operating entities that may be interested in promoting load management activities. The computation of compromise solutions is made by resorting to evolutionary algorithms, which are well suited to tackle multiobjective problems of combinatorial nature herein involving the identification and selection of control actions to be applied to groups of loads. (Author)

The system architecture and test results of the custom circuits and beam test system for the Multiplicity-Vertex Detector (MVD) for the PHENIX detector collaboration at the Relativistic Heavy Ion Collider (RHIC) are presented in this paper. The final detector per-channel signal processing chain will consist of a preamplifier-gain stage, a current-mode summed multiplicity discriminator, a 64-deep analog memory (simultaneous read-write), a post-memory analog correlator, and a 10-bit 5 μs ADC. The Heap Manager provides all timing control, data buffering, and data formatting for a single 256-channel multi-chip module (MCM). Each chip set is partitioned into 32-channel sets. Beam test (16-cell deep memory) performance for the various blocks is presented, as well as the ionizing radiation damage performance of the 1.2 μ n-well CMOS process used for preamplifier fabrication.

The technique and uses of the multiple blade slurry (MBS) saw are considered. Multiple bands of steel are arranged in a frame, and the frame is reciprocated so that the steel bands bear against a workpiece while abrasive is simultaneously applied at the point of contact. The blades wear slots in the workpiece and progress through the piece, yielding several parts or wafers. The transition from diamond slicing to MBS is justified by savings resulting from minimized kerf losses, minimized subsurface damage, and improved surface quality off the saw. This allows wafering much closer to finished thickness specifications. The current state-of-the-art MBS technology must be significantly improved if the low cost solar array (LSA) goals are to be attained. It is concluded that although MBS will never be the answer to every wafering requirement, the economical production of wafers to LSA project specifications will be achieved.

Full Text Available Investigating differences between means of more than two groups or experimental conditions is a routine research question addressed in biology. In order to assess differences statistically, multiple comparison procedures are applied. The most prominent procedures of this type, the Dunnett and Tukey-Kramer tests, control the probability of reporting at least one false positive result when the data are normally distributed and when the sample sizes and variances do not differ between groups. All three assumptions are rarely realistic in biological research, and any violation leads to an increased number of reported false positive results. Based on a general statistical framework for simultaneous inference and robust covariance estimators, we propose a new statistical multiple comparison procedure for assessing multiple means. In contrast to the Dunnett or Tukey-Kramer tests, no assumptions regarding the distribution, sample sizes or variance homogeneity are necessary. The performance of the new procedure is assessed by means of its familywise error rate and power under different distributions. The practical merits are demonstrated by a reanalysis of fatty acid phenotypes of the bacterium Bacillus simplex from the "Evolution Canyons" I and II in Israel. The simulation results show that even under severely varying variances, the procedure controls the number of false positive findings very well. Thus, the procedure presented here works well under biologically realistic scenarios of unbalanced group sizes, non-normality and heteroscedasticity.
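
As a simplified stand-in for such heteroscedasticity-robust comparisons (not the authors' simultaneous inference framework), pairwise Welch tests with Satterthwaite degrees of freedom and a Bonferroni adjustment drop the equal-variance and equal-sample-size assumptions; the normal tail is used here as a large-df approximation to the t tail:

```python
import math
from itertools import combinations

def welch(x, y):
    """Welch t statistic and Satterthwaite df (no equal-variance assumption)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sx, sy = vx / nx, vy / ny
    t = (mx - my) / math.sqrt(sx + sy)
    df = (sx + sy) ** 2 / (sx ** 2 / (nx - 1) + sy ** 2 / (ny - 1))
    return t, df

def norm_sf(z):
    """Standard normal survival function (large-df approximation to the t tail)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# hypothetical groups with unequal sizes and variances
groups = {"A": [1.0, 2.0, 3.0, 4.0],
          "B": [5.0, 7.0, 9.0],
          "C": [2.0, 2.5, 3.0, 3.5, 2.8]}
pairs = list(combinations(groups, 2))
results = {}
for a, b in pairs:
    t, df = welch(groups[a], groups[b])
    p = min(1.0, len(pairs) * 2.0 * norm_sf(abs(t)))  # Bonferroni-adjusted
    results[(a, b)] = (t, df, p)
    print(f"{a} vs {b}: t = {t:+.3f}, df = {df:.1f}, adjusted p ≈ {p:.4f}")
```

The paper's procedure replaces the Bonferroni step with a multivariate reference distribution built from robust covariance estimators, which is less conservative while still controlling the familywise error rate.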

There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful, but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability was demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.

Full Text Available on constraints programming satisfaction technology is proposed. The algorithm is tested in OPNET simulation environment using different network models derived from a hypothetical case study of an optical network design for Bellville area in Cape Town, South...

A conceptual design of the control room layout for a nuclear power plant with multiple modular high temperature gas-cooled reactors has been developed. The modular high temperature gas-cooled reactors may need to be grouped to produce as much energy as a utility demands in order to realize economic efficiency. There are many differences between the multi-modular plant and current NPPs in the control room. These differences may include the staffing level, the human-machine interface design, the operation mode, etc. The potential human factors engineering (HFE) challenges in the control room of the multi-modular plant are analyzed, including the operator workload of multi-modular tasks, how to help the crew maintain situation awareness of all modules, how to support teamwork, the control of systems shared between modules, etc. A conceptual design of the control room for the multi-modular plant is presented based on the design of HTR-PM (High temperature gas-cooled reactor pebble bed module). HFE issues are considered in the conceptual design, and some design strategies are presented. As a novel conceptual design, verification and validation are needed, and the focus of further work is sketched out. (author)

The desire to build websites to analyze and display ever increasing amounts of scientific data and images pushes for website designs which utilize large displays and use the display area as efficiently as possible. Yet scientists and users of their data increasingly wish to access these websites in the field and on mobile devices. This results in the need to develop websites that can support a wide range of devices and screen sizes, and optimally use whatever display area is available. Historically, designers have addressed this issue by building two websites: one for mobile devices and one for desktop environments, resulting in increased cost, duplication of work, and longer development times. Recent advancements in web design technology and techniques allow for the development of a single website that dynamically adjusts to the type of device being used to browse the website (smartphone, tablet, desktop). In addition, they provide the opportunity to truly optimize whatever display area is available. HTML5 and CSS3 give web designers media query statements which allow style sheets to be aware of the size of the display being used, and to format web content differently based upon the queried response. Web elements can be rendered in a different size or position, or even removed from the display entirely, based upon the size of the display area. Using HTML5/CSS3 media queries in this manner is referred to as "Responsive Web Design" (RWD). RWD in combination with technologies such as LESS and Twitter Bootstrap allows the web designer to build websites which not only dynamically respond to the browser display size being used, but do so in very controlled and intelligent ways, ensuring that good layout and graphic design principles are followed. At the University of Alaska Fairbanks, the Alaska Satellite Facility SAR Data Center (ASF) recently redesigned their popular Vertex application and converted it from a

Full Text Available We take a cross-layer optimization approach to study energy-efficient data transport in coalition-based wireless sensor networks, where neighboring nodes are organized into groups to form coalitions and sensor nodes within one coalition carry out cooperative communications. In particular, we investigate two network models: (1) many-to-one sensor networks, where data from one coalition are transmitted to the sink directly, and (2) multihop sensor networks, where data are transported by intermediate nodes to reach the sink. For the many-to-one network model, we propose three schemes for data transmission from a coalition to the sink. In scheme 1, one node in the coalition is selected randomly to transmit the data; in scheme 2, the node with the best channel condition in the coalition transmits the data; and in scheme 3, all the nodes in the coalition transmit in a cooperative manner. Next, we investigate energy balancing with cooperative data transport in multihop sensor networks. Built on the above coalition-aided data transmission schemes, optimal coalition planning is then carried out in multihop networks, in the sense that unequal coalition sizes are applied to minimize the difference in energy consumption among sensor nodes. Numerical analysis reveals that energy efficiency can be improved significantly by the coalition-aided transmission schemes, and that energy balancing across the sensor nodes can be achieved with the proposed coalition structures.
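
The three coalition transmission schemes can be compared with a Monte Carlo sketch, assuming a hypothetical path-loss model in which the energy to deliver a packet at a fixed target SNR is inversely proportional to the effective channel gain:

```python
import random

def transmit_energy(gains, scheme, rng):
    """Energy to deliver one packet at a fixed target SNR, taken here as
    proportional to 1/effective-gain (illustrative model, not the paper's)."""
    if scheme == "random":        # scheme 1: a randomly chosen member transmits
        return 1.0 / rng.choice(gains)
    if scheme == "best":          # scheme 2: member with the best channel transmits
        return 1.0 / max(gains)
    return 1.0 / sum(gains)       # scheme 3: all members transmit cooperatively

rng = random.Random(7)
# hypothetical coalitions of 4 nodes; gains bounded away from zero
trials = [[0.2 + rng.expovariate(1.0) for _ in range(4)] for _ in range(20000)]
avg = {s: sum(transmit_energy(g, s, rng) for g in trials) / len(trials)
       for s in ("random", "best", "cooperative")}
for s, e in avg.items():
    print(f"{s:11s} mean energy: {e:.3f}")
```

Because the cooperative scheme pools all gains while the best-channel scheme uses only the largest one, the ordering cooperative < best < random holds pointwise under this model, mirroring the energy-efficiency gains reported in the paper.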

In this paper, we investigate the effect of haptic cueing on a human operator's performance in the field of bilateral teleoperation of multiple mobile robots, particularly multiple unmanned aerial vehicles (UAVs). Two aspects of human performance are deemed important in this area, namely, the maneuverability of mobile robots and the perceptual sensitivity of the remote environment. We introduce metrics that allow us to address these aspects in two psychophysical studies, which are reported here. Three fundamental haptic cue types were evaluated. The Force cue conveys information on the proximity of the commanded trajectory to obstacles in the remote environment. The Velocity cue represents the mismatch between the commanded and actual velocities of the UAVs and can implicitly provide a rich amount of information regarding the actual behavior of the UAVs. Finally, the Velocity+Force cue is a linear combination of the two. Our experimental results show that, while maneuverability is best supported by the Force cue feedback, perceptual sensitivity is best served by the Velocity cue feedback. In addition, we show that large gains in the haptic feedbacks do not always guarantee an enhancement in the teleoperator's performance.

In this paper, Freed is presented: a system that enables design students to spatially organize their digital collections, define relations between collection content and reflect on it. The system features a force-based layout that allows students to explore spatial organizations, and hence to gain new

the cell), which is a major performance limiting parameter in 5G networks. This article sheds light on the drastic negative-impact of intercell interference on the NOMA performance and advocates interference-aware NOMA design that jointly accounts for both

The purpose of this study was to examine the effectiveness of universally designed (UD) instruction on strategic learning in an online, interactive learning environment (ILE). The research focused on the premise that the customizable, media-based framework of UD instruction might influence diverse online learning strategies. This study…

to design quality soft sensors for cement kiln processes using data collected from a simulator and a plant log system. Preliminary results reveal that the WPLS approach is able to provide accurate one-step-ahead prediction. The regularized data lifting technique predicts the product quality of cement kiln...

In this paper we design a simultaneous three-bunch-length operating mode for the HLS-II (Hefei Light Source II) storage ring by installing two harmonic cavities and minimizing the momentum compaction factor. The short bunches (2.6 mm) presented in this work will meet the requirements of coherent millimeter-wave and sub-THz radiation experiments, while the long bunches (20 mm) will efficiently increase the total beam current. Therefore, this multiple-bunch-length operating mode allows present synchrotron users and coherent millimeter-wave (or sub-THz) users to carry out their experiments simultaneously. Owing to the relatively low beam energy of HLS-II, we achieve the multiple-bunch-length operating mode without multicell superconducting RF cavities, which makes it technically feasible.

The FASTSAT-HSV01 spacecraft is a microsatellite with magnetic torque rods as its sole attitude control actuators. FASTSAT's multiple payloads and mission functions require the Attitude Control System (ACS) to maintain Local Vertical Local Horizontal (LVLH)-referenced attitudes without spin-stabilization, while keeping the pointing errors for some attitudes significantly smaller than the best previously demonstrated for this type of control system. The mission requires the ACS to hold multiple stable, unstable, and non-equilibrium attitudes, as well as eject a 3U CubeSat from an onboard P-POD and recover from the ensuing tumble. This paper describes the Attitude Control System, the reasons for design choices, how the ACS integrates with the rest of the spacecraft, and gives recommendations for potential future applications of the work.

The Advanced Test Reactor (ATR) is a high power density test reactor specializing in fuel and materials irradiation. For more than 45 years, the ATR has provided irradiation of materials and fuels for testing along with radioisotope production. Should unforeseen circumstances lead to the decommissioning of ATR, the U.S. Government would be left without a large-scale materials irradiation capability to meet the needs of its nuclear energy and naval reactor missions. In anticipation of this possibility, work was performed under the Laboratory Directed Research and Development (LDRD) program to investigate test reactor concepts that could satisfy the current missions of the ATR along with an expanded set of secondary missions. A survey was conducted in order to catalogue the anticipated needs of potential customers. Then, concepts were evaluated to fill the role for this reactor, dubbed the Multi-Application Thermal Reactor Irradiation eXperiments (MATRIX). The baseline MATRIX design is expected to be capable of longer cycle lengths than ATR given a particular batch scheme. The volume of test space in In-Pile-Tubes (IPTs) is larger in MATRIX than in ATR, with a comparable magnitude of neutron flux. Furthermore, MATRIX has more locations of greater volume having high fast neutron flux than ATR. From the analyses performed in this work, it appears that the lead MATRIX concept can meet the anticipated needs of the ATR replacement reactor. However, this design is still quite immature, and any requirements currently met must be re-evaluated as the design is developed further.

One important aspect of physics instruction is helping students develop better problem solving expertise. Besides enhancing the content knowledge, problems help students develop different cognitive abilities and skills. This presentation focuses on multiple-possibility problems (alternatively called ill-structured problems). These problems are different from traditional "end of chapter" single-possibility problems. They do not have one right answer, and thus the student has to examine different possibilities and assumptions and evaluate the outcomes. To solve such problems one has to engage in a kind of cognitive monitoring called epistemic cognition. It is an important part of thinking in real life. Physicists routinely use epistemic cognition when they solve problems. I have explored the instructional value of using such problems in introductory physics courses.

This paper studies the energy efficient transmission and the power allocation problem for multiple two-way relay networks equipped with multi-input multi-output antennas where each relay employs an amplify-and-forward strategy. The goal is to minimize the total power consumption without degrading the quality of service of the terminals. In our analysis, we start by deriving closed-form expressions of the optimal powers allocated to terminals. We then employ a strong optimization tool based on the particle swarm optimization technique to find the optimal power allocated at each relay antenna. Our numerical results illustrate the performance of the proposed scheme and show that it achieves a sub-optimal solution very close to the optimal one.
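
A minimal particle swarm optimizer of the kind used for the per-antenna power allocation can be sketched as follows; the objective is a toy stand-in (total power plus a quadratic QoS penalty) with hypothetical channel gains, not the paper's exact formulation:

```python
import random

def pso(objective, dim, bounds, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimizer (illustrative, not the paper's variant)."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # per-particle best positions
    pval = [objective(x) for x in xs]
    gval = min(pval)
    g = pbest[pval.index(gval)][:]             # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and attraction weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d] + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (g[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = objective(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    g, gval = xs[i][:], val
    return g, gval

# Toy stand-in objective: total relay power plus a penalty when the weighted
# sum of antenna powers falls below a hypothetical QoS threshold.
def total_power(p, gains=(0.9, 0.6, 0.8, 0.5), qos=1.0):
    snr = sum(g * x for g, x in zip(gains, p))
    return sum(p) + 100.0 * max(0.0, qos - snr) ** 2

best_p, best_val = pso(total_power, dim=4, bounds=(0.0, 5.0))
print(f"power vector ≈ {[round(x, 2) for x in best_p]}, objective ≈ {best_val:.3f}")
```

As in the paper, the swarm converges toward the minimum-power allocation that just satisfies the QoS constraint, here by loading the antenna with the largest assumed gain.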

Full Text Available This paper reports on a novel three-dimensional chaotic system with three nonlinearities. Depending on the parameters, the system has one stable equilibrium; two stable equilibria and one saddle node; or two saddle foci and one saddle node. One salient feature of this novel system is its multiple attractors arising from different initial values. With the change of parameters, the system experiences mono-stability, bi-stability, mono-periodicity, bi-periodicity, one strange attractor, and two coexisting strange attractors. The complex dynamic behaviors of the system are revealed by analyzing the corresponding equilibria and using numerical simulation. In addition, an electronic circuit is given for implementing the chaotic attractors of the system. Using the new chaotic system, an S-Box is developed for cryptographic operations. Moreover, we test the performance of this S-Box and compare it with existing S-Box studies.

Full Text Available BACKGROUND: Over the past decade many linkage studies have defined chromosomal intervals containing polymorphisms that modulate a variety of traits. Many phenotypes are now associated with enough mapping data that meta-analysis could help refine locations of known QTLs and detect many novel QTLs. METHODOLOGY/PRINCIPAL FINDINGS: We describe a simple approach to combining QTL mapping results from multiple studies and demonstrate its utility using two hippocampus weight loci. Using data taken from two populations, a recombinant inbred strain set and an advanced intercross population, we demonstrate considerable improvements in significance and resolution for both loci. 1-LOD support intervals were improved 51% for Hipp1a and 37% for Hipp9a. We first generate locus-wise permuted P-values for association with the phenotype from multiple maps, which can be done using a permutation method appropriate to each population. These results are then assigned to defined physical positions by interpolation between markers with known physical and genetic positions. We then use Fisher's combination test to combine position-by-position probabilities among experiments. Finally, we calculate genome-wide combined P-values by generating locus-specific P-values for each permuted map for each experiment. These permuted maps are then sampled with replacement and combined. The distribution of best locus-specific P-values for each combined map is the null distribution of genome-wide adjusted P-values. CONCLUSIONS/SIGNIFICANCE: Our approach is applicable to a wide variety of segregating and non-segregating mapping populations, facilitates rapid refinement of physical QTL position, is complementary to other QTL fine mapping methods, and provides an appropriate genome-wide criterion of significance for combined mapping results.
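
The Fisher's combination step at a given position can be sketched directly: for k p-values, the statistic -2 Σ ln p_i follows a χ² distribution with 2k degrees of freedom under the null, and the χ² survival function has a closed form for even degrees of freedom:

```python
import math

def fisher_combine(pvalues):
    """Fisher's combination test: X² = -2 Σ ln p_i ~ χ²(2k) under H0."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # χ² survival function for even df = 2k:
    #   sf(x) = exp(-x/2) · Σ_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total

# two experiments reporting (hypothetical) p-values at the same physical position
print(f"combined p ≈ {fisher_combine([0.08, 0.12]):.4f}")
```

Two individually marginal p-values combine to a smaller joint p-value, which is how the method sharpens evidence at positions supported by multiple mapping populations; the paper then obtains genome-wide significance by repeating this over permuted maps.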

Most studies on composites assume that the constituent phases have different values of stiffness. Little attention has been paid to the effect of constituent phases having distinct Poisson's ratios. This research focuses on a concurrent optimization method for simultaneously designing composite structures and materials with distinct Poisson's ratios. The proposed method aims to minimize the mean compliance of the macrostructure with a given mass of base materials. In contrast to the traditional interpolation of the stiffness matrix through numerical results, an interpolation scheme of the Young's modulus and Poisson's ratio using different parameters is adopted. The numerical results demonstrate that the Poisson effect plays a key role in reducing the mean compliance of the final design. An important contribution of the present study is that the proposed concurrent optimization method can automatically distribute base materials with distinct Poisson's ratios between the macrostructural and microstructural levels under a single constraint of the total mass.
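
The separate interpolation of Young's modulus and Poisson's ratio can be sketched as follows; the penalization powers and base materials are illustrative, not the paper's values:

```python
import numpy as np

def interpolate_properties(x, mats, p_E=3.0, p_nu=1.0):
    """Interpolate Young's modulus with a SIMP-style penalization power p_E
    and Poisson's ratio with a separate power p_nu, so the two properties
    are controlled by different parameters. `x` is the volume fraction of
    material 2 in the element (0..1)."""
    (E1, nu1), (E2, nu2) = mats
    E = E1 + (x ** p_E) * (E2 - E1)
    nu = nu1 + (x ** p_nu) * (nu2 - nu1)
    return E, nu

def plane_stress_C(E, nu):
    """Plane-stress constitutive matrix assembled from interpolated E and ν."""
    f = E / (1.0 - nu ** 2)
    return f * np.array([[1.0, nu, 0.0],
                         [nu, 1.0, 0.0],
                         [0.0, 0.0, (1.0 - nu) / 2.0]])

# two hypothetical base materials with equal stiffness but distinct Poisson's ratios
mats = [(1.0, 0.1), (1.0, 0.45)]
E, nu = interpolate_properties(0.5, mats)
C = plane_stress_C(E, nu)
print(f"E = {E}, nu = {nu:.3f}")
```

Interpolating E and ν with different exponents, rather than interpolating the stiffness matrix as a whole, is what lets the optimizer exploit the Poisson effect independently of stiffness, as the paper emphasizes.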

Full Text Available A Field Programmable Gate Array (FPGA) based system is a great hardware platform to support the implementation of hardware controllers such as PID and fuzzy controllers. It can also be programmed as a hardware accelerator to speed up mathematical calculation and greatly enhance performance, as applied to motor drives and motion control. Furthermore, the open structure of an FPGA-based system is suitable for designs requiring parallel processing or an embedded soft-core processor. In this paper, we apply an FPGA to a multi-axis velocity controller design. The developed system integrates three functions inside the FPGA chip: the stepping motor drive, the multi-axis motion controller and the motion planning. Furthermore, an embedded controller with a soft-core processor compatible with the 8051 micro-control unit (MCU) is built to handle the data transfer between the FPGA board and a host PC. The MCU is also used to initialize the motion control and run the interpolator. The designed system is practically applied to an XYZ motion platform driven by stepping motors to verify its performance.

Trials which test the effectiveness of interventions compared with the status quo frequently encounter challenges. The cohort multiple randomised controlled trial (cmRCT) design is an innovative approach to the design and conduct of pragmatic trials which seeks to address some of these challenges. In this article, we report our experiences with the first completed randomised controlled trial (RCT) using the cmRCT design. This trial, the Depression in South Yorkshire (DEPSY) trial, involved comparison of treatment as usual (TAU) with TAU plus the offer of an intervention for people with self-reported long-term moderate to severe depression. In the trial, we used an existing large population-based cohort: the Yorkshire Health Study. We discuss our experiences with recruitment, attrition, crossover, data analysis, generalisability of results, and cost. The main challenges in using the cmRCT design were the high crossover to the control group and the lower questionnaire response rate among patients who refused the offer of treatment. However, the design did help facilitate efficient and complete recruitment of the trial population, as well as analysable data that were generalisable to the population of interest. Attrition rates were also smaller than those reported in other depression trials. This first completed full trial using the cmRCT design to test an intervention for self-reported depression was associated with a number of important benefits. Further research is required to compare the acceptability and cost-effectiveness of the standard pragmatic RCT design with the cmRCT design. ISRCTN registry: ISRCTN02484593. Registered on 7 Jan 2013.

Full Text Available This paper presents a general description of local flexibility markets as a market-based management mechanism for aggregators. The high penetration of distributed energy resources introduces new flexibility services like prosumer or community self-balancing, congestion management and time-of-use optimization. This work is focused on the flexibility framework to enable multiple participants to compete for selling or buying flexibility. In this framework, the aggregator acts as a local market operator and supervises flexibility transactions of the local energy community. Local market participation is voluntary. Potential flexibility stakeholders are the distribution system operator, the balance responsible party and end-users themselves. Flexibility is sold by means of loads, generators, storage units and electric vehicles. Finally, this paper presents needed interactions between all local market stakeholders, the corresponding inputs and outputs of local market operation algorithms from participants and a case study to highlight the application of the local flexibility market in three scenarios. The local market framework could postpone grid upgrades, reduce energy costs and increase distribution grids’ hosting capacity.
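
A toy merit-order clearing illustrates how an aggregator acting as local market operator might match flexibility offers against a request; the participants, volumes and prices below are hypothetical:

```python
# Toy clearing of a local flexibility market by merit order (illustrative;
# participant names, volumes and prices are hypothetical, not the paper's case study).
offers = [  # (participant, MW of flexibility offered, price in EUR/MWh)
    ("EV fleet", 2.0, 30.0),
    ("battery", 1.5, 45.0),
    ("flexible load", 3.0, 60.0),
]
demand = 4.0  # MW of flexibility requested, e.g. by the DSO for congestion relief

dispatched, cost = [], 0.0
for name, mw, price in sorted(offers, key=lambda o: o[2]):  # cheapest first
    take = min(mw, demand - sum(q for _, q in dispatched))
    if take <= 0:
        break
    dispatched.append((name, take))
    cost += take * price
print(dispatched, cost)
```

A real local market operator would additionally validate grid constraints and settle with the balance responsible party, but the cheapest-first matching above captures the basic buyer/seller interaction the framework enables.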

For the design of a MEMS accelerometer, proper performance indices should be defined and employed. Performance indices are obtained using either an experimental method or a numerical method. In the present study, a vibration analysis model of a MEMS accelerometer is introduced to calculate three performance indices: sensitivity, measurable acceleration range, and measurable frequency range. The accuracy of the vibration analysis model is first validated by comparing its modal and transient results with those of a commercial finite element code. Measurable acceleration and frequency ranges versus allowable errors for electrical and mechanical sensitivities are obtained, and the effects of system parameter variations on the three performance indices are investigated.
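A mass-spring-damper idealization is the usual starting point for this kind of vibration analysis; the sketch below is an assumption made here for illustration, not the study's model. It shows how a static mechanical sensitivity and a usable frequency range versus an allowable error band can be computed.

```python
import math

# Minimal sketch (not the study's model): a MEMS accelerometer idealized as a
# mass-spring-damper. Static sensitivity is 1/wn^2 (displacement per unit
# acceleration), and the measurable frequency range is the band over which the
# normalized frequency response stays within an allowable error of its DC value.

def static_sensitivity(wn):
    return 1.0 / wn**2  # metres per (m/s^2)

def normalized_response(r, zeta):
    # |x/a| relative to its DC value, with r = omega / omega_n
    return 1.0 / math.sqrt((1.0 - r**2)**2 + (2.0 * zeta * r)**2)

def usable_bandwidth_ratio(zeta, allowable_error, steps=10000):
    # largest r such that the response deviates from 1 by <= allowable_error
    for i in range(1, steps):
        r = i / steps
        if abs(normalized_response(r, zeta) - 1.0) > allowable_error:
            return (i - 1) / steps
    return 1.0
```

For a damping ratio of 0.7 and a 5% allowable error, the usable bandwidth comes out near 0.59 of the natural frequency, illustrating the sensitivity-bandwidth trade-off the performance indices capture.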

We present the architecture and code design for a highly scalable, 2.5 Gb/s per user optical code division multiple access (OCDMA) system. The system is scalable to 100 potential and more than 10 simultaneous users, each with a bit error rate (BER) of less than 10^-9. The system architecture uses fast wavelength-hopping, time-spreading codes. Unlike frequency- and phase-sensitive coherent OCDMA systems, this architecture utilizes standard on-off keyed optical pulses allocated in the time and wavelength dimensions. This incoherent OCDMA approach is compatible with existing WDM optical networks and utilizes off-the-shelf components. We discuss the novel optical subsystem design for encoders and decoders that enables the realization of a highly scalable incoherent OCDMA system with rapid reconfigurability. A detailed analysis of the scalability of the two-dimensional code is presented and select network deployment architectures for OCDMA are discussed (Authors)
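The scalability of two-dimensional wavelength-hopping codes rests on bounding how often any two users' patterns collide. The construction below is a generic one-coincidence illustration built from modular arithmetic over a prime, not the paper's code set.

```python
# Illustrative one-coincidence wavelength-hopping construction (hypothetical,
# not the authors' code design): user k transmits on wavelength (k*t) mod p in
# time slot t. For distinct users k1 != k2, the patterns coincide only where
# (k1 - k2)*t = 0 mod p, i.e. at t = 0, so cross-correlation is bounded by 1.

def hop_pattern(k, p):
    return [(k * t) % p for t in range(p)]

def max_cross_correlation(p):
    worst = 0
    for k1 in range(1, p):
        for k2 in range(k1 + 1, p):
            a, b = hop_pattern(k1, p), hop_pattern(k2, p)
            hits = sum(1 for x, y in zip(a, b) if x == y)
            worst = max(worst, hits)
    return worst
```

Bounded cross-correlation is what keeps multiple-access interference, and hence BER, under control as simultaneous users are added.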

A simple and general approach for designing practical all-optical (all-fiber) arbitrary-order time differentiators is introduced here for the first time. Specifically, we demonstrate that the Nth time derivative of an input optical waveform can be obtained by reflection of this waveform in a single uniform fiber Bragg grating (FBG) incorporating N π-phase shifts properly located along its grating profile. The general design procedure of an arbitrary-order optical time differentiator based on a multiple-phase-shifted FBG is described and numerically demonstrated for up to fourth-order time differentiation. Our simulations show that the proposed approach can provide optical operation bandwidths in the tens-of-GHz regime using readily feasible FBG structures.
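The target operation such a grating approximates is a spectral filter with transfer function (jω)^N. The sketch below applies that ideal filter numerically with an FFT; it is only the target response, not a model of the grating itself.

```python
import numpy as np

# Sketch of the ideal Nth-order time differentiator the FBG approximates:
# multiply the signal spectrum by (j*omega)**N and transform back.

def spectral_derivative(signal, dt, order=1):
    omega = 2 * np.pi * np.fft.fftfreq(len(signal), d=dt)
    spectrum = np.fft.fft(signal)
    return np.fft.ifft((1j * omega) ** order * spectrum)
```

Applied to a Gaussian envelope, the first-order output reproduces the analytic derivative to numerical precision, which is the behaviour a single π-phase-shifted grating emulates over its operation bandwidth.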

The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence, and especially on a correct comparison between the results provided by different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets, and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is important to use a statistical approach to indicate whether the differences between such algorithms are statistically significant. Furthermore, our results with three real complex datasets report different best models than the previously published methodology. Our final goal is to provide a complete, stepwise methodology for comparing the results obtained in Computational Intelligence problems, as well as in other fields such as bioinformatics and cheminformatics, given that our proposal is open and modifiable.
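One simple way to test whether two models' results differ significantly, of the kind this methodology calls for, is a paired sign-flip permutation test on fold-wise errors. The sketch below is illustrative and independent of the RRegrs package.

```python
import random

# Illustrative sketch (not the RRegrs implementation): a paired sign-flip
# permutation test on per-fold error differences between two regression models.
# Under the null hypothesis the sign of each paired difference is arbitrary,
# so random sign flips give the reference distribution.

def paired_permutation_test(errs_a, errs_b, n_perm=10000, seed=0):
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(errs_a, errs_b)]
    observed = abs(sum(diffs))
    hits = 0
    for _ in range(n_perm):
        s = sum(d if rng.random() < 0.5 else -d for d in diffs)
        if abs(s) >= observed:
            hits += 1
    return hits / n_perm  # two-sided p-value estimate
```

A consistent 0.5-RMSE gap over eight folds yields a small p-value, whereas identical error vectors give p = 1, so "best model" claims can be backed by a significance statement rather than a raw ranking.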

Full Text Available The paper presents a rubric to help evaluate the quality of research projects. The rubric was applied in a competition across a variety of disciplines during a two-day research symposium at one institution in the southwest region of the United States of America. It was collaboratively designed by a faculty committee at the institution and was administered to 204 undergraduate, master's, and doctoral oral presentations by approximately 167 different evaluators. No training or norming of the rubric was given to 147 of the evaluators prior to the competition. The findings of the inter-rater reliability analysis reveal substantial agreement among the judges, which contradicts literature suggesting that formal norming must occur before substantial levels of inter-rater reliability can be achieved. By presenting the rubric along with the methodology used in its design and evaluation, it is hoped that others will find this to be a useful tool for evaluating documents and for teaching research methods.
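Agreement statistics of the kind this reliability analysis relies on can be illustrated with Cohen's kappa for two raters (the study itself involved many evaluators; the data below are hypothetical).

```python
# Illustrative Cohen's kappa for two judges scoring the same presentations:
# kappa = (observed agreement - chance agreement) / (1 - chance agreement).
# Data and rater pairing are hypothetical examples, not the study's data.

def cohens_kappa(ratings_a, ratings_b):
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    p_obs = sum(1 for a, b in zip(ratings_a, ratings_b) if a == b) / n
    p_exp = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (p_obs - p_exp) / (1 - p_exp)
```

Perfect agreement gives kappa = 1, while agreement no better than chance gives kappa = 0; "substantial agreement" conventionally corresponds to values above roughly 0.6.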

Residents, developers and civic officials are often faced with difficult decisions about appropriate land uses in and around metropolitan boundaries. Urban expansion brings with it the potential for negative environmental impacts, but there are alternatives, such as conservation subdivision design (CSD) or low-impact development (LID), which offer the possibility of mitigating some of these effects at the development site. Many urban planning jurisdictions across the Midwest do not currently have any examples of these designs and lack information to identify public support or barriers to use of these methods. This is a case study examining consumer value for conservation and low-impact design features in one housing market by using four different valuation techniques to estimate residents' willingness to pay for CSD and LID features in residential subdivisions. A contingent valuation survey of 1804 residents in Ames, IA assessed familiarity with and perceptions of subdivision development and used an ordered value approach to estimate willingness to pay for CSD and LID features. A majority of residents were not familiar with CSD or LID practices. Residents indicated a willingness to pay for most CSD and LID features with the exception of clustered housing. Gender, age, income, familiarity with LID practices, perceptions of attractiveness of features and the perceived effect of CSD and LID features on ease of future home sales were important factors influencing residents' willingness to pay. A hypothetical referendum measured willingness to pay for tax-funded conservation land purchases and estimated that a property tax of around $50 would be the maximum increase that would pass. Twenty-seven survey respondents participated in a subsequent series of experimental real estate negotiations that used an experimental auction mechanism to estimate willingness to pay for CSD and LID features. Participants indicated that clustered housing (with interspersed preserved forest

A fundamental challenge for PET block detector designs is to deploy finer crystal elements while limiting the number of readout channels. The standard Anger-logic scheme including light sharing (an 8 by 8 crystal array coupled to a 2×2 photodetector array with an optical diffuser, multiplexing ratio: 16:1) has been widely used to address such a challenge. Our work proposes a generalized model to study the impacts of two critical parameters on spatial resolution performance of a PET block detector: multiple interaction events and signal-to-noise ratio (SNR). The study consists of the following three parts: (1) studying light output profile and multiple interactions of 511 keV photons within crystal arrays of different crystal widths (from 4 mm down to 1 mm, constant height: 20 mm); (2) applying the Anger-logic positioning algorithm to investigate positioning/decoding uncertainties (i.e., "block effect") in terms of peak-to-valley ratio (PVR), with light sharing, multiple interactions and photodetector SNR taken into account; and (3) studying the dependency of spatial resolution on SNR in the context of modulation transfer function (MTF). The proposed model can be used to guide the development and evaluation of a standard Anger-logic based PET block detector including: (1) selecting/optimizing the configuration of crystal elements for a given photodetector SNR; and (2) predicting to what extent additional electronic multiplexing may be implemented to further reduce the number of readout channels.
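The classic Anger-logic centroid estimate for a 2×2 photodetector array can be sketched in a few lines. This is the textbook positioning formula only; the paper's model additionally accounts for light sharing, multiple-interaction events and photodetector SNR.

```python
# Minimal sketch of Anger-logic positioning for a 2x2 photodetector array:
# signals A, B read the top row (left, right) and C, D the bottom row.
# The crystal of interaction is estimated from the light-share centroid.

def anger_position(A, B, C, D):
    total = A + B + C + D
    x = ((B + D) - (A + C)) / total  # right minus left, normalized
    y = ((A + B) - (C + D)) / total  # top minus bottom, normalized
    return x, y
```

Multiple interactions and noise blur these centroids, which is exactly the "block effect" the model quantifies through peak-to-valley ratio and MTF.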

The engineering of Kerr interactions is of great interest for processing quantum information in multipartite quantum systems and for investigating many-body physics in a complex cavity-qubit network. We study how coupling multiple different types of superconducting qubits to the same cavity modes can be used to modify the self- and cross-Kerr effects acting on the cavities and demonstrate that this type of architecture could be of significant benefit for quantum technologies. Using both analytical perturbation theory results and numerical simulations, we first show that coupling two superconducting qubits with opposite anharmonicities to a single cavity enables the effective self-Kerr interaction to be diminished, while retaining the number splitting effect that enables control and measurement of the cavity field. We demonstrate that this reduction of the self-Kerr effect can maintain the fidelity of coherent states and generalised Schrödinger cat states for much longer than typical coherence times in realistic devices. Next, we find that the cross-Kerr interaction between two cavities can be modified by coupling them both to the same pair of qubit devices. When one of the qubits is tunable in frequency, the strength of entangling interactions between the cavities can be varied on demand, forming the basis for logic operations on the two modes. Finally, we discuss the feasibility of producing an array of cavities and qubits where intermediary and on-site qubits can tune the strength of self- and cross-Kerr interactions across the whole system. This architecture could provide a way to engineer interesting many-body Hamiltonians and be a useful platform for quantum simulation in circuit quantum electrodynamics.
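The cancellation mechanism described here can be sketched at leading order in dispersive perturbation theory. The expressions below are schematic only: the couplings g_i, detunings Δ_i and anharmonicities α_i are generic symbols, not parameters from the study.

```latex
% Schematic leading-order scaling of the qubit-induced cavity self-Kerr:
K_{\mathrm{cav}} \;\sim\; \sum_i \alpha_i \left(\frac{g_i}{\Delta_i}\right)^{4},
\qquad
K_{\mathrm{cav}} \approx 0
\quad\text{when}\quad
\alpha_1 \left(\frac{g_1}{\Delta_1}\right)^{4} \approx -\,\alpha_2 \left(\frac{g_2}{\Delta_2}\right)^{4}.
```

Because the dispersive (number-splitting) shifts scale with (g_i/Δ_i)^2 rather than (g_i/Δ_i)^4, the two cancellation conditions generically differ, consistent with the claim that control and measurement of the cavity field are retained at the Kerr-free operating point.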

HTR50S is a small modular reactor system based on HTGR. It is designed for a triad of applications to be implemented in successive stages. In the first stage, a base plant for heat and power is constructed using fuel proven in JAEA's 950 °C, 30 MWt test reactor HTTR and a conventional steam turbine to minimize development risk. While the outlet temperature is lowered to 750 °C for the steam turbine, thermal power is raised to 50 MWt by enabling 40% greater power density in a 20% taller core than the HTTR; however, the fuel temperature limit and reactor pressure vessel diameter are kept unchanged. In the second stage, a new fuel that is currently under development at JAEA will allow the core outlet temperature to be raised to 900 °C for the purpose of demonstrating more efficient gas turbine power generation and high temperature heat supply. The third stage adds a demonstration of nuclear-heated hydrogen production by a thermochemical process. A licensing approach for coupling a high temperature industrial process to a nuclear reactor will be developed. The low initial risk and the high longer-term potential for performance expansion make the HTR50S attractive as a multipurpose industrial or distributed energy source.

This study proposes the replacement of all the physical devices used in the manufacturing of conventional prostheses with digital tools, such as 3D scanners, CAD design software, 3D implant files, rapid prototyping machines and reverse engineering software, in order to develop laboratory working models from which to finish coatings for dental prostheses. Different types of dental prosthetic structures are used, which were adjusted by a non-rotatory threaded fixing system. From a digital process, the relative positions of dental implants, soft tissue and adjacent teeth of edentulous or partially edentulous patients have been captured, and a master working model which accurately replicates data relating to the patient's oral cavity has been obtained through treatment of three-dimensional digital data. Compared with the conventional master cast, the results show significant cost savings in attachments, as well as an increase in the quality of reproduction and accuracy of the master cast, with a consequent reduction in the number of patient consultation visits. The combination of software and hardware three-dimensional tools allows the optimization of the planning protocol for dental implant-supported rehabilitations, improving the predictability of clinical treatments and producing cost savings for master casts for restorations upon implants.

Soft robots are made of soft materials and have good flexibility and infinite degrees of freedom in theory. These properties enable soft robots to work in narrow space and adapt to external environment. In this paper, a 2-DOF soft pneumatic actuator is introduced, with two chambers symmetrically distributed on both sides and a jamming cylinder along the axis. Fibers are used to constrain the expansion of the soft actuator. Experiments are carried out to test the performance of the soft actuator, including bending and elongation characteristics. A soft robot is designed and fabricated by connecting four soft pneumatic actuators to a 3D-printed board. The soft robotic system is then established. The pneumatic circuit is built by pumps and solenoid valves. The control system is based on the control board Arduino Mega 2560. Relay modules are used to control valves and pressure sensors are used to measure pressure in the pneumatic circuit. Experiments are conducted to test the performance of the proposed soft robot.
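The relay-and-pressure-sensor loop described can be sketched as a bang-bang (on/off) regulator. The function below is a hypothetical illustration of that control logic, not the authors' Arduino program; the setpoint band and command names are assumptions.

```python
# Hypothetical sketch of on/off valve control for one pneumatic chamber:
# open the inlet valve when measured pressure falls below a setpoint band,
# vent when it rises above, and hold otherwise, mirroring a relay-driven
# solenoid-valve arrangement with a pressure sensor in the circuit.

def valve_command(pressure_kpa, setpoint_kpa, band_kpa=5.0):
    if pressure_kpa < setpoint_kpa - band_kpa:
        return "inflate"  # energize inlet-valve relay
    if pressure_kpa > setpoint_kpa + band_kpa:
        return "vent"     # energize exhaust-valve relay
    return "hold"         # both valves closed
```

The dead band keeps the solenoid valves from chattering around the setpoint, a common design choice for simple pneumatic regulation.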

Aim This study proposes the replacement of all the physical devices used in the manufacturing of conventional prostheses with digital tools, such as 3D scanners, CAD design software, 3D implant files, rapid prototyping machines and reverse engineering software, in order to develop laboratory working models from which to finish coatings for dental prostheses. Different types of dental prosthetic structures are used, which were adjusted by a non-rotatory threaded fixing system. Method From a digital process, the relative positions of dental implants, soft tissue and adjacent teeth of edentulous or partially edentulous patients have been captured, and a master working model which accurately replicates data relating to the patient's oral cavity has been obtained through treatment of three-dimensional digital data. Results Compared with the conventional master cast, the results show significant cost savings in attachments, as well as an increase in the quality of reproduction and accuracy of the master cast, with a consequent reduction in the number of patient consultation visits. The combination of software and hardware three-dimensional tools allows the optimization of the planning protocol for dental implant-supported rehabilitations, improving the predictability of clinical treatments and producing cost savings for master casts for restorations upon implants. PMID:26696528

Full Text Available Urban drainage systems that incorporate elements of green infrastructure (SuDS/GI) are central features in Blue-Green and Sponge Cities. Such approaches provide effective stormwater management whilst generating a range of other benefits. However, these benefits often occur coincidentally and are not developed or maximised in the original design. Of all the benefits that may accrue, the relevant dominant benefits relating to specific locations and socio-environmental circumstances need to be established, so that flood management functions can be co-designed with these wider benefits to ensure both are achieved during system operation. The paper reviews a number of tools which can evaluate the multiple benefits of SuDS/GI interventions in a variety of ways and introduces new concepts of benefit intensity and benefit profile. Examples of how these concepts can be applied are provided in a case study of proposed SuDS/GI assets in the central area of Newcastle, UK. Ways in which SuDS/GI features can be actively extended to develop desired relevant dominant benefits are discussed, e.g., by (i) careful consideration of tree and vegetation planting to trap air pollution; (ii) extending linear SuDS systems such as swales to enhance urban connectivity of green space; and (iii) managing green roofs for the effective attenuation of noise or carbon sequestration. The paper concludes that more pro-active development of multiple benefits is possible through careful co-design to achieve the full extent of urban enhancement that SuDS/GI schemes can offer.

A novel hydrogel having hydrophobic oligo segments and hydrophilic poly(acrylamidoglycolic acid) (PAGA) as pH responsive polymer segments was designed and synthesized to be used as a soft biomaterial. Poly(trimethylene carbonate) (PTMC) as the side chain, for which the degrees of polymerization were 9, 19, and 49, and the composition ratios were 1, 5, and 10 mol%, was used as the oligo segment in the hydrogel. The swelling ratio of the hydrogel was investigated under various changes in conditions such as pH, temperature, and hydrogen bonding upon urea addition. Under pH 2–11 conditions, the graft gel reversibly swelled and shrank due to the effect of the PAGA main chain. The interior morphology and skin layer of the hydrogel were observed by scanning electron microscopy. The hydrogel composed of PAGA as the hydrophilic polymer backbone had a sponge-like structure, with a pore size of approximately 100 μm. On the other hand, upon increasing the ratio of trimethylene carbonate (TMC) units in the hydrogel, the pores became smaller or disappeared. Moreover, the thickness of the skin layer increased significantly, and the swelling ratio depended on the incorporation ratio of the PTMC macromonomer. Molecular incorporation in the hydrogel was evaluated using a dye as a model drug molecule. These features would play an important role in drug loading. Increasing the ratio of TMC units favored the adsorption of the dye and activation of the incorporation behavior. - Highlights: • Hydrogen bonding and hydrophobic interaction are dominant factors in forming hydrogels. • Hydrogel properties were tuned by changing the graft length and macromonomer content in the feed. • The resulting graft gel could encapsulate and retain organic dye in the hydrogel. • The poly(trimethylene carbonate) segment was the dominant unit of the hydrogel.

Solid state NMR (SSNMR) experiments on heteronuclei in natural abundance are described for three synthetically designed tripeptides Piv-(L)Pro-(L)Pro-(L)Phe-OMe (1), Piv-(D)Pro-(L)Pro-(L)Phe-OMe (2), and Piv-(D)Pro-(L)Pro-(L)Phe-NHMe (3). These peptides exist in different conformations as shown by solution state NMR and single crystal X-ray analysis (Chatterjee et al., Chem Eur J 2008, 14, 6192). In this study, SSNMR has been used to probe the conformations of these peptides in their powder form. The (13)C spectrum of peptide (1) showed doubling of resonances corresponding to the cis/cis form, unlike in solution where similar doubling is attributed to the cis/trans form. This has been confirmed by the chemical shift differences of the C(beta) and C(gamma) carbons of proline in peptide (1) in both solution and SSNMR. Peptides (2) and (3) provided a single set of resonances representing the all-trans form across the di-proline segment. The results are in agreement with the X-ray analysis. Solid state (15)N resonances, especially from proline residues, provided additional information which is normally not observable in solution state NMR. (1)H chemical shifts are also obtained from a two-dimensional heteronuclear correlation experiment between (1)H--(13)C. The results confirm the utility of NMR as a useful tool for identifying different conformers in peptides in the solid state. (c) 2009 Wiley Periodicals, Inc. Biopolymers 91: 851-860, 2009.

An enhanced mechanical design of multiple zone plates precision alignment apparatus for hard x-ray focusing in a twenty-nanometer scale is provided. The precision alignment apparatus includes a zone plate alignment base frame; a plurality of zone plates; and a plurality of zone plate holders, each said zone plate holder for mounting and aligning a respective zone plate for hard x-ray focusing. At least one respective positioning stage drives and positions each respective zone plate holder. Each respective positioning stage is mounted on the zone plate alignment base frame. A respective linkage component connects each respective positioning stage and the respective zone plate holder. The zone plate alignment base frame, each zone plate holder and each linkage component is formed of a selected material for providing thermal expansion stability and positioning stability for the precision alignment apparatus.

In exact analogy with their electronic counterparts, photonic temporal integrators are fundamental building blocks for constructing all-optical circuits for ultrafast information processing and computing. In this work, we introduce a simple and general approach for realizing all-optical arbitrary-order temporal integrators. We demonstrate that the N(th) cumulative time integral of the complex field envelope of an input optical waveform can be obtained by simply propagating this waveform through a single uniform fiber/waveguide Bragg grating (BG) incorporating N pi-phase shifts along its axial profile. We derive here the design specifications of photonic integrators based on multiple-phase-shifted BGs. We show that the phase shifts in the BG structure can be arbitrarily located along the grating length provided that each uniform grating section (sections separated by the phase shifts) is sufficiently long so that its associated peak reflectivity reaches nearly 100%. The resulting designs are demonstrated by numerical simulations assuming all-fiber implementations. Our simulations show that the proposed approach can provide optical operation bandwidths in the tens-of-GHz regime using readily feasible photo-induced fiber BG structures.
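The ideal operation such a grating approximates is a filter with transfer function (jω)^(−N), i.e. the Nth cumulative integral. The sketch below computes that integral directly in the time domain as a conceptual check; it is not a model of the Bragg grating itself.

```python
import numpy as np

# Sketch of the ideal Nth-order photonic temporal integrator's target
# operation: repeated cumulative integration of the complex field envelope,
# here approximated with a running Riemann sum.

def cumulative_integral(signal, dt, order=1):
    out = np.asarray(signal, dtype=float)
    for _ in range(order):
        out = np.cumsum(out) * dt
    return out
```

Integrating a cosine recovers the sine to within the Riemann-sum discretization error, the behaviour a π-phase-shifted grating reproduces optically over its operation bandwidth.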

Full Text Available Due to its simple mechanical structure and high motion stability, the H-shaped platform has been increasingly widely used in precision measuring, numerical control machining and semiconductor packaging equipment, etc. The H-shaped platform is normally driven by multiple (three) permanent magnet synchronous linear motors. The main challenges for H-shaped platform control include synchronous control between the two linear motors in the Y direction as well as the total positioning error of the platform mover, a combination of position deviations in the X and Y directions. To deal with the above challenges, this paper proposes a control strategy based on the inverse system method through state feedback and dynamic decoupling of the thrust force. First, mechanical dynamics equations have been deduced through the analysis of system coupling based on the platform structure. Second, the mathematical model of the linear motors and the relevant coordinate transformation between dq-axis currents and ABC-phase currents are analyzed. Third, after the main concept of the inverse system method is explained, the inverse system model of the platform control system is designed after defining relevant system variables. The inverse system model compensates the original nonlinear coupled system into a pseudo-linear decoupled system, to which typical linear control methods, like PID, can be applied. The simulation model of the control system is built in MATLAB/Simulink and the simulation result shows that the designed control system has both small synchronous deviation and small total trajectory tracking error. Furthermore, the control program has been run on an NI controller for both fixed-loop-time and free-loop-time modes, and the test result shows that the average loop computation time needed is rather small, which makes it suitable for real industrial applications. Overall, it proves that the proposed new control strategy can be used in
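The ABC-to-dq coordinate transformation mentioned for the motor currents can be sketched as an amplitude-invariant Clarke transform followed by a Park rotation by the electrical angle. This is the standard textbook form, offered here as a sketch rather than the paper's exact convention.

```python
import math

# Sketch of the abc -> dq transformation for PMSM (linear motor) phase
# currents: amplitude-invariant Clarke transform to the stationary
# alpha-beta frame, then a Park rotation by the electrical angle theta.

def abc_to_dq(ia, ib, ic, theta):
    alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    beta = (ib - ic) / math.sqrt(3.0)
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q
```

For a balanced three-phase set aligned with the rotor angle, the transform yields constant d = 1 and q = 0, which is what makes dq-frame current control (and the thrust decoupling built on it) tractable.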

Application of land surface/hydrologic models within an operational flood forecasting system can provide probable time of occurrence and magnitude of streamflow at specific locations along a stream. Creating time-varying spatial extent of flood inundation and depth requires the use of a hydraulic or hydrodynamic model. Models differ in representing river geometry and surface roughness which can lead to different output depending on the particular model being used. The result from a single hydraulic model provides just one possible realization of the flood extent without capturing the uncertainty associated with the input or the model parameters. The objective of this study is to compare multiple hydraulic models toward generating ensemble flood inundation extents. Specifically, relative performances of four hydraulic models, including AutoRoute, HEC-RAS, HEC-RAS 2D, and LISFLOOD are evaluated under different geophysical conditions in several locations across the United States. By using streamflow output from the same hydrologic model (SWAT in this case), hydraulic simulations are conducted for three configurations: (i) hindcasting mode by using past observed weather data at daily time scale in which models are being calibrated against USGS streamflow observations, (ii) validation mode using near real-time weather data at sub-daily time scale, and (iii) design mode with extreme streamflow data having specific return periods. Model generated inundation maps for observed flood events both from hindcasting and validation modes are compared with remotely sensed images, whereas the design mode outcomes are compared with corresponding FEMA generated flood hazard maps. The comparisons presented here will give insights on probable model-specific nature of biases and their relative advantages/disadvantages as components of an operational flood forecasting system.

Full Text Available A periodic leaky-wave antenna (LWA) design based on low-loss substrate-integrated waveguide (SIW) technology with inset half-wave microstrip antennas is presented. The developed LWA operates in the V-band between 50 and 70 GHz and has been fabricated using standard printed circuit board (PCB) technology. The presented LWA is highly functional and very compact, supporting 1D beam steering and multibeam operation with only a single radio frequency (RF) feeding port. Within the operational 50–70 GHz bandwidth, the LWA scans through broadside, providing over 40° of H-plane beam steering. When operated within the 57–66 GHz band, the maximum steering angle is 18.2°. The maximum gain of the fabricated LWAs is 15.4 dBi with only a small gain variation of ±1.5 dB across the operational bandwidth. The beam steering and multibeam capability of the fabricated LWA is further utilized to support mobile users in a 60 GHz hot-spot. For a single user, a maximum wireless on-off keying (OOK) data rate of 2.5 Gbit/s is demonstrated. Multibeam operation is achieved using the LWA in combination with multiple dense wavelength division multiplexing (WDM) channels and remote optical heterodyning. Experimentally, multibeam operation supporting three users within a 57–66 GHz hot-spot with a total wireless cell capacity of 3 Gbit/s is achieved.

The recent development of semiconductor technology and the widespread use of power electronic devices in power systems have opened the era of power system harmonics, owing to the increasing penetration of non-linear loads. Harmonics are widely acknowledged as one of the most important power quality issues and must be eliminated to maintain power system reliability. Tolerable THD (Total Harmonic Distortion) values must be kept within the well-defined limits recognized by the IEEE-519 standard. In this work, in order to eliminate the current harmonics produced by non-linear loads, a six-pulse multiplication converter technique in conjunction with an STSSHPE (Single Tuned Shunt Harmonic Passive Filter) is proposed. The proposed model has the capacity to cancel the dominant 3rd-order harmonics; in addition, the 5th and 7th order harmonics are reduced to a negligible level. The hardware model has been experimentally tested with a PQA (Power Quality Analyzer), and a simulation model was designed using MATLAB software. The acquired results were evaluated in terms of the THD of current and voltage and compared against the IEEE-519 performance limits. The proposed model successfully bounds the total harmonic distortion within the limits defined by the IEEE-519 standard. (author)
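
The THD values reported by the analyzer and the MATLAB model follow directly from the harmonic magnitudes. A minimal sketch of the computation (the RMS values below are invented for illustration, not measurements from the paper):

```python
import math

def thd_percent(fundamental, harmonics):
    """Total harmonic distortion (%) of a current or voltage waveform,
    given the RMS magnitude of the fundamental and of each harmonic."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Illustrative RMS currents (A): fundamental plus 3rd, 5th, 7th harmonics.
before = thd_percent(10.0, [3.0, 1.5, 1.0])          # uncompensated load
after = thd_percent(10.0, [0.2, 0.3, 0.2])           # after cancellation + filter
print(round(before, 1), round(after, 1))             # 35.0 4.1
```

With the invented numbers, the compensated THD drops below the 5% figure commonly associated with IEEE-519 current distortion limits; the actual limit depends on the short-circuit ratio of the installation.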

fossil fuels to biofuels. In many ways biomass is a unique renewable resource. It can be stored and transported relatively easily, in contrast to renewable options such as wind and solar, which create intermittent electrical power that requires immediate consumption and a connection to the grid. This thesis presents two different models for the design optimization of a biomass-to-biorefinery logistics system through bio-inspired metaheuristic optimization considering multiple types of feedstocks. This work compares the performance and solutions obtained by two types of metaheuristic approaches: the genetic algorithm and ant colony optimization. Compared to rigorous mathematical optimization methods or iterative algorithms, metaheuristics do not guarantee that a global optimal solution can be found on some classes of problems. Problems with characteristics similar to the one presented in this thesis have previously been solved using linear programming, integer programming and mixed-integer programming methods. However, depending on the type of problem, these mathematical (complete) methods may need exponential computation time in the worst case, which often leads to computation times too high for practical purposes. Therefore, this thesis develops two metaheuristic approaches for the design optimization of a biomass-to-biorefinery logistics system considering multiple types of feedstocks and shows that metaheuristics are highly suitable for hard combinatorial optimization problems such as the one addressed in this research work.
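
As a rough illustration of the genetic-algorithm side, the sketch below evolves a bitstring choosing which candidate biorefinery sites to open against a toy fixed-cost-plus-haul objective. All data and parameters are invented; the thesis's actual problem and operators are richer:

```python
import random
random.seed(7)

# Toy biomass-logistics instance (all numbers invented): choose which candidate
# biorefinery sites to open so that fixed costs plus feedstock haul costs are minimal.
fields = [(2, 3), (8, 1), (5, 9), (1, 7), (6, 6)]  # feedstock field locations
supply = [40, 25, 30, 20, 35]                      # tonnes per field
sites = [(0, 0), (4, 4), (9, 9), (3, 8)]           # candidate biorefinery sites
fixed = [100, 120, 90, 110]                        # cost of opening each site

def cost(chrom):
    open_sites = [i for i, bit in enumerate(chrom) if bit]
    if not open_sites:
        return float("inf")
    total = sum(fixed[i] for i in open_sites)
    for (fx, fy), s in zip(fields, supply):
        total += s * min(abs(sites[i][0] - fx) + abs(sites[i][1] - fy)
                         for i in open_sites)      # haul to the nearest open site
    return total

def ga(pop_size=20, gens=60, pmut=0.1):
    pop = [[random.randint(0, 1) for _ in sites] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                         # elitist: keep the fittest half
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(sites))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < pmut else g for g in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = ga()
print(best, cost(best))
```

With only four candidate sites the GA is overkill (brute force covers all 16 chromosomes), but the same loop scales to instances where complete methods become impractical, which is the thesis's point.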

Battery/ultra-capacitor based electric vehicles (EVs) combine two energy sources with different voltage levels and current characteristics. This paper focuses on the design and control of a multiple-input DC/DC converter that regulates the output voltage from the different inputs.

MRI has been used to identify multiple sclerosis (MS) lesions in the brain and spinal cord visually. Integrating patient information into an electronic patient record system has become key to modern patient care in recent years. Clinically, it is also necessary to track patients' progress in longitudinal studies in order to provide a comprehensive understanding of disease progression and response to treatment. As the amount of required data increases, there is a need for an efficient, systematic solution to store and analyze MS patient data, disease profiles, and disease tracking for both clinical and research purposes. An imaging informatics based system, called the MS eFolder, has been developed as an integrated patient record system for data storage and analysis of MS patients. The eFolder system, with a DICOM-based database, includes a module for lesion contouring by radiologists, an MS lesion quantification tool to quantify MS lesion volume in 3D, brain parenchyma fraction analysis, and quantitative analysis and tracking of volume changes in longitudinal studies. Patient data, including MR images, have been collected retrospectively at University of Southern California Medical Center (USC) and Los Angeles County Hospital (LAC). The MS eFolder utilizes web-based components, such as a browser-based graphical user interface (GUI) and a web-based database. The eFolder database stores patient clinical data (demographics, MS disease history, family history, etc.), MR imaging-related data found in DICOM headers, and lesion quantification results. Lesion quantification results are derived from radiologists' contours on brain MRI studies and quantified into 3-dimensional volumes and locations. Quantified results of white matter lesions are integrated into a structured report based on the DICOM-SR protocol and templates. The user interface displays patient clinical information and original MR images, and allows viewing of structured reports of quantified results. The GUI also

A seawater desalinating still was devised which utilizes solar heat and combines a basin-type still with a multiple-effect still. The devised still has a triangular cross section with a seawater basin at its bottom; the slope facing the sun has a double glass window, and the other slope is fitted with a multiple-effect still (a three-step effect system). Sunlight that has passed through the double glass window heats and evaporates seawater in the basin. The majority of the generated steam is condensed at the lower face of the lowermost partition of the multiple-effect section, and its latent heat evaporates the seawater contained in a wick on the rear side. That steam is condensed at the lower face of the second partition, the action is repeated sequentially up through the upper plates, and the heat is finally discharged into the surrounding air from the uppermost face of the multiple-effect section. Seawater is supplied from the upper part onto the wick on each partition, where it is evaporated; the condensate is recovered at the lower part of each plate. The construction is simpler than the conventional downward indirect-heating multiple-effect still, and the distilling efficiency is improved by 30%. The technological difficulties of the upward direct-heating multiple-effect still can also be mitigated. 6 refs., 6 figs., 2 tabs.

During the past two decades, the popularity of computer and video games has prompted games to become a source of study for educational researchers and instructional designers investigating how various aspects of game design might be appropriated, borrowed, and re-purposed for the design of educational materials. The purpose of this paper is to…

Full Text Available The aim of this study is to present a comprehensive comparison and assessment of the improvement in damping of power system oscillations for multiple damping controllers using simultaneously coordinated design based on the Power System Stabilizer (PSS) and Flexible AC Transmission System (FACTS) devices. FACTS devices can help enhance the stability of the power system by adding a supplementary damping controller to the control channel of the FACTS input, implementing a Power Oscillation Damping (FACTS POD) controller. Simultaneous coordination can be performed in different ways. First, dual coordinated designs are arranged between a PSS and a FACTS POD controller, or between different FACTS POD controllers in a system with multiple FACTS devices and no PSS. Second, the simultaneous coordination is extended to a triple coordinated design among a PSS and different FACTS POD controllers. The parameters of the damping controllers have been tuned, in the individual controllers and in the coordinated designs, by using a Chaotic Particle Swarm Optimization (CPSO) algorithm that optimizes a given eigenvalue-based objective function. The simulation results for a multi-machine power system show that the dual coordinated designs provide satisfactory damping performance over the individual control responses. Furthermore, the triple coordinated design is shown to be more effective in damping oscillations than the dual damping controllers.
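
To illustrate the tuning loop, the sketch below runs a plain (non-chaotic) particle swarm on a one-parameter toy objective: the real part of the dominant pole of a second-order swing equation, which a damping-controller gain pushes leftward. The system and PSO parameters are invented:

```python
import random
random.seed(0)

# Toy eigenvalue-based objective: poles of s^2 + (d0 + k)s + w2 = 0,
# where k is the supplementary damping gain being tuned.
w2, d0 = 25.0, 0.4          # illustrative squared frequency and base damping

def objective(k):
    d = d0 + k
    disc = d * d - 4 * w2
    if disc >= 0:           # two real poles
        return max((-d + disc ** 0.5) / 2, (-d - disc ** 0.5) / 2)
    return -d / 2           # complex pair: dominant real part is -d/2

def pso(n=10, iters=40, lo=0.0, hi=5.0):
    pos = [random.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]
    gbest = min(pos, key=objective)
    for _ in range(iters):
        for i in range(n):
            vel[i] = (0.7 * vel[i]                               # inertia
                      + 1.5 * random.random() * (pbest[i] - pos[i])
                      + 1.5 * random.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))           # clip to bounds
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=objective)
    return gbest

k = pso()
print(round(k, 2), round(objective(k), 3))
```

Here larger gain always helps, so the swarm drifts to the upper bound; with several interacting PSS/POD controllers the eigenvalue surface is multimodal, which is what motivates the chaotic variant in the paper.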

Due to its simple mechanical structure and high motion stability, the H-shaped platform has been increasingly widely used in precision measuring, numerical control machining and semiconductor packaging equipment, etc. The H-shaped platform is normally driven by multiple (three) permanent magnet

A "humanitarian market" for off-grid renewable energy technologies for displaced populations in remote areas has emerged. Within this market, there are multiple stakeholder agendas. End-user needs and sustainable development goals are currently not considered through the customer-enterprise relationship and the applied product and…

The possibilities of designing a multiple steam generator for a 1250 MW(th) High Temperature Gas-Cooled Reactor, consisting of 18 units which are able to pass through 5 ft diam. holes in the integral prestressed concrete pressure vessel are investigated. A lay-out and design with bundles of multi-start helical tubes is evolved, particular attention being paid to the questions of tube blanking and removal of the unit, and of selection of materials for superheater and reheater tubes. Thermal and stress calculations have been carried out, using the Waagner-Biro Computer Code ADURHELIX. (author)

A system and method of using one or more DC-DC/DC-AC converters and/or alternative devices allows strings of multiple module technologies to coexist within the same PV power plant. A computing (optimization) framework estimates the percentage allocation of PV power plant capacity to selected PV module technologies. The framework and its supporting components consider irradiation, temperature, spectral profiles, cost and other practical constraints to achieve the lowest levelized cost of electricity, maximum output and minimum system cost. The system and method can function using any device enabling distributed maximum power point tracking at the module, string or combiner level.
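
As a toy version of the allocation framework, the sketch below sweeps the capacity split between two module technologies and picks the mix minimizing a crude LCOE. All costs and yields are invented, and a real deployment would fold in the spectral and temperature effects the text mentions:

```python
# Toy capacity-allocation sweep (all figures invented): split plant capacity
# between two PV module technologies and pick the mix with the lowest
# levelized cost of electricity (annualized cost / annual energy).
CAPACITY_MW = 100.0
techs = {
    "c-Si": {"cost_per_mw": 0.9e6, "mwh_per_mw_yr": 1900.0},
    "CdTe": {"cost_per_mw": 0.8e6, "mwh_per_mw_yr": 1800.0},
}

def lcoe(frac_csi):
    mw = {"c-Si": frac_csi * CAPACITY_MW, "CdTe": (1 - frac_csi) * CAPACITY_MW}
    cost = sum(mw[t] * techs[t]["cost_per_mw"] for t in techs) / 25.0  # 25-yr life
    energy = sum(mw[t] * techs[t]["mwh_per_mw_yr"] for t in techs)
    return cost / energy                                               # $/MWh

best = min((f / 100 for f in range(101)), key=lcoe)
print(best, round(lcoe(best), 2))
```

With a purely linear objective one technology takes the whole plant; mixed allocations only win once site-specific spectral, temperature and constraint effects make the response nonlinear, which is exactly what the patented framework models.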

This paper presents a method for designing Spiking Central Pattern Generators (SCPGs) to achieve locomotion at different frequencies on legged robots. It is validated by embedding the designs in a Field-Programmable Gate Array (FPGA) and implementing them on a real hexapod robot. The SCPGs are automatically designed by means of a Christiansen Grammar Evolution (CGE)-based methodology. The CGE produces a configuration (synaptic weights and connections) for each neuron in the SCPG. This is carried out through an indirect representation of candidate solutions that evolve to replicate a specific spike train according to a locomotion pattern (gait), measuring the similarity between spike trains with the SPIKE distance to lead the search to a correct configuration. By using this evolutionary approach, several SCPG design specifications can be added explicitly to the SPIKE-distance-based fitness function, such as searching for Spiking Neural Networks (SNNs) with minimal connectivity, or for a Central Pattern Generator (CPG) able to generate different locomotion gaits only by changing the initial input stimuli. The SCPG designs have been successfully implemented on a Spartan-6 FPGA board, and a real-time validation on a 12-Degrees-Of-Freedom (DOF) hexapod robot is presented.
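
As a stand-in for the SPIKE-distance fitness term, the sketch below implements the related Victor-Purpura spike-train distance, which likewise scores how closely a candidate spike train replicates a target gait pattern. The spike times are illustrative:

```python
def vp_distance(a, b, q=1.0):
    """Victor-Purpura spike-train distance (a simpler relative of the SPIKE
    distance used in the paper): minimal cost of turning train a into train b
    by inserting/deleting spikes (cost 1) or shifting one (cost q*|dt|)."""
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i                                # delete all spikes of a
    for j in range(1, m + 1):
        d[0][j] = j                                # insert all spikes of b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1,                      # delete a[i-1]
                          d[i][j - 1] + 1,                      # insert b[j-1]
                          d[i - 1][j - 1] + q * abs(a[i - 1] - b[j - 1]))
    return d[n][m]

target = [0.1, 0.5, 0.9]   # desired gait spike times (s), illustrative
cand = [0.1, 0.52, 0.9]    # candidate neuron's output
print(vp_distance(target, cand))   # shift cost q*|0.52 - 0.5|, about 0.02
```

In an evolutionary loop such a distance would be minimized as (part of) the fitness of each candidate SCPG configuration; zero means the candidate reproduces the gait's spike train exactly.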

Full Text Available An algorithm for designing spreading sequences for an overloaded multicellular synchronous DS-CDMA system on the uplink is introduced. The criterion used to measure the optimality of the design is the total weighted square correlation (TWSC), assuming the channel state information is known perfectly at both transmitter and receiver. By using this algorithm it is possible to obtain orthogonal generalized WBE sequence sets for any processing gain. The bandwidth of the initial generalized WBE signals of each cell is preserved in the extended signal space associated with the multicellular system. The mathematical formalism is illustrated by selected numerical examples.
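
For unit weights, the TWSC reduces to the total squared correlation of the sequence set, which Welch-bound-equality (WBE) sets minimize at K²/N. A small numpy check with an invented overloaded (K > N) set that meets the bound:

```python
import numpy as np

def tsc(S):
    """Total squared correlation of a spreading-sequence set.
    S is N x K: one unit-energy sequence per column (K users, gain N)."""
    G = S.conj().T @ S                     # K x K Gram matrix of correlations
    return float(np.sum(np.abs(G) ** 2))

N, K = 2, 4                                # overloaded: more users than chips
angles = np.pi * np.arange(K) / K          # 4 unit vectors spread over the circle
S = np.vstack([np.cos(angles), np.sin(angles)])
print(tsc(S), K ** 2 / N)                  # TSC meets the Welch bound K^2/N = 8
```

This particular set is a tight frame (the columns' outer products sum to (K/N)·I), which is exactly the WBE property the designed sequences generalize with per-user weights.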

Full Text Available The design of buildings may include a comparison of alternative architectural and structural solutions, which can be developed at different levels of the design process. The alternative design solutions are compared and ranked by applying methods of multiple-criteria decision-making (MCDM). Each design is characterised by a number of criteria used in an MCDM problem. The paper discusses how to choose MCDM criteria expressing the fire safety of alternative designs. The probability of a successful evacuation of occupants from a building fire, and the difference between the evacuation time and the time to untenable conditions, are suggested as the most important fire-safety criteria. These two criteria are treated as uncertain quantities expressed by probability distributions. Monte Carlo simulation of fire and evacuation processes is a natural means for estimating these distributions. The presence of uncertain criteria requires applying stochastic MCDM methods for ranking alternative designs. An application of the safety-related criteria is illustrated by an example which analyses three alternative architectural floor plans prepared for the reconstruction of a medical building. An MCDM method based on stochastic simulation is used to solve the example problem.
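
The two suggested criteria can be estimated exactly as the paper suggests, by Monte Carlo simulation of the evacuation time and the time to untenable conditions. The lognormal distributions below are purely illustrative:

```python
import random
random.seed(42)

# Monte Carlo estimate of the two fire-safety criteria: the probability of a
# successful evacuation, and the mean margin between time to untenable
# conditions and evacuation time. Distributions are illustrative only.
def simulate(n=100_000):
    successes, margins = 0, 0.0
    for _ in range(n):
        evac = random.lognormvariate(5.0, 0.3)      # evacuation time (s)
        untenable = random.lognormvariate(5.5, 0.4)  # time to untenable conditions (s)
        if untenable > evac:
            successes += 1
        margins += untenable - evac
    return successes / n, margins / n

p, mean_margin = simulate()
print(round(p, 3), round(mean_margin, 1))
```

Each alternative floor plan gets its own distributions (and hence its own probability and margin), and those sampled quantities then feed the stochastic MCDM ranking.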

The purpose of this study is to provide a comprehensive account on case-based instructional practices. Semester-long participant observation records in torts, marketing, and online instructional design classes, instructor interviews, course syllabi and teaching materials were used to describe the within-class complexity of the practices in terms…

This study examined the utility of fluoxetine in the treatment of 5 children, aged 5 to 14 years, diagnosed with selective mutism who also demonstrated symptoms of social anxiety. A nonconcurrent, randomized, multiple-baseline, single-case design with a single-blind placebo-controlled procedure was used. Parents and the study psychiatrist completed multiple methods of assessment including Direct Behavior Ratings and questionnaires. Treatment outcomes were evaluated by calculating effect sizes for each participant as an individual and for the participants as a group. Information regarding adverse effects with an emphasis on behavioral disinhibition and ratings of parental acceptance of the intervention was gathered. All 5 children experienced improvement in social anxiety, responsive speech, and spontaneous speech with medium to large effect sizes; however, children still met criteria for selective mutism at the end of the study. Adverse events were minimal, with only 2 children experiencing brief occurrences of minor behavioral disinhibition. Parents found the treatment highly acceptable.

Full Text Available RNA interference (RNAi) is a post-transcriptional gene silencing mechanism that mediates the sequence-specific degradation of targeted RNA and thus provides a tremendous opportunity for the development of oligonucleotide-based drugs. Here, we report on the design and validation of small interfering RNAs (siRNAs) targeting highly conserved regions of the hepatitis C virus (HCV) genome. To aim for therapeutic applications by optimizing the RNAi efficacy and reducing potential side effects, we considered different factors such as target RNA variations, thermodynamics and accessibility of the siRNA and target RNA, and off-target effects. This aim was achieved using an in silico design and selection protocol complemented by an automated MysiRNA-Designer pipeline. The protocol included the design and filtration of siRNAs targeting highly conserved and accessible regions within the HCV internal ribosome entry site and adjacent core sequences of the viral genome, with high-ranking efficacy scores. Off-target analysis excluded siRNAs with potential binding to human mRNAs. Under this strict selection process, two siRNAs (HCV353 and HCV258) were selected based on their predicted high specificity and potency. These siRNAs were tested for antiviral efficacy in HCV genotype 1 and 2 replicon cell lines. Both in silico-designed siRNAs efficiently inhibited HCV RNA replication, even at low concentrations and for short exposure times (24 h); they also exceeded the antiviral potencies of reference siRNAs targeting HCV. Furthermore, the HCV353 and HCV258 siRNAs also inhibited the replication of patient-derived HCV genotype 4 isolates in infected Huh-7 cells. Prolonged treatment of HCV replicon cells with HCV353 did not result in the appearance of escape mutant viruses. Taken together, these results reveal the accuracy and strength of our integrated siRNA design and selection protocols. These protocols could be used to design highly potent and specific RNAi-based therapeutics.

3D-Video systems allow a user to perceive depth in the viewed scene and to display the scene from arbitrary viewpoints interactively and on-demand. This paper presents a prototype implementation of a 3D-video streaming system using an IP network. The architecture of our streaming system is layered, where each information layer conveys a single coded video signal or coded scene-description data. We demonstrate the benefits of a layered architecture with two examples: (a) stereoscopic video streaming, (b) monoscopic video streaming with remote multiple-perspective rendering. Our implementation experiments confirm that prototyping 3D-video streaming systems is possible with today's software and hardware. Furthermore, our current operational prototype demonstrates that highly heterogeneous clients can coexist in the system, ranging from auto-stereoscopic 3D displays to resource-constrained mobile devices.

Full Text Available This study aimed at investigating the progress of students’ learning of multiplication of fractions with natural numbers through the five activity levels based on the Realistic Mathematics Education (RME) approach proposed by Streefland. Design research was chosen to achieve this research goal. In design research, the Hypothetical Learning Trajectory (HLT) plays an important role as a design and research instrument. This HLT was tested on thirty-seven students of grade five of a primary school (i.e., SDN 179 Palembang). The results of the classroom practices showed that a measurement (length) activity could stimulate students to produce fractions as the first level in learning multiplication of fractions with natural numbers. Furthermore, the strategies and tools used by the students in partitioning gradually developed into more formal mathematics, in which the number line was used as the model of the measuring situation and the model for more formal reasoning. The number line then could bring the students to the last activity level, namely on the way to rules for multiplying fractions with natural numbers. Based on these findings, it is suggested that Streefland’s five activity levels can be used as a guideline for learning multiplication of fractions with natural numbers in which the learning process becomes more progressive.

For optical systems consisting of metal (in general freeform) mirrors there exist several diamond-turning fabrication approaches. These are distinguished by the effort in manufacturing and integration of the later system: the more work one puts into the manufacturing stage, the less complicated is the alignment and integration afterwards. For example, the most degrees of freedom have to be aligned in the integration phase if every mirror of the system is fabricated as a single optical component. For a three-mirror anastigmat with three freeform mirrors, the degrees of freedom sum up to 18; the mirror fabrication itself is more or less easy, but the integration is very difficult. Three major parts of the design and manufacturing process chain have to be considered for tackling this integration problem. At the first position in the process chain is the optical design. At this stage, a negotiation between manufacturing and design could improve manufacturability because more integration approaches become possible. The second stage is the mechanical design. Here the appropriate manufacturing approach is already chosen, but may be revisited due to incompatibilities with, e.g., stress specifications. The third level is the manufacturing stage, where different clamping approaches and fabrication methods are possible. The current article focuses on a "snap-together" approach, where two mirrors are fabricated on one substrate and the number of degrees of freedom to be aligned is therefore reduced to six. This reduces the time needed for system integration significantly in contrast to single-mirror fabrication.

This paper presents a new approach to optical Code Division Multiple Access (CDMA) network transmission scheme using alternated amplitude sequences and energy differentiation at the transmitters to allow concurrent and secure transmission of several signals. The proposed system uses error control encoding and soft-decision demodulation to reduce the multi-user interference at the receivers. The design of the proposed alternated amplitude sequences, the OCDMA energy modulators and the soft decision, single-user demodulators are also presented. Simulation results show that the proposed scheme allows achieving spectral efficiencies higher than several reported results for optical CDMA and much higher than the Gaussian CDMA capacity limit.

This publication is unique in its demystification and operationalization of the complex and elusive nature of the design process. The publication portrays the designer’s daily work and the creative process the designer is a part of. Apart from displaying the designer’s work methods and design parameters, the publication shows examples from renowned Danish design firms. Through these examples the reader gets an insight into the designer’s reality.

The aim of this paper was to analyze a Frequency Shift Keying (FSK) transceiver built using Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) and to measure the reduction in data errors in the presence of Forward Error Correction (FEC) channel coding algorithms, namely the convolutional and turbo codes. Through this design, a graphical representation of Bit Error Rate (BER) vs. Eb/N0, where Eb is the energy per bit and N0 is the spectral noise density, has been obtained in the presence of Additive White Gaussian Noise (AWGN) introduced in the channel. FSK is widely used for data transmission over band-pass channels; hence, we have chosen FSK for the implementation of the SDR. The SDR transceiver module designed has been fully implemented and has the ability to navigate over a wide range of frequencies with programmable channel bandwidth and modulation characteristics. Using LabVIEW, we were able to build an interactive FSK-based SDR transceiver in a shorter time. The outputs achieved show a low BER for very high data rates in the presence of AWGN noise.
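
A stripped-down version of the BER-vs-Eb/N0 measurement (uncoded, so without the convolutional or turbo FEC) can be sketched by Monte Carlo simulation of noncoherent binary FSK over AWGN:

```python
import math
import random
random.seed(3)

# Monte Carlo BER of noncoherently detected binary FSK over AWGN, compared
# with the textbook curve Pb = 0.5 * exp(-Eb / (2 * N0)); no FEC applied.
def ber_fsk(ebn0_db, nbits=100_000):
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))      # per-dimension noise std for Eb = 1
    errors = 0
    for _ in range(nbits):
        # Envelope of the tone carrying the bit vs. the empty tone.
        on = abs(complex(1 + random.gauss(0, sigma), random.gauss(0, sigma)))
        off = abs(complex(random.gauss(0, sigma), random.gauss(0, sigma)))
        errors += off > on                 # detector picks the larger envelope
    return errors / nbits

for db in (4, 8):
    sim = ber_fsk(db)
    theory = 0.5 * math.exp(-10 ** (db / 10) / 2)
    print(db, round(sim, 4), round(theory, 4))
```

Adding an FEC stage in front of the detector, as the paper does, shifts the whole curve left; the uncoded simulation above is the baseline it is measured against.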

Measurements of the transverse impedances of the SLIA prototype cell were performed using the bead-pull technique. These measurements were compared to a computer model (BBUS) of the prototype cell. The measured R/Q's are reasonably close to the computer model in most cases. R can be reduced below the design limit of 30 ohm/cm if the mode Q's can be damped to less than about 23 and 70 for the main modes; with the use of the ring and cone dampers, Q's of less than 17 and 48, respectively, should be achieved. Thus, the BBU problem for the difficult off-axis shielded-gap geometry of the SLIA accelerator should be within design tolerances, even for 150 gaps. In particular, the 18 cm anode insertion choice should have transverse impedances of about 22 and 21 ohm/cm, respectively, for the main modes. For comparison, the ATA accelerator at Livermore has about a 12 ohm/cm transverse impedance on a 6.7 cm pipe, which roughly scaled to the SLIA 4.5 cm pipe would be equivalent to 27 ohm/cm.
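
The final scaling remark can be checked arithmetically; the inverse-square dependence on pipe radius used below is inferred from the quoted numbers, not stated explicitly in the text:

```python
# Quick check of the pipe-radius scaling implied by the text: if transverse
# impedance goes as 1/b^2, the 12 ohm/cm ATA figure on a 6.7 cm pipe maps
# to about 27 ohm/cm on the 4.5 cm SLIA pipe, matching the quoted value.
z_ata, b_ata, b_slia = 12.0, 6.7, 4.5
z_scaled = z_ata * (b_ata / b_slia) ** 2
print(round(z_scaled, 1))   # 26.6 ohm/cm, i.e. roughly the quoted 27
```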

… waves. This enables a comparison of the performance of the wave power extraction methods according to PTO requirements. The framework also allows comparing the performance of fundamentally different PTOs. The idea of reactive control for increasing power absorption dates back to the 1970s, and today its … techniques. The research leads to three potential PTO systems, where one is a magnetic-gear based PTO. The gear is based on implementing the function of a screw and nut magnetically by placing permanent magnets in a helical pattern. A PTO layout with the magnetic lead screw is found and analysed using … simulations. The feasibility leads to having a group of master's students design a working prototype at a scale of 17 kN with a half-meter stroke. The magnetic lead screw is able to directly convert a linear motion of 0.5 m/s to a rotational motion above 1000 rpm, driving a conventional generator. Two other …

Full Text Available UAV (Unmanned Aerial Vehicle) platforms represent a challenging opportunity for the deployment of a number of remote sensors. These vehicles are a cost-effective option compared with manned aerial vehicles (planes and helicopters), are easy to deploy due to the short runways needed, and allow users to meet the critical spatial- and temporal-resolution requirements imposed by the instruments. L-band radiometers are an interesting option for obtaining soil moisture maps over local areas with relatively high spatial resolution for precision agriculture, coastal monitoring, estimation of the risk of fires, flood prevention, etc. This paper presents the design of a light-weight, airborne L-band radiometer for deployment in a small UAV, including the hardware and the specific software developed for calibration, geo-referencing, and soil moisture retrieval. First results and soil moisture retrievals from different field experiments are presented.

The European competition rules restrict governments' opportunity to differentiate terms of energy accessibility among firms and industries. This easily runs counter with regional and industrial goals of national energy policies. Norway levies a tax on use of electricity, but exempts main industrial usages. This analysis assesses alternative, internationally legal, designs of the system in terms of their effects on efficiency and distribution, including industrial objectives. Among the reforms we explore, removing the exemptions would be the most effective way of raising revenue, but it would be politically costly by deteriorating the competitiveness of today's favoured industries. An entire abolishment of the electricity tax, and replacing revenue by increased VAT, would generate a more equal distribution of standard of living and, at the same time, avoid the trade-off between efficiency and competitiveness

The instrumentation problems associated with the measurement of soil moisture with meaningful spatial and temporal resolution at a global scale are addressed. For this goal, only affordable technology available in the medium term is considered. The study, while limited in scope, utilizes a large-scale antenna structure which is presently being developed as an experimental model. The interface constraints presented by a single Space Transportation System (STS) flight are assumed. The methodology consists of the following steps: review the science requirements; analyze the effects of these requirements; present basic system engineering considerations and trade-offs related to orbit parameters, number of spacecraft and their lifetime, observation angles, beamwidth, crossover and swath, coverage percentage, beam quality and resolution, instrument quantities, and integration time; and bracket the key system characteristics and develop an electromagnetic design of the antenna-passive radiometer system. Several aperture-division combinations and feed array concepts are investigated to achieve the maximum feasible performance within the stated STS constraints.

Full Text Available Auxotrophic markers are useful tools in cloning and genome editing, enable a large spectrum of genetic techniques, and facilitate the study of metabolite exchange interactions in microbial communities. If unused background auxotrophies are left uncomplemented, however, yeast cells need to be grown in nutrient-supplemented or rich growth media, which precludes the analysis of biosynthetic metabolism and has a profound impact on physiology and gene expression. Here we present a series of 23 centromeric plasmids designed to restore prototrophy in typical Saccharomyces cerevisiae laboratory strains. The 23 single-copy plasmids complement deficiencies in the HIS3, LEU2, URA3, MET17 or LYS2 genes and in their combinations, to match the auxotrophic background of the popular functional-genomic yeast libraries that are based on the S288c strain. The plasmids are further suitable for designing self-establishing metabolically cooperating (SeMeCo) communities, and possess a uniform multiple cloning site to exploit multiple parallel selection markers in protein expression experiments.

In Turkey, an understanding of planning focused on timber production has given way to Multiple Use Management (MUM). Because the whole infrastructure of forestry, with the inventory system leading the way, depends on timber production, some bottlenecks are expected during the transition period. Database design, probably the most important stage in the transition to MUM, together with the digital basic maps making up the basis of this infrastructure, constitutes the main point of this article. First, the past forest management philosophy of Turkey is briefly reviewed, and Ecosystem Based Multiple Use Forest Management (EBMUFM) approaches are briefly introduced. The second stage of the EBMUFM process, database design, is described by examining the classical planning infrastructure, and the coverages to be produced and consumed are suggested in the form of lists. At the application stage, two different geographical databases were established with GIS in the Balcı Planning Unit for the years 1984 and 2006, and the related basic maps were produced. The 20-year change in the planning unit is presented comparatively with regard to stand parameters such as tree species, age class, development stage, canopy closure, mixture, volume and increment.

Full Text Available Background: African American women are at increased risk for poor pregnancy outcomes compared to other racial-ethnic groups. Single or multiple psychosocial and behavioral factors may contribute to this risk. Most interventions focus on singular risks. This paper describes the design, implementation, challenges faced, and acceptability of a behavioral counseling intervention for low-income, pregnant African American women which integrated multiple targeted risks into a multi-component format. Methods: Six academic institutions in Washington, DC collaborated in the development of a community-wide, primary care research study, DC-HOPE, to improve pregnancy outcomes. Cigarette smoking, environmental tobacco smoke exposure, depression and intimate partner violence were the four risks targeted because of their adverse impact on pregnancy. Evidence-based models for addressing each risk were adapted and integrated into a multiple risk behavior intervention format. Pregnant women attending six urban prenatal clinics were screened for eligibility and risks and randomized to intervention or usual care. The 10-session intervention was delivered in conjunction with prenatal and postpartum care visits. Descriptive statistics on risk factor distributions, intervention attendance and length (…) are presented. Results: Forty-eight percent of women screened were eligible based on the presence of targeted risks, 76% of those eligible were enrolled, and 79% of those enrolled were retained postpartum. Most women reported a single risk factor (61%); 39% had multiple risks. Eighty-four percent of intervention women attended at least one session (60% attended ≥ 4 sessions) without disruption of clinic scheduling. Specific risk factor content was delivered as prescribed in 80% or more of the sessions; 78% of sessions were fully completed (where all required risk content was covered). Ninety-three percent of the subsample of intervention women had a positive view of their

A novel method has been developed for generating quasi-realistic voxel phantoms which simulate the compressed breast in mammography and digital breast tomosynthesis (DBT). The models are suitable for use in virtual clinical trials requiring realistic anatomy which use the multiple-alternative forced-choice (AFC) paradigm and patches from the complete breast image. The breast models are produced by extracting features of breast tissue components from DBT clinical images including skin, adipose and fibro-glandular tissue, blood vessels and Cooper's ligaments. A range of different breast models can then be generated by combining these components. Visual realism was validated using a receiver operating characteristic (ROC) study of patches from simulated images calculated using the breast models and from real patient images. Quantitative analysis was undertaken using fractal dimension and power spectrum analysis. The average areas under the ROC curves for 2D and DBT images were 0.51 ± 0.06 and 0.54 ± 0.09, demonstrating that simulated and real images were statistically indistinguishable by expert breast readers (7 observers); errors represented as one standard error of the mean. The average fractal dimensions (2D, DBT) for real and simulated images were (2.72 ± 0.01, 2.75 ± 0.01) and (2.77 ± 0.03, 2.82 ± 0.04) respectively; errors represented as one standard error of the mean. Excellent agreement was found between power spectrum curves of real and simulated images, with average β values (2D, DBT) of (3.10 ± 0.17, 3.21 ± 0.11) and (3.01 ± 0.32, 3.19 ± 0.07) respectively; errors represented as one standard error of the mean. These results demonstrate that radiological images of these breast models realistically represent the complexity of real breast structures and can be used to simulate patches from mammograms and DBT images that are indistinguishable from
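The power-spectrum β values above are conventionally estimated as the negative log-log slope of the radially averaged 2-D power spectrum of an image. The sketch below illustrates that estimation; the exact procedure used in the study is not given in the abstract, so this is an assumed, standard implementation, verified on synthetic 1/f³ noise.

```python
import numpy as np

def power_spectrum_beta(image):
    """Estimate beta assuming power ~ 1/f^beta, from the radially
    averaged 2-D power spectrum of a square image."""
    h, w = image.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)   # integer pixel radius
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    radial = sums / np.maximum(counts, 1)              # radially averaged power
    freqs = np.arange(1, len(radial))                  # skip the DC bin
    valid = (counts[1:] > 0) & (radial[1:] > 0)
    slope, _ = np.polyfit(np.log(freqs[valid]), np.log(radial[1:][valid]), 1)
    return -slope                                      # beta = -(log-log slope)

# Synthetic check: noise whose Fourier amplitude falls off as f^-1.5
# has power ~ f^-3, so the estimate should come out near beta = 3.
rng = np.random.default_rng(0)
n = 128
fr = np.hypot(np.fft.fftfreq(n)[:, None], np.fft.fftfreq(n)[None, :])
fr[0, 0] = 1.0                                         # avoid divide-by-zero at DC
noise = np.real(np.fft.ifft2(rng.standard_normal((n, n)) / fr ** 1.5))
print(round(power_spectrum_beta(noise), 1))
```

A β near 3, as reported for both real and simulated images above, indicates the characteristic 1/f³ texture of mammographic backgrounds.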

Our program is part of a larger project designed to develop multifrequency eddy-current inspection techniques for multilayered conductors with parallel planar boundaries. To reduce the need to specially program each new problem, a family of programs was developed that handles a large class of related problems with only minor editorial and interactive changes. Programs for two types of cylindrical coil probes were written: the reflection probe, which contains the driver and pickup coils and is used from one side of the specimen, and the through-transmission probe set, which places the driver and pickup coils on opposite sides of the conductor stack. The programs perform the following basic functions: (1) simulation of an ideal instrument's response to specific conductor and defect configurations, (2) control of an eddy-current instrument interfaced to a minicomputer to acquire and record actual instrument responses to test specimens, (3) construction of complex function expansions to relate instrument response to conductor and defect properties by using measured or computed responses and properties, and (4) simulation of a microcomputer on board the instrument by the interfaced minicomputer to test the analytical programming for the microcomputer. The report contains the basic equations for the computations, the main and subroutine programs, instructions for editorial changes and program execution, analyses of the main programs, file requirements, and other miscellaneous aids for the user.

.... This bit pipe is a simple abstraction of the underlying physical and data link layers. There is growing awareness that this simple bit-pipe view is inadequate, particularly in the context of modern wireless data networks...

Due to the tragic accident of radioactive contaminant spread from the Fukushima Dai-ichi nuclear power plant, the necessity of unmanned systems for radiation monitoring has been increasing. This paper concerns the flight controller design of an unmanned airplane which has been developed for radiation monitoring around the power plant. The flight controller consists of conventional control elements, i.e. a Stability/Control Augmentation System (S/CAS) with PI controllers and guidance loops with PID controllers. The gains in these controllers are designed by minimizing appropriately defined cost functions for several possible models and disturbances to produce structured robust flight controllers (this method is called the 'multiple model approach'). Control performance of our flight controller was evaluated through flight tests, and a preliminary radiation-monitoring flight in Namie-machi in Fukushima prefecture was conducted in January 2014. Flight results are included in this paper. (author)

Ca2+, as a messenger of signal transduction, regulates numerous target molecules via Ca2+-induced conformational changes. Investigation into the determinants of Ca2+-induced conformational change is often impeded by cooperativity between multiple metal-binding sites or by protein oligomerization in naturally occurring proteins. To dissect the relative contributions of key determinants of Ca2+-dependent conformational changes, we report the design of a single-site Ca2+-binding protein (CD2.trigger) created by altering charged residues at an electrostatically sensitive location on the surface of the host protein, rat Cluster of Differentiation 2 (CD2). CD2.trigger binds Tb3+ and Ca2+ with dissociation constants of 0.3 +/- 0.1 and 90 +/- 25 microM, respectively. This protein is largely unfolded in the absence of metal ions at physiological pH, but Tb3+ or Ca2+ binding results in folding into the native-like conformation. Neutralization of the charged coordination residues, either by mutation or protonation, similarly induces folding of the protein. The control of a major conformational change by a single Ca2+ ion, achieved in a protein designed without reliance on sequence similarity to known Ca2+-dependent proteins or coupled metal-binding sites, represents an important step in the design of trigger proteins.
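The reported dissociation constants can be put in context with a simple 1:1 binding isotherm, θ = [L]/(Kd + [L]). This occupancy model is a standard assumption for a single-site binder, not a detail taken from the paper:

```python
def fraction_bound(ligand_uM, kd_uM):
    """Occupancy of a single metal-binding site at a given free-ligand
    concentration, assuming a simple 1:1 binding isotherm:
    theta = [L] / (Kd + [L])."""
    return ligand_uM / (kd_uM + ligand_uM)

# With Kd = 90 uM for Ca2+, the site is half-occupied at 90 uM free Ca2+,
# far above resting cytosolic levels (~0.1 uM), so the designed trigger
# folds only in response to substantial Ca2+ concentrations.
for ca_uM in (0.1, 1.0, 10.0, 90.0, 900.0):
    print(ca_uM, round(fraction_bound(ca_uM, 90.0), 3))
```

The ~300-fold tighter Tb3+ binding (Kd = 0.3 µM) follows the usual pattern of trivalent lanthanides serving as high-affinity Ca2+ analogues.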

Scholars representing the field of design were asked to identify what they considered to be the most exciting and imaginative work currently being done in their field, as well as how that work might change our understanding. The scholars included Richard Buchanan, Nigel Cross, David Durling, Harold Nelson, Charles Owen, and Anna Valtonen. Scholars…

In this chapter, Ole B. Jensen takes a situational approach to mobilities to examine how ordinary life activities are structured by technology and design. Using “staging mobilities” as a theoretical approach, Jensen considers mobilities as overlapping actions, interactions and decisions by design. He begins with a brief description of how movement is studied within the social sciences after the “mobilities turn” versus the idea of physical movement in transport geography and engineering. He then explains how “mobilities design” was derived from connections between traffic and architecture. Jensen concludes by providing ideas about future research for investigating mobilities in situ as a kind of “staging,” which he notes is influenced by the “material turn” in the social sciences.

Fingolimod (Gilenya) is an oral medication for patients with highly active relapsing-remitting Multiple Sclerosis (RRMS). Clinical trials and post-marketing experience on more than 114,000 patients have established a detailed safety profile. Total patient exposure now exceeds 195,000 patient-years as stated in the last financial report (Dec 2014) of the Novartis Pharma AG, Basel, Switzerland. However, less is known about the safety of long-term fingolimod use in daily practice. Here, we describe the study design of PANGAEA (Post-Authorization Non-interventional German sAfety of GilEnyA in RRMS patients), a prospective, multicenter, non-interventional, long-term study to collect safety, efficacy, and pharmacoeconomic data on RRMS patients treated with fingolimod (0.5 mg/daily) under real-world conditions in Germany. PANGAEA is striving to assess a real-world safety and efficacy profile of fingolimod, based on data from 4,000 RRMS patients, obtained during a 60-month observational phase. A pharmacoeconomic sub-study of 800 RRMS patients further collects patient-reported outcome measures of disability, quality of life, compliance, treatment satisfaction, and usage of resources during a 24-month observational phase. Descriptive statistical analyses of the safety set as well as of stratified subgroups such as patients with concomitant diabetes mellitus and pretreated patients (e.g., natalizumab) will be conducted. PANGAEA seeks to confirm the current safety profile of fingolimod obtained in phase I-III clinical trials. The study design presented here will additionally provide guidance on the therapeutic use of fingolimod in clinical practice and possibly assists physicians in making evidence-based decisions.

Significant internal quantum efficiency (IQE) enhancement of GaN/AlGaN multiple quantum wells (MQWs) emitting at similar to 350 nm was achieved via a step quantum well (QW) structure design. The MQW structures were grown on AlGaN/AlN/sapphire templates by metal-organic chemical vapor deposition (MOCVD). High resolution x-ray diffraction (HR-XRD) and scanning transmission electron microscopy (STEM) were performed, showing sharp interface of the MQWs. Weak beam dark field imaging was conducted, indicating a similar dislocation density of the investigated MQWs samples. The IQE of GaN/AlGaN MQWs was estimated by temperature dependent photoluminescence (TDPL). An IQE enhancement of about two times was observed for the GaN/AlGaN step QW structure, compared with conventional QW structure. Based on the theoretical calculation, this IQE enhancement was attributed to the suppressed polarization-induced field, and thus the improved electron-hole wave-function overlap in the step QW.
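IQE estimates from temperature-dependent photoluminescence are commonly taken as the ratio of room-temperature to cryogenic integrated PL intensity, on the assumption that non-radiative recombination is frozen out at low temperature. The sketch below uses hypothetical intensities to illustrate the arithmetic behind a "two times" enhancement; the paper's actual data are not given in the abstract:

```python
def iqe_from_tdpl(i_room, i_cryo):
    """Estimate internal quantum efficiency from temperature-dependent PL,
    assuming IQE -> 100% at cryogenic temperature (non-radiative channels
    frozen out), so IQE(300 K) ~= I(300 K) / I(low T)."""
    return i_room / i_cryo

# Hypothetical integrated PL intensities (arbitrary units):
conventional = iqe_from_tdpl(12.0, 100.0)   # conventional QW
step_qw = iqe_from_tdpl(24.0, 100.0)        # step QW design
print(step_qw / conventional)               # the ~2x IQE enhancement
```

The same low-temperature normalization for both samples is what makes the ratio meaningful; the abstract's observation of similar dislocation densities supports comparing the two structures this way.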

Full Text Available People experience things from their own physical point of view. What they see is usually a function of where they are and what physical attitude they adopt relative to the subject. With augmented vision (periscopes, mirrors, remote cameras, etc.) we are able to see things from places where we are not present. With time-shifting technologies, such as the video recorder, we can also see things from the past; a time and a place we may never have visited. In recent artistic work I have been exploring the implications of digital technology, interactivity and internet connectivity that allow people not so much to space/time-shift their visual experience of things, but rather to see what happens when everybody is simultaneously able to see what everybody else can see. This is extrapolated through the remote networking of sites that are actual installation spaces, where the physical movements of viewers in the space generate multiple perspectives, linked to other similar sites at remote locations or to other viewers entering the shared data-space through a web-based version of the work. This text explores the processes involved in such a practice and reflects on related questions regarding the non-singularity of being and the sense of self as linked to time and place.

Background A considerable number of resource allocation decisions take place daily at the point of the clinical encounter, especially in primary care, where 80 percent of health problems are managed. Ignoring economic evaluation evidence in individual clinical decision-making may have a broad impact on the efficiency of health services. To date, almost all studies on the use of economic evaluation in decision-making used a quantitative approach, and few investigated decision-making at the clinical level. An important question is whether economic evaluations affect clinical practice. The project is an intervention research study designed to understand the role of economic evaluation in the decision-making process of family physicians (FPs). The contributions of the project will be from the perspective of Pierre Bourdieu's sociological theory. Methods/design A qualitative research strategy is proposed. We will conduct an embedded multiple-case study design. Ten case studies will be performed. The FPs will be the unit of analysis. The sampling strategies will be directed towards theoretical generalization. The 10 selected cases will be intended to reflect a diversity of FPs. There will be two embedded units of analysis: FPs (micro-level of analysis) and the field of family medicine (macro-level of analysis). The division of the determinants of practice/behaviour into two groups, corresponding to the macro-structural level and the micro-individual level, is the basis for Bourdieu's mode of analysis. The sources of data collection for the micro-level analysis will be 10 life history interviews with FPs, documents and observational evidence. The sources of data collection for the macro-level analysis will be documents and 9 open-ended, focused interviews with key informants from medical associations and academic institutions. The analytic induction approach to data analysis will be used. A list of codes will be generated based on both the original framework and new themes

The theoretical work presented in this manuscript addresses two complementary issues in coherent atom optics. The first part addresses the perspectives offered by coherent atomic sources through the design of two experiments involving the levitation of a cold atomic sample in a periodic series of light pulses, for which coherent atomic clouds are particularly well suited. These systems act as multiple-wave atom interferometers. A striking feature of these experiments is that a unique system performs both the sample trapping and the interrogation. To obtain a transverse confinement, a novel atomic lens is proposed, relying on the interaction between an atomic wave and a spherical light wave. The sensitivity of the sample trapping to the gravitational acceleration and to the pulse frequencies is exploited to perform the desired measurement. These devices constitute atomic wave resonators in momentum space, which is a novel concept in atom optics. A second part develops new theoretical tools - most of them inspired from optics - well suited to describe the propagation of coherent atomic sources. A phase-space approach to the propagation, relying on the evolution of moments, is developed and applied to study the low-energy dynamics of Bose-Einstein condensates. The ABCD method of propagation for atomic waves is extended beyond the linear regime to account perturbatively for mean-field atomic interactions in the atom-optical aberration-less approximation. A treatment of the atom laser extraction enabling one to describe aberrations in the atomic beam, developed in collaboration with the Atom Optics group at the Institute of Optics, is exposed. Last, a quality factor suitable for the characterization of dilute matter waves in a general propagation regime is proposed. (author)

At this pivotal moment in time, when the proliferation of mobile technologies in our daily lives is influencing the relatively fast integration of these technologies into classrooms, little is known about the process of student learning, and the role of collaboration, with app-based learning environments on mobile devices. To address this gap, this dissertation, comprised of three manuscripts, investigated three pairs of sixth grade students' synchronous collaborative use of a tablet-based science app called WeInvestigate. The first paper illustrated the methodological decisions necessary to conduct the study of student synchronous and face-to-face collaboration and knowledge building within the complex WeInvestigate and classroom learning environments. The second paper provided the theory of collaboration that guided the design of supports in WeInvestigate, and described its subsequent development. The third paper detailed the interactions between pairs of students as they engaged collaboratively in model construction and explanation tasks using WeInvestigate, hypothesizing connections between these interactions and the designed supports for collaboration. Together, these manuscripts provide encouraging evidence regarding the potential of teaching and learning with WeInvestigate. Findings demonstrated that the students in this study learned science through WeInvestigate, and were supported by the app - particularly the collabrification - to engage in collaborative modeling of phenomena. The findings also highlight the potential of the multiple methods used in this study to understand students' face-to-face and technology-based interactions within the "messy" context of an app-based learning environment and a traditional K-12 classroom. However, as the third manuscript most clearly illustrates, there are still a number of modifications to be made to the WeInvestigate technology before it can be optimally used in classrooms to support students' collaborative

Multiple-choice exams, while widely used, are necessarily imprecise because guessing contributes to the final student score. This past year at the United States Naval Academy, the construction and grading scheme for the department-wide general chemistry multiple-choice exams were revised with the goal of decreasing the contribution of…
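One standard way to decrease the contribution of guessing is the classic "formula score"; the Academy's actual revised scheme is not described in the abstract, so the sketch below shows only the textbook approach:

```python
def corrected_score(right, wrong, choices):
    """Classic 'formula score': penalize wrong answers so that blind
    guessing has an expected score of zero.  With k choices, a random
    guess is right with probability 1/k, so each wrong answer is
    penalized by 1/(k-1)."""
    return right - wrong / (choices - 1)

# A student who knows 40 of 50 four-choice items and guesses the other
# 10 at random expects 2.5 extra right and 7.5 wrong among them, so the
# expected corrected score equals the 40 items actually known:
print(corrected_score(42.5, 7.5, 4))
```

Under this scheme, random guessing and omitting an item have the same expected payoff, which is exactly the property that removes the guessing contribution from the expected score.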

The logic diagram, principle of operation and some details of the design of the multiplicity logic unit are presented. This unit was specially designed to fulfil the requirements of a multidetector arrangement for gamma-ray multiplicity measurements. The unit is equipped with 16 inputs controlled by a common coincidence gate. It delivers a linear output pulse with a height proportional to the multiplicity of coincidences, and logic pulses corresponding to 0, 1, ... up to >= 5-fold coincidences. These last outputs are used to steer the routing unit working with the multichannel analyser. (orig.)
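The unit's behavior as described above can be modeled in a few lines; this is a hypothetical software sketch of the input/output relationship (16 gated inputs, a multiplicity count driving the linear output, and fold flags for routing), not the actual hardware logic:

```python
def multiplicity_unit(inputs, gate_open=True):
    """Model of the multiplicity logic unit described above: 16 inputs
    under a common coincidence gate.  Returns the coincidence
    multiplicity (which drives the linear output pulse height) and the
    fold flags 0, 1, ..., >=5 used to steer the routing unit of a
    multichannel analyser."""
    assert len(inputs) == 16
    fold = sum(bool(x) for x in inputs) if gate_open else 0
    flags = {n: fold == n for n in range(5)}   # exact 0..4-fold outputs
    flags['>=5'] = fold >= 5                   # the >=5-fold output
    return fold, flags

# Three detectors fire within the common gate: 3-fold coincidence.
hits = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
fold, flags = multiplicity_unit(hits)
print(fold, flags[3])
```

The linear output would then be a pulse of height proportional to `fold`, while exactly one of the flags steers the routing unit.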

this assay to analyze DNA from tumor tissue and corresponding urine samples from patients with bladder cancer. Our data show that the use of multiple short synthetic probes provides a simple means for custom-designed MS-MLPA analysis. (J Mol Diagn 2010, 12:402-408; DOI: 10.2353/jmoldx.2010.090152)...

New York City Board of Education, Brooklyn, NY. Office of Bilingual Education.

This manual incorporates a Multiple Intelligences perspective into its presentation of themes and lesson ideas for Spanish-English bilingual elementary school students in grades 4-8 and is designed for both gifted and special education uses. Each unit includes practice activities, semantic maps to illustrate and help organize ideas as well as…

Full Text Available Purpose. The development of complicated production and management processes, information systems, computer science and applied objects of systems theory requires improved mathematical methods and new approaches to the study of application systems. The variety and diversity of subject systems make it necessary to develop a model that generalizes classical sets and their development, sets of sets. Multiple objects, unlike sets, are constructed from multiple structures and are represented by structure and content. The aim of the work is the analysis of multiple structures that generate multiple objects, and the further development of operations on these objects in application systems. Methodology. To achieve the objectives of the research, the structure of a multiple object is represented as a constructive triple consisting of a carrier, a signature and axiomatics. A multiple object is determined by its structure and content, and is represented by a hybrid superposition composed of sets, multisets, ordered sets (lists) and heterogeneous sets (sequences, corteges). Findings. The paper studies the properties and characteristics of the components of hybrid multiple objects of complex systems, proposes assessments of their complexity, and gives the rules of internal and external operations on such objects. A relation of arbitrary order over multiple objects is introduced, and descriptions of functions and mappings on objects of multiple structures are defined. Originality. The paper develops multiple structures that generate multiple objects. Practical value. The transition from abstract to subject multiple structures requires transformation of the system and the multiple objects. Transformation involves three successive stages: specification (binding to the domain), interpretation (multiple sites) and particularization (goals). The proposed systems approach to description is based on hybrid sets
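The "hybrid superposition" of the four component structures named in the abstract can be sketched concretely; the class and its component-wise union are hypothetical illustrations of the idea, not the paper's formalism:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class MultipleObject:
    """Hypothetical sketch of a hybrid multiple object: a superposition
    of a set, a multiset, an ordered set (list) and a cortege (tuple),
    the four component structures named in the abstract."""
    plain: set = field(default_factory=set)          # set: unordered, unique
    multi: Counter = field(default_factory=Counter)  # multiset: with repetition
    ordered: list = field(default_factory=list)      # list: ordered, mutable
    cortege: tuple = ()                              # cortege: ordered, fixed

    def union(self, other):
        """External operation: component-wise union/concatenation,
        respecting each component's own semantics."""
        return MultipleObject(
            self.plain | other.plain,
            self.multi + other.multi,
            self.ordered + other.ordered,
            self.cortege + other.cortege,
        )

a = MultipleObject({1, 2}, Counter('aab'), [1, 2], ('x',))
b = MultipleObject({2, 3}, Counter('b'), [3], ('y',))
u = a.union(b)
print(u.plain, u.multi['b'], u.ordered, u.cortege)
```

Note how the same operation behaves differently per component: the set absorbs the duplicate 2, while the multiset accumulates multiplicities and the list and cortege preserve order.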

The Chinese government increased the funding for public health in 2009 and in 2013 experimentally applied a contract service policy (which can be seen as a counterpart to family medicine) in 15 counties to promote public health services in rural areas. The contract service aimed to convert village doctors, who had practiced privately for decades, into general practitioners under government management, and to better control rampant chronic diseases. This study made a rare attempt to assess the effectiveness of public health services delivered under the contract service policy, explore the influencing mechanism and draw implications for extending the policy in the future. Three pilot counties and a non-pilot one, heterogeneous in economic and health development from east to west of China, were selected by a purposive sampling method. Case study methods comprising document collection, non-participant observation and interviews (including key informant interviews and focus group interviews) with 84 health providers and 20 demanders at multiple levels were applied in this study. A thematic approach was used to compare diverse outcomes and analyze the mechanism within the complex adaptive systems framework. Without sufficient incentives, the public health services were not conducted effectively, regardless of the implementation of the contract policy. Appropriately increasing the funding for public health from local finance and properly allocating subsidy to village doctors was one of the most effective approaches to stimulate health providers' and demanders' motivation and promote the policy implementation. County health bureaus acted as the most crucial agents in the complex public health systems. Their mental models, influenced by the compound and varied environments around them, led to the diverse outcomes. If they could provide extra incentives and make the contexts of the systems ripe enough for change, the health providers and demanders would be receptive to the

Full Text Available Abstract Background Case management has been suggested as an innovative strategy that facilitates the improvement of a patient's quality of life, reduction of hospital length of stay, optimization of self-care and improvement of satisfaction of patients and the professionals involved. However, there is little evidence about the effectiveness of the patient advocacy case management model in clinical practice. Therefore, the objective of our study was to examine the effects of the Dutch patient advocacy case management model for severely disabled Multiple Sclerosis (MS) patients and their caregivers compared to usual care. Methods/design In this randomized controlled trial the effectiveness of case management on quality of life of patients and their caregivers, quality of care, service use and economic aspects was evaluated. The primary outcomes of this study were quality of life of MS patients and caregiver burden of caregivers. Furthermore, we examined quality of life of caregivers, quality of care, service use and costs. Discussion This is a unique trial in which we examined the effectiveness of case management from a broad perspective. We meticulously prepared this study and applied important features and created important conditions for both intervention and research protocol to increase the likelihood of finding evidence for the effectiveness of patient advocacy case management. Concerning the intervention, we attended to five important conditions: (1) the contrast between the case management intervention and usual care seems to be large enough to detect intervention effects; (2) we included patients with complex care situations and/or at risk for critical situations; (3) the case managers were familiar with disease-specific health problems and a broad spectrum of solutions; (4) case managers were competent and authorized to perform a medical neurological examination and worked closely with neurologists specialized in MS; and (5) the

This book introduces the concept of autonomic computing driven cooperative networked system design from an architectural perspective. As such it leverages and capitalises on the relevant advancements in both the realms of autonomic computing and networking by welding them closely together. In particular, a multi-faceted Autonomic Cooperative System Architectural Model is defined which incorporates the notion of Autonomic Cooperative Behaviour being orchestrated by the Autonomic Cooperative Networking Protocol of a cross-layer nature. The overall proposed solution not only advocates for the inc

Multiple sclerosis is the most common chronic inflammatory disease of myelin, with interspersed lesions in the white matter of the central nervous system. Magnetic resonance imaging (MRI) plays a key role in the diagnosis and monitoring of white matter diseases. This article focuses on key findings in multiple sclerosis as detected by MRI. (orig.)

Gardner's Multiple Intelligences Theory (MIT) can bring cognitive and emotional improvement if it is taken into account in the standard development of Technology lessons. This work presents a preliminary evaluation of the performance enhancement in two concomitant aspects: content acquisition and emotional yield. The study was made on up to 150…

Dry powder inhalation of antibiotics in cystic fibrosis (CF) therapy may be a valuable alternative for wet nebulisation, because it saves time and it improves lung deposition. In this study, it is shown that the use of multiple air classifier technology enables effective dispersion of large amounts

A study of multiple homicides or multiple deaths involving a solitary incident of violence by another individual was performed on the case files of the Office of the Medical Examiner of Metropolitan Dade County in Miami, Florida, during 1983-1987. A total of 107 multiple homicides were studied: 88 double, 17 triple, one quadruple, and one quintuple. The 236 victims were analyzed regarding age, race, sex, cause of death, toxicologic data, perpetrator, locale of the incident, and reason for the incident. This article compares this type of slaying with other types of homicide including those perpetrated by serial killers. Suggestions for future research in this field are offered.

Multiple sclerosis (MS) is a nervous system disease that affects your brain and spinal cord. It damages the myelin sheath, the material that surrounds and protects your nerve cells. This damage slows down ...

Advances in the imaging and treatment of multiple myeloma have occurred over the past decade. This article summarises the current status and highlights how an understanding of both is necessary for optimum management.

... with multiple mononeuropathy are prone to new nerve injuries at pressure points such as the knees and elbows. They should avoid putting pressure on these areas, for example, by not leaning on the elbows, crossing the knees, ...

input/output relationship. These are obtained from the design specifications (10:681-684). Note that the first digit of the subscript of bkj refers to the output and the second digit to the input. Thus, bkj is a function of the response requirements on the output, yk, due to the input, rj. (The remainder of this record is unreadable OCR residue from a FORTRAN line-printer plot-subroutine listing.)

Improvements in performance and pass rates achieved by first-year engineering students at the University of Concepcion, Chile, were studied after a virtual didactic model of multiple-choice exam was implemented. This virtual learning resource was implemented on the Web ARCO platform and allows training by taking practice exams comparable in both time and difficulty to those the students will have to solve during the course. It also provides a feedback mechanism for both: 1) the students, since they c...

In a cross-sectional study of 117 randomly selected patients (52 men, 65 women) with definite multiple sclerosis, it was found that 76 percent were married or cohabiting and 8 percent divorced. Social contacts remained unchanged for 70 percent, but outgoing social contacts were reduced for 45 percent. Need for structural changes in the home and need for a pension became greater with increasing physical handicap. No significant differences between genders were found. It is concluded that patients and relatives are under increased social strain when multiple sclerosis progresses to a moderate handicap...

Multiple myeloma is a malignant plasma cell tumor thought to originate from the proliferation of a single clone of abnormal plasma cells, resulting in the production of a monoclonal paraprotein. The authors experienced a case of multiple myeloma with severe mandibular osteolytic lesions in a 46-year-old female. After careful analysis of the clinical, radiological and histopathological features and the laboratory findings, we diagnosed it as multiple myeloma, and the following results were obtained. 1. The main clinical symptoms were intermittent dull pain in the mandibular body area, abnormal sensation of the lip, and pain due to a fracture of the right clavicle. 2. Laboratory findings revealed an M-spike, a reversed serum albumin-globulin ratio, markedly elevated ESR, and hypercalcemia. 3. Radiographically, multiple osteolytic punched-out radiolucencies were evident on the skull, zygoma, jaw bones, ribs, clavicle and upper extremities. An enlarged liver and increased uptake at the lesional sites on radionuclide scan were also observed. 4. Histopathologically, markedly hypercellular marrow with sheets of plasmablasts and megakaryocytes was observed.

Forty-two (12%) of a total of 366 patients with multiple sclerosis (MS) had psychiatric admissions. Of these, 34 (81%) had their first psychiatric admission in conjunction with or after the onset of MS. Classification by psychiatric diagnosis showed that there was a significant positive correlation...

In a cross-sectional investigation of 116 patients with multiple sclerosis, the social and sparetime activities of the patient were assessed by both patient and his/her family. The assessments were correlated to physical disability which showed that particularly those who were moderately disabled...

An investigation of the correlation between the ability to read TV subtitles and the duration of visual evoked potential (VEP) latency in 14 patients with definite multiple sclerosis (MS) indicated that VEP latency in patients unable to read the TV subtitles was significantly delayed in comparison...

In a cross-sectional study of 94 patients (42 males, 52 females) with definite multiple sclerosis (MS) in the age range 25-55 years, the correlation of neuropsychological tests with the ability to read TV-subtitles and with the use of sedatives is examined. A logistic regression analysis reveals...

This module on multiple sclerosis is intended for use in inservice or continuing education programs for persons who administer medications in long-term care facilities. Instructor information, including teaching suggestions, and a listing of recommended audiovisual materials and their sources appear first. The module goal and objectives are then…

... when your babies do. Though it can be hard to let go of the thousand other things you need to do, remember that your well-being is key to your ability to take care of your babies. What Problems Can Happen? It may be hard to tell multiple babies apart when they first ...

Purpose: To develop a high-throughput, cost-effective diagnostic strategy for the identification of known and new mutations in 90 retinal disease genes. Design: Evidence-based study. Participants: Sixty patients with a variety of retinal disorders, including Leber's congenital amaurosis, ocular

Full Text Available Radio frequency identification (RFID technology has already been explored for efficient self-localization of indoor mobile robots. A mobile robot equipped with RFID readers detects passive RFID tags installed on the floor in order to locate itself. The Monte-Carlo localization (MCL method enables the localization of a mobile robot equipped with an RFID system with reasonable accuracy, sufficient robustness and low computational cost. The arrangements of RFID readers and tags and the size of antennas are important design parameters for realizing accurate and robust self-localization using a low-cost RFID system. The design of a likelihood model of RFID tag detection is also crucial for the accurate self-localization. This paper presents a novel design and arrangement of RFID readers and tags for indoor mobile robot self-localization. First, by considering small-sized and large-sized antennas of an RFID reader, we show how the design of the likelihood model affects the accuracy of self-localization. We also design a novel likelihood model by taking into consideration the characteristics of the communication range of an RFID system with a large antenna. Second, we propose a novel arrangement of RFID tags with eight RFID readers, which results in the RFID system configuration requiring much fewer readers and tags while retaining reasonable accuracy of self-localization. We verify the performances of MCL-based self-localization realized using the high-frequency (HF-band RFID system with eight RFID readers and a lower density of RFID tags installed on the floor based on MCL in simulated and real environments. The results of simulations and real environment experiments demonstrate that our proposed low-cost HF-band RFID system realizes accurate and robust self-localization of an indoor mobile robot.
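
The likelihood-weighted particle update at the heart of MCL can be sketched as follows. This is a minimal illustration, not the paper's model: the sigmoidal fall-off of detection probability with distance, the parameter values, and all function names are assumptions standing in for the likelihood model the authors design for their HF-band readers.

```python
import math

def tag_detection_likelihood(particle_xy, tag_xy, detected,
                             comm_radius=0.25, slope=30.0):
    """Hypothetical smooth likelihood for one RFID tag: the probability
    of detecting the tag falls off sigmoidally with the distance between
    the hypothesized reader position (particle) and the tag."""
    d = math.dist(particle_xy, tag_xy)
    p_detect = 1.0 / (1.0 + math.exp(slope * (d - comm_radius)))
    return p_detect if detected else 1.0 - p_detect

def mcl_update(particles, weights, tag_positions, detections):
    """One MCL measurement update: reweight each particle by the joint
    likelihood of the observed tag detections, then normalize."""
    new_w = []
    for p, w in zip(particles, weights):
        lik = 1.0
        for tag_id, detected in detections.items():
            lik *= tag_detection_likelihood(p, tag_positions[tag_id], detected)
        new_w.append(w * lik)
    total = sum(new_w) or 1.0
    return [w / total for w in new_w]
```

After a detection, particles near the detected tag gain weight and particles far from it lose weight, which is how a denser tag grid (or a larger antenna with a well-modeled communication range) sharpens the pose estimate.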

A lift fan exhaust suppression system to meet future VTOL aircraft noise goals was designed and tested. The test vehicle was a 1.3-pressure-ratio, 36-inch (91.44 cm) diameter lift fan with two-chord rotor-to-stator spacing. A two-splitter fan exhaust suppression system thirty inches (76.2 cm) long achieved 10 PNdB of exhaust suppression in the aft quadrant, compared to a design value of 20 PNdB. It was found that a broadband noise floor limited the realizable suppression. An analytical investigation of broadband noise generated by flow over the treatment surfaces gave very good agreement with the measured suppression levels and noise-floor sound power levels. A fan thrust decrement of 22% was measured for the fully suppressed configuration, of which 11.1% was attributed to the exhaust suppression hardware.

A distillation unit has been designed for a capacity of 73 t/h of condensate and for at least 90% recovery of the contaminating organics. This unit consists of three columns: a primary stripper to remove volatile organics and two upgrading columns to purify the methanol and furfural byproducts. Three different energy-saving alternatives for satisfying the energy requirements have been studied: utilisation of secondary steam from the evaporation plant, and application of the principle of multi-effect distillation in one-stripper and in two-stripper configurations. Investment cost needed in all alternatives amounts to 5.5 to 6.0 MCr (millions of Swedish Crowns) while operating cost varies between 0.8 to 3.1 MCr. The first design alternative has a payoff period of 2.3 years while the two multi-effect distillation alternatives have a payoff period of about 3 years.

Eleven patients with a definite diagnosis of multiple sclerosis were examined in terms of correlations between the clinical features and the results of cranial computed tomography (CT), and magnetic resonance imaging (MRI). Results: In 5 of the 11 patients, both CT and MRI demonstrated lesions consistent with a finding of multiple sclerosis. In 3 patients, only MRI demonstrated lesions. In the remaining 3 patients, neither CT nor MRI revealed any lesion in the brain. All 5 patients who showed abnormal findings on both CT and MRI had clinical signs either of cerebral or brainstem - cerebellar lesions. On the other hand, two of the 3 patients with normal CT and MRI findings had optic-nerve and spinal-cord signs. Therefore, our results suggested relatively good correlations between the clinical features, CT, and MRI. MRI revealed cerebral lesions in two of the four patients with clinical signs of only optic-nerve and spinal-cord lesions. MRI demonstrated sclerotic lesions in 3 of the 6 patients whose plaques were not detected by CT. In conclusion, MRI proved to be more helpful in the demonstration of lesions attributable to chronic multiple sclerosis. (author)

In different fungal and algal species, the intracellular concentration of reduced glutathione (GSH) correlates closely with their susceptibility to killing by the small-molecule alkylating agent 3-bromopyruvate (3BP). Additionally, in the case of Cryptococcus neoformans cells, 3BP exhibits a synergistic effect with buthionine sulfoximine (BSO), a known GSH-depleting agent. This effect was observed when 3BP and BSO were used together at concentrations 4-5 and almost 8 times lower, respectively, than their Minimal Inhibitory Concentration (MIC). Finally, at different concentrations of 3BP (equal to the half-MIC, MIC and double-MIC in the case of fungi, 1 mM and 2.5 mM for microalgae, and 25, 50 and 100 μM for human multiple myeloma (MM) cells), a significant decrease in GSH concentration is observed inside microorganisms as well as tumor cells. In contrast to the decrease in GSH concentration, the presence of 3BP at concentrations corresponding to sub-MIC values or the half-maximal inhibitory concentration (IC50) clearly results in increased expression of genes encoding enzymes involved in the synthesis of GSH in Cryptococcus neoformans and MM cells. Moreover, as shown for the first time in the MM cell model, the drastic decrease in the ATP level and GSH concentration and the increase in the amount of ROS caused by 3BP ultimately result in cell death.

An analytical expression for calculating the reflection-peak wavelengths (RPWs) of a uniform sampled fiber Bragg grating (SFBG) with the multiple-phase-shift (MPS) technique is derived through Fourier transform of the index modulation. The new expression can accurately depict the RPWs incorporating various parameters such as the duty cycle and the DC index change. The effectiveness of the derived expression is further confirmed by comparing the RPWs estimated from the expression with the simulated reflective spectra using the piecewise uniform method. And the reflective spectrum has been well optimized by introducing the Gaussian apodization function to suppress the sidelobes without any wavelength shift on the RPWs. Then, a high-channel-count comb filter based on MPS is proposed by cascading two or more SFBGs with different Bragg periods but with the same RPWs. Noticeably, the RPWs of the new structured SFBG can also be accurately calculated through the expression. Furthermore, the number of spectral channels can be controlled by choosing gratings with specified difference Bragg periods.
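
The abstract's derived expression is not reproduced here, but the textbook zeroth-order comb relation for a sampled FBG gives the flavor: with Bragg wavelength λ_B, effective index n_eff and sampling period P, the reflection peaks sit at

```latex
% Zeroth-order comb of reflection-peak wavelengths of a sampled FBG
% (the paper's full expression additionally accounts for duty cycle
% and DC index change)
\lambda_m \approx \lambda_0 + m\,\frac{\lambda_B^{2}}{2\,n_{\mathrm{eff}}\,P},
\qquad
\Delta\lambda_{\mathrm{ch}} = \frac{\lambda_B^{2}}{2\,n_{\mathrm{eff}}\,P}
```

so the channel spacing Δλ_ch is set by the sampling period, which is why cascading SFBGs with different Bragg periods but identical RPWs, as proposed, multiplies the channel count without shifting the comb.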

We propose a randomized controlled trial (RCT) examining the feasibility of square-stepping exercise (SSE) delivered as a home-based program for older adults with multiple sclerosis (MS). We will assess feasibility in the four domains of process, resources, management and scientific outcomes. The trial will recruit older adults (aged 60 years and older) with mild-to-moderate MS-related disability who will be randomized into intervention or attention control conditions. Participants will complete assessments before and after completion of the conditions delivered over a 12-week period. Participants in the intervention group will have biweekly meetings with an exercise trainer in the Exercise Neuroscience Research Laboratory and receive verbal and visual instruction on step patterns for the SSE program. Participants will receive a mat for home-based practice of the step patterns, an instruction manual, and a logbook and pedometer for monitoring compliance. Compliance will be further monitored through weekly scheduled Skype calls. This feasibility study will inform future phase II and III RCTs that determine the actual efficacy and effectiveness of a home-based exercise program for older adults with MS.

A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimation error variance of the selected indicator parameters. Each parameter was given equal weight at most aquifer positions and a higher weight in priority zones. The objective for the monitoring network in this specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and the optimal monitoring sites were selected to obtain a low-uncertainty estimate of these parameters for the entire aquifer, with greater certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with the highest priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90 % of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help reduce monitoring costs by avoiding redundancy in data acquisition.
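
The variance-minimizing site selection of step (4) can be illustrated with a greedy sketch. This is an assumption-laden stand-in: the exponential-variogram proxy below replaces the paper's Kalman-filter variance update, and the weights mimic the priority-zone weighting; all names and parameter values are illustrative.

```python
import math

def greedy_network(candidates, weights, n_select, sill=1.0, rng=0.5):
    """Greedily pick monitoring wells one at a time, each time choosing
    the candidate that most reduces the weighted average estimation
    variance over all candidate positions."""
    def variance(x, selected):
        # Crude kriging-variance proxy: variance is low near a
        # monitored well and saturates at the sill far from all wells.
        if not selected:
            return sill
        d = min(math.dist(x, s) for s in selected)
        return sill * (1.0 - math.exp(-d / rng))

    selected, remaining = [], list(candidates)
    while len(selected) < n_select and remaining:
        def avg_var(trial):
            sel = selected + [trial]
            return sum(w * variance(x, sel)
                       for x, w in zip(candidates, weights)) / sum(weights)
        best = min(remaining, key=avg_var)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because priority zones carry larger weights, the first wells chosen tend to land in or near those zones, mirroring the study's emphasis on greater certainty where it matters most.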

It remains an unsolved problem to quantify a natural microbial community by rapidly and conveniently measuring multiple species of functional significance. The most widely used high-throughput next-generation sequencing methods mainly generate information for genus-level taxonomic identification and quantification, and detection of multiple species in a complex microbial community still depends heavily on approaches based on near full-length ribosomal RNA gene or genome sequence information. In this study, we used near full-length rRNA gene library sequencing plus Primer-BLAST to design species-specific primers based on whole microbial genome sequences. The primers were intended to be specific at the species level within the relevant microbial communities, i.e., a defined genomic background. The primers were tested with samples collected from the Daqu (also called fermentation starters) and pit mud of a traditional Chinese liquor production plant. Sixteen pairs of primers were found to be suitable for identification of individual species. Among them, seven pairs were chosen to measure the abundance of microbial species through quantitative PCR. The combination of near full-length ribosomal RNA gene library sequencing and Primer-BLAST may represent a broadly useful protocol for quantifying multiple species in complex microbial population samples with species-specific primers.
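
The qPCR abundance measurement mentioned above rests on the standard absolute-quantification relation Cq = intercept + slope·log10(copies). The sketch below applies that textbook relation; the slope and intercept values are hypothetical placeholders, not the study's calibration.

```python
def copies_from_cq(cq, slope=-3.32, intercept=38.0):
    """Estimate template copy number from a quantification cycle (Cq)
    via a hypothetical standard curve Cq = intercept + slope*log10(N).
    A slope of about -3.32 corresponds to ~100 % amplification
    efficiency (one cycle per doubling)."""
    return 10 ** ((cq - intercept) / slope)

def relative_abundance(cqs):
    """Convert per-species Cq values (from species-specific primers)
    into relative abundances normalized to sum to 1."""
    copies = {sp: copies_from_cq(cq) for sp, cq in cqs.items()}
    total = sum(copies.values())
    return {sp: n / total for sp, n in copies.items()}
```

A species amplifying 3.32 cycles earlier than another is thus estimated to be roughly tenfold more abundant, which is the arithmetic behind comparing the seven chosen primer pairs across samples.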

The purpose of this study was to investigate the interrelationships among student's demographics, attitudes toward the Unified Modeling Language (UML), general self-efficacy, and multiple intelligence (MI) profiles, and the use of UML to develop software. The dependent measures were course grades and course project scores. The study was grounded in problem solving theory, self-efficacy theory, and multiple intelligence theory. The sample was an intact class of 18 students who took the junior-level Software Design Methods course, CSE 3421, at Florida Institute of Technology in the Spring 2008 semester. The course incorporated instruction in UML with Java. Attitudes were measured by a researcher-modified instrument derived from the Computer Laboratory Survey by Newby and Fisher, and self-efficacy was measured by the Generalized Self-Efficacy Scale developed by Schwarzer and Jerusalem. MI profiles, which were the proportion of Gardner's eight intelligences, were determined from Shearer's Multiple Intelligence Developmental Assessment Scales. Results from a hierarchical multiple regression analysis showed that only the collective set of MI profiles was significant, but none of the individual intelligences were significant. The study's findings supported what one would expect to find relative to problem solving theory, but were contradictory to self-efficacy theory. The findings also supported Gardner's concept that multiple intelligences must be considered as an integral unit and the importance of not focusing on an individual intelligence. The findings imply that self-efficacy is not a major consideration for a software design methods class that requires a transition to problem solving strategy and suggest that the instructor was instrumental in fostering positive attitudes toward UML. Recommendations for practice include (1) teachers should not be concerned with focusing on a single intelligence simply because they believe one intelligence might be more aligned to a

/value – The original contribution is in demonstrating how plural futures and the singular future co-exist in practice. Thus, an eclipse of the future by futures can only ever be partial. For "futures" to be conceptually potent, "the future" must be at least provisionally believable and occasionally useful. Otherwise, if "the future" were so preposterous an idea, then "futures" would cease to be a critical alternative to it. Futures needs the future; they are relationally bound together in a multiplicity. This paper considers what such a logical reality implies for a field that distances itself from the future and self... Multiplicity, as a post-ANT sensibility, helps one make sense of the empirical materials. This paper examines the possibility that rather than being alternatives to one another, plural futures and the singular future might co-exist in practice and, thus, constitute a multiplicity. Design...

The Theory of Inflation, namely, that at some point the entropy content of the universe was greatly increased, has much promise. It may solve the puzzles of homogeneity and the creation of structure. However, no particle physics model has yet been found that can successfully drive inflation. The difficulty of satisfying the constraint that the isotropy of the microwave background places on the effective potential of prospective models is immense. In this work we have codified the requirements of such models in a most general form. We have carefully calculated the amounts of inflation the various problems of the Standard Model need for their solution. We have derived a completely model-independent upper bound on the inflationary Hubble parameter. We have developed a general notation with which to probe the possibilities of Multiple Inflation. We have shown that only in very unlikely circumstances will any evidence of an earlier inflation survive the de Sitter period of its successor. In particular, it is demonstrated that it is most unlikely that two bouts of inflation will yield high amplitudes of density perturbations on small scales and low amplitudes on large scales. We conclude that, while multiple inflation will be of great theoretical interest, it is unlikely to have any observational impact.

The global response to the rise in prevalence of chronic disease is a focus on the way services are managed and delivered, in which nurses are seen as central in shaping patient experience. However, relatively little is known about how patients perceive the changes to service delivery envisaged by chronic care models. The PEARLE project aimed to explore, identify and characterise the origins, processes and outcomes of effective chronic disease management models and the nursing contributions to these models. Design, settings and participants: a case study design of seven sites in England and Wales, ensuring a range of chronic disease management models. Participants included over ninety patients and family carers, ranging in age from children to older people, with conditions such as diabetes, respiratory disease, epilepsy or coronary heart disease. Semi-structured interviews were conducted with patients and family carers, and focus groups were conducted with adolescents and children. A whole-systems approach guided data collection, and data were thematically analysed. Despite nurses' role and skill development and the shift away from the acute care model, the results suggested that patients had a persisting belief that the monopoly of expertise continues to reside in the acute care setting. Patients were more satisfied if they saw the nurse as diagnostician, prescriber and medical manager of the condition. Patients were less satisfied when they had been transferred from an established doctor-led to a nurse-led service. While the nurses within the study were highly skilled, patient perception was guided by the familiar rather than the most appropriate service delivery. Most patients saw chronic disease management as a medicalised approach, and the nursing contribution was most valued when emulating it. Patients' preferences and expectations of chronic disease management were framed by a strongly biomedical discourse. Perceptions of nurse-led chronic disease management were often shaped by what was

Recent industry-academic partnerships involve collaboration across disciplines, locations, and organizations using publicly funded “open-access” and proprietary commercial data sources. These require effective integration of chemical and biological information from diverse data sources, presenting key informatics, personnel, and organizational challenges. BARD (BioAssay Research Database) was conceived to address these challenges and to serve as a community-wide resource and intuitive web portal for public-sector chemical biology data. Its initial focus is to enable scientists to more effectively use the NIH Roadmap Molecular Libraries Program (MLP) data generated from 3-year pilot and 6-year production phases of the Molecular Libraries Probe Production Centers Network (MLPCN), currently in its final year. BARD evolves the current data standards through structured assay and result annotations that leverage the BioAssay Ontology (BAO) and other industry-standard ontologies, and a core hierarchy of assay definition terms and data standards defined specifically for small-molecule assay data. We have initially focused on migrating the highest-value MLP data into BARD and bringing it up to this new standard. We review the technical and organizational challenges overcome by the inter-disciplinary BARD team, veterans of public and private sector data-integration projects, collaborating to describe (functional specifications), design (technical specifications), and implement this next-generation software solution. PMID:24441647

Repeat abortion is a public health concern favored by many obstetric and social factors. The purpose of our study was to identify factors associated with repeat abortion in the region of Monastir (Tunisia). Common mental disorders (CMD) such as anxiety and depression were also evaluated in women seeking voluntary repeat abortion. We carried out a cross-sectional study between January and April 2013 in the Reproductive Health Center (RHC) of the region of Monastir in Tunisia (this study is part of a prospective design on mental disorders and intimate partner violence among women seeking abortions at the RHC). Among women referred to the RHC, we selected those seeking voluntary abortion (medical or surgical method). Data on women's demographic characteristics, knowledge and practices regarding contraceptive methods and abortion were collected on the day of the abortion via a structured questionnaire. Anxiety and depression status was evaluated during the post-abortion control visit at 3-4 weeks following pregnancy termination. Of the 500 interviewed women, 211 (42.2 %; 95 % CI [37.88-46.52]) were seeking repeat abortions. Multivariate analysis showed that increased age, lower level of school education, single status, poor knowledge about birth control methods, and a history of conflict/abuse by a male partner were uniquely associated with undergoing a repeat compared with an initial abortion. CMD were significantly more frequent in women undergoing a second or subsequent abortion (51.1 %) and in single and less-educated women. Women reporting a history of conflict/abuse reported more CMD than others (30.6 % vs 20.8 %). Health facilities providing abortion services need to pay more attention to women seeking repeat abortion. Further studies are needed to better establish the relation between the number of abortions and the occurrence and severity of CMD.

The health care costs of kidney transplantation and dialysis are generally unknown. This study estimates the Swedish health care costs of kidney transplantation and dialysis over 10 years from a health care perspective. A before-after design was used, in which the patients served as their own controls. Health care costs in the year before transplantation were assumed to continue in the absence of a transplant, and the cost savings were therefore calculated as the difference between the expected costs and the actual costs during the 10-year follow-up period. Factors associated with the size of the cost savings were studied using ordinary least-squares regression. Altogether, 66-79% of the expected health care costs over 10 years were avoided through kidney transplantation, resulting in cost savings of €380,000 (2012 price-year) per patient. Savings were highest for successful transplantations, but on average the treatment was cost-saving also for patients who returned to dialysis. No gender or age differences could be found, with the exception of a higher cost of transplantation for children and generally higher costs for younger compared with older patients on dialysis. A negative association was also found between age at the time of transplantation and the size of the cost savings for the younger part of the sample. Kidney transplantations have led to substantial cost savings for the Swedish health care system. An increase in donated kidneys has the potential to further reduce the cost of renal replacement therapy.

Several RCT studies have aimed to reduce musculoskeletal disorders, sickness presenteeism, sickness absenteeism or a combination of these among females with high physical work demands. These studies have provided evidence that workplace health promotion (WHP) interventions are effective, but long-term effects are still uncertain: the interventions either did not succeed in maintaining their effects, or did not document whether effects were maintained past a one-year period. This paper describes the background, design and conceptual model of the FRIDOM (FRamed Intervention to Decrease Occupational Muscle pain) WHP program among health care workers, a job group characterized by high physical work demands, musculoskeletal disorders, and high sickness presenteeism and absenteeism. FRIDOM aimed to reduce neck and shoulder pain. Secondary aims were to decrease sickness presenteeism, sickness absenteeism and lifestyle diseases such as other musculoskeletal disorders as well as metabolic and cardiovascular disorders, and to maintain participation in regular physical exercise training after a one-year intervention period. The entire concept was tailored to a population of female health care workers through a multi-component intervention including 1) intelligent physical exercise training (IPET), 2) dietary advice and weight loss (DAW) and 3) cognitive behavioural training (CBT). The FRIDOM program has the potential to provide evidence-based knowledge of the pain-reducing effect of a multi-component WHP program among a female group of employees with a high prevalence of musculoskeletal disorders and, in a long-term perspective, to evaluate the effects on sickness presenteeism and absenteeism as well as the risk of lifestyle diseases. Trial registration: NCT02843269, 06.27.2016 - retrospectively registered.

Most radiolabeled somatostatin analogues (SSAs) are specific for somatostatin receptor subtype 2 (SSTR2). The lack of ligands targeting other SSTR subtypes, especially SSTR1, SSTR3 and SSTR5, has limited their application in tumors with low SSTR2 expression, including lung tumors. In this study, we aimed to design and synthesize a positron emission tomography (PET) radiotracer targeting multiple SSTR subtypes for PET imaging. The PA1 peptide and its conjugates with a 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) chelator or fluorescein isothiocyanate (FITC) at the N-terminal lysine position were synthesized. 68Ga was chelated to DOTA-PA1 to obtain the 68Ga-DOTA-PA1 radiotracer. The stability, lipophilicity, binding affinity and binding specificity of 68Ga-DOTA-PA1 and FITC-PA1 were evaluated in various in vitro experiments. Micro-PET imaging of 68Ga-DOTA-PA1 was performed in nude mice bearing A549 lung adenocarcinoma, in comparison with 68Ga-DOTA-(Tyr3)-octreotate (68Ga-DOTA-TATE). Histological analysis of SSTR expression in A549 tumor tissues and human tumor tissues was conducted using immunofluorescence staining and immunohistochemical assays. 68Ga-DOTA-PA1 had a high radiochemical yield and radiochemical purity of over 95% and 99%, respectively. The radiotracer was stable in vitro in different buffers over a 2 h incubation period. Cell uptake of 68Ga-DOTA-PA1 was 1.31-, 1.33- and 1.90-fold that of 68Ga-DOTA-TATE, which has high binding affinity only for SSTR2, after 2 h of incubation in H520, PG and A549 lung cancer cell lines, respectively. Micro-PET images of 68Ga-DOTA-PA1 showed that the PET imaging signal correlated with the total expression of SSTRs, instead of SSTR2 only, as measured by Western blotting and immunofluorescence analysis in mice bearing A549 tumors. In summary, a novel PET radiotracer, 68Ga-DOTA-PA1, targeting multiple SSTR subtypes was successfully synthesized and confirmed to be useful for PET

Standard radiotherapy is the treatment of first choice in patients with symptomatic spinal metastases, but is only moderately effective. Stereotactic body radiation therapy is increasingly used to treat spinal metastases, without randomized evidence of superiority over standard radiotherapy. The VERTICAL study aims to quantify the effect of stereotactic radiation therapy in patients with metastatic spinal disease. This study follows the 'cohort multiple Randomized Controlled Trial' design and is conducted within the PRESENT cohort. In PRESENT, all patients with bone metastases referred for radiation therapy are enrolled. For each patient, clinical and patient-reported outcomes are captured at baseline and at regular intervals during follow-up. In addition, patients give informed consent to be offered experimental interventions. Within PRESENT, 110 patients are identified as a sub-cohort of eligible patients (i.e. patients with unirradiated, painful, mechanically stable spinal metastases who are able to undergo stereotactic radiation therapy). After a protocol amendment, patients with non-spinal bone metastases also became eligible. From the sub-cohort, a random selection of patients is offered stereotactic radiation therapy (n = 55), which patients may accept or refuse. Only patients accepting stereotactic radiation therapy sign informed consent for the VERTICAL trial. Non-selected patients (n = 55) receive standard radiotherapy and are not aware that they serve as controls. The primary endpoint is pain response after three months. Data will be analyzed by intention to treat, complemented by instrumental-variable analysis in case of substantial refusal of stereotactic radiation therapy in the intervention arm. This study is designed to quantify the treatment response after (stereotactic) radiation therapy in patients with symptomatic spinal metastases. This is the first randomized study in palliative care following the cohort multiple Randomized

Full Text Available Reflexive research can be grouped into five clusters with circular relations between two elements, x ↔ x: circular relations between observers, between scientific building blocks such as concepts, theories, or models, between systemic levels, between rules and rule systems, or as circular relations x ↔ y between these four components. By far the most important is the second cluster, which becomes reflexive through a re-entry operation RE into a scientific element x and which establishes its circular formation as x(x). Many of the research problems in these five clusters of reflexivity research are still unexplored and pose grand challenges for future research.

In light of the many challenges of resource scarcity, climate change, rapid urbanization and changing social patterns facing societies today, mainstream architecture remains remarkably 'resilient' to conceptual innovation regarding its nature and role in society. If the idea of open architecture...

Worldwide, belt conveyors are used to transport a great variety of bulk solid materials. The desire to carry higher tonnages over longer distances and more diverse routes, while keeping exploitation costs as low as possible, has fuelled many technological advances. An interesting development in the

The security of Android has recently been challenged by the discovery of a number of vulnerabilities involving different layers of the Android stack. We argue that such vulnerabilities are largely related to the interplay among the layers composing the Android stack. Thus, we also argue that this interplay has been underestimated from a security point of view and that a systematic analysis of the Android interplay has not yet been carried out. To this end, in this paper we provide a simple model of th...

the resource utilization. The algorithm makes decisions based on the channel conditions, the size of the transmission buffers and different quality of service demands. The simulation results show that the new algorithm improves the resource utilization and provides better guaranties for service quality....
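The record describes a scheduler that decides based on channel conditions, transmission-buffer sizes, and quality-of-service demands. As an illustration of that kind of cross-layer rule (not the paper's actual algorithm, which is not given here), below is a minimal max-weight-style scheduler sketch in Python; all names, rates, and weights are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    channel_rate: float   # achievable rate this slot (bits/slot), from channel state
    buffer_bits: int      # backlog in the transmission buffer
    qos_weight: float     # larger for more delay-sensitive traffic classes

def schedule(users):
    """Pick the user maximizing qos_weight * buffer_bits * channel_rate.

    A max-weight-style rule: it favors users combining a good channel,
    a large backlog, and a demanding QoS class. Users with empty
    buffers are never scheduled.
    """
    eligible = [u for u in users if u.buffer_bits > 0]
    if not eligible:
        return None
    return max(eligible, key=lambda u: u.qos_weight * u.buffer_bits * u.channel_rate)

users = [
    User("voice", channel_rate=1e5, buffer_bits=2_000, qos_weight=4.0),
    User("video", channel_rate=5e5, buffer_bits=50_000, qos_weight=2.0),
    User("data",  channel_rate=8e5, buffer_bits=10_000, qos_weight=1.0),
]
print(schedule(users).name)  # → video
```

A pure channel-rate rule would starve backlogged users on poor channels; weighting by backlog and QoS class is what lets a rule of this shape trade raw throughput against service-quality guarantees, which is the balance the abstract describes.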

Damage assessment plays a very important role in securing enterprise networks and systems. Gaining good awareness of the effects and impact of cyber attack actions would enable security officers to make the right cyber defense decisions and take the right cyber defense actions. A good number of damage assessment techniques have been proposed in the literature, but they typically focus on a single abstraction level (of the software system concerned). As a result, existing damage assessment techniques and tools are still very limited in satisfying the needs of comprehensive damage assessment, which should not leave any “blind spots”.

This article describes and illustrates a novel form of the changing criterion design called the distributed criterion design, which represents perhaps the first advance in the changing criterion design in four decades. The distributed criterion design incorporates elements of the multiple baseline and A-B-A-B designs and is well suited to applied…

National Aeronautics and Space Administration — This effort will research and implement advanced Multiple-Input Multiple-Output (MIMO) Synthetic Aperture Radar (SAR) techniques which have the potential to improve...

The body-weight-support treadmill (BWST) is commonly used for gait rehabilitation, but other forms of BWST are in development, such as visual-deprivation BWST (VDBWST). In this study, we compare the effect of VDBWST training and conventional BWST training on spatiotemporal gait parameters for three individuals who had hemiparetic strokes. We used a single-subject experimental design, alternating multiple baselines across the individuals. We recruited three individuals with hemiparesis from stroke; two on the left side and one on the right. For the main outcome measures we assessed spatiotemporal gait parameters using GAITRite, including: gait velocity; cadence; step time of the affected side (STA); step time of the non-affected side (STN); step length of the affected side (SLA); step length of the non-affected side (SLN); step-time asymmetry (ST-asymmetry); and step-length asymmetry (SL-asymmetry). Gait velocity, cadence, SLA, and SLN increased from baseline after both interventions, but STA, ST-asymmetry, and SL-asymmetry decreased from the baseline after the interventions. The VDBWST was significantly more effective than the BWST for increasing gait velocity and cadence and for decreasing ST-asymmetry. VDBWST is more effective than BWST for improving gait performance during the rehabilitation for ground walking.
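The step-time and step-length asymmetry outcomes above can be computed in several ways, and this record does not state which formula the authors used. The snippet below uses one common symmetry-ratio definition purely as an illustration:

```python
def asymmetry(affected, nonaffected):
    """Symmetry ratio between the two sides; 0.0 means perfect symmetry.

    One common definition (the source abstract does not state which
    formula the authors used): |a - n| / (0.5 * (a + n)).
    """
    return abs(affected - nonaffected) / (0.5 * (affected + nonaffected))

# Hypothetical step times in seconds for the affected (STA) and
# non-affected (STN) sides of a hemiparetic gait.
print(round(asymmetry(0.72, 0.58), 3))  # → 0.215
```

Under any such definition, the reported decrease in ST-asymmetry after training means the affected- and non-affected-side step times moved closer together.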

We investigated a possible causal relation between exposure to organic solvents in Danish workers (house painters, typographers/printers, carpenters/cabinetmakers) and onset of multiple sclerosis. Data on men included in the Danish Multiple Sclerosis Register (3,241 men) were linked with data from…, and butchers. Over a follow-up period of 20 years, we observed no increase in the incidence of multiple sclerosis among men presumed to be exposed to organic solvents. It was not possible to obtain data on potential confounders, and the study design has some potential for selection bias. Nevertheless…, the study does not support existing hypotheses regarding an association between occupational exposure to organic solvents and multiple sclerosis…

BACKGROUND: Infectious mononucleosis caused by the Epstein-Barr virus has been associated with increased risk of multiple sclerosis. However, little is known about the characteristics of this association. OBJECTIVE: To assess the significance of sex, age at and time since infectious mononucleosis…, and attained age to the risk of developing multiple sclerosis after infectious mononucleosis. DESIGN: Cohort study using persons tested serologically for infectious mononucleosis at Statens Serum Institut, the Danish Civil Registration System, the Danish National Hospital Discharge Register, and the Danish… Multiple Sclerosis Registry. SETTING: Statens Serum Institut. PATIENTS: A cohort of 25 234 Danish patients with mononucleosis was followed up for the occurrence of multiple sclerosis beginning on April 1, 1968, or January 1 of the year after the diagnosis of mononucleosis or after a negative Paul...

Details concerning the design, fabrication and performance of the STAR Photon Multiplicity Detector (PMD) are presented. The PMD will cover the forward region, within the pseudorapidity range 2.3-3.5, behind the forward time projection chamber. It will measure the spatial distribution of photons in order to study collective flow, fluctuation and chiral symmetry restoration.

An immunochemical biosensor assay for the detection of multiple mycotoxins in a sample is described. The inhibition assay is designed to measure four different mycotoxins in a single measurement, following extraction, sample clean-up and incubation with an appropriate cocktail of anti-mycotoxin

solutions to determine sufficiently rational solutions to the problem of designing closed hydraulic networks, at least at the level of considering the generally accepted criteria as important to decide on the network's design, including the determination of the most rational trajectory of the network and the subjective aspects. The present work is the result of the collaboration among the Studies Center of Computer Aided Design and Manufacturing (CAD/CAM) of the University of Holguín “Oscar Lucero Moya”, the Studies Center of Renewable Energy Technology (CETER) of the Higher Polytechnical Institute “José Antonio Echeverría” and the Enterprise Group of Investigation, Project and Engineering (GEIPI) of the National Institute of Hydraulic Resource. Key Words: CAD, hydraulic networks, multiple criteria.

The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be…

A multiple-port valve assembly is designed to direct flow from a primary conduit into any one of a plurality of secondary conduits, as well as to direct a reverse flow. The valve includes two mating hemispherical sockets that rotatably receive a spherical valve plug. The valve plug is attached to the primary conduit and includes diverging passageways from that conduit to a plurality of ports. Each of the ports is alignable with one or more of a plurality of secondary conduits fitting into one of the hemispherical sockets. The other hemispherical socket includes a slot for the primary conduit, such that the conduit's motion along that slot, with rotation of the spherical plug about various axes, positions the valve-plug ports with respect to the secondary conduits.

Accurate clinical course descriptions (phenotypes) of multiple sclerosis (MS) are important for communication, prognostication, design and recruitment of clinical trials, and treatment decision-making. Standardized descriptions published in 1996 based on a survey of international MS experts...

This volume is important because despite various external representations, such as analogies, metaphors, and visualizations being commonly used by physics teachers, educators and researchers, the notion of using the pedagogical functions of multiple representations to support teaching and learning is still a gap in physics education. The research presented in the three sections of the book is introduced by descriptions of various psychological theories that are applied in different ways for designing physics teaching and learning in classroom settings. The following chapters of the book illustrate teaching and learning with respect to applying specific physics multiple representations in different levels of the education system and in different physics topics using analogies and models, different modes, and in reasoning and representational competence. When multiple representations are used in physics for teaching, the expectation is that they should be successful. To ensure this is the case, the implementati...

An optical-fiber digital communication network is designed to support the data-acquisition and control functions of electric-power-distribution networks. Optical-fiber links of the communication network follow power-distribution routes. Since the fiber crosses open power switches, the communication network includes multiple interconnected loops with occasional spurs. At each intersection, a node is needed. Nodes of the communication network include power-distribution substations and power-controlling units. In addition to serving data-acquisition and control functions, each node acts as a repeater, passing on messages to the next node(s). The multiple-ring communication network operates on the new AbNET protocol and features fiber-optic communication.
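AbNET's actual forwarding rules are not described in this record. As a sketch of the repeater behavior it mentions (each node passing messages on to the next nodes over multiple interconnected loops), here is a generic flood-with-deduplication example; the topology and node names are invented:

```python
from collections import deque

# Hypothetical multi-loop topology: two rings sharing substation node "S1".
links = {
    "S1": ["A", "B", "X", "Y"],
    "A":  ["S1", "B"],
    "B":  ["A", "S1"],
    "X":  ["S1", "Y"],
    "Y":  ["X", "S1"],
}

def flood(links, source):
    """Each node repeats a message to its neighbors the first time it sees it;
    later copies arriving over other loops are dropped. Because the loops are
    interconnected, the message still reaches every node even if one link
    (e.g. across an open power switch) is cut."""
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in links[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(flood(links, "A")))  # → ['A', 'B', 'S1', 'X', 'Y']
```

The deduplication set is what makes a multi-loop topology workable: without it, messages would circulate around the rings indefinitely.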

A silicon photonics optical thermometer operating simultaneously on multiple polarizations is designed and experimentally demonstrated. Measured sensitivities are 86 pm/°C and 48 pm/°C for the transverse-electric and transverse-magnetic polarizations, respectively.

Various examples are provided for generalized internal multiple imaging (GIMI). In one example, among others, a method includes generating a higher order internal multiple image using a background Green's function and rendering the higher order internal multiple image for presentation. In another example, a system includes a computing device and a generalized internal multiple imaging (GIMI) application executable in the computing device. The GIMI application includes logic that generates a higher order internal multiple image using a background Green's function and logic that renders the higher order internal multiple image for display on a display device. In another example, a non-transitory computer readable medium has a program executable by processing circuitry that generates a higher order internal multiple image using a background Green's function and renders the higher order internal multiple image for display on a display device.

Performing multiple biopsies during a procedure known as colposcopy—visual inspection of the cervix—is more effective than performing only a single biopsy of the worst-appearing area for detecting cervical cancer precursors. This multiple biopsy approach

Neutron multiplicity measurements are widely used for nondestructive assay (NDA) of special nuclear material (SNM). When combined with isotopic composition information, neutron multiplicity analysis can be used to estimate the spontaneous fission rate and leakage multiplication of SNM, as well as the total mass of fissile material. This presentation provides an overview of this technique.
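The multiplicity analysis mentioned here starts from the factorial moments of a measured neutron multiplicity distribution; a minimal sketch of that first step is below (the detector efficiency, dead-time, and gate-fraction corrections that a real assay needs are omitted):

```python
def factorial_moments(p):
    """First three reduced factorial moments of a multiplicity distribution.

    p[n] is the probability of observing n neutrons. The moments
    nu1 = sum n*p_n, nu2 = sum C(n,2)*p_n, nu3 = sum C(n,3)*p_n
    feed the point-model equations relating measured singles, doubles,
    and triples rates to fission rate and leakage multiplication.
    """
    nu1 = sum(n * pn for n, pn in enumerate(p))
    nu2 = sum(n * (n - 1) / 2 * pn for n, pn in enumerate(p))
    nu3 = sum(n * (n - 1) * (n - 2) / 6 * pn for n, pn in enumerate(p))
    return nu1, nu2, nu3

# Toy distribution P(0..3) = [0.2, 0.5, 0.2, 0.1], invented for illustration.
moments = factorial_moments([0.2, 0.5, 0.2, 0.1])
print(tuple(round(m, 6) for m in moments))  # → (1.2, 0.5, 0.1)
```

Inverting these moments for multiplication requires the point-model assumptions and the isotopic information the abstract mentions; this sketch only shows where the measured distribution enters.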

We discuss a study to evaluate the extent to which free-response questions could be approximated by multiple-choice equivalents. Two carefully designed, research-based multiple-choice questions were transformed into a free-response format and administered on the final exam in a calculus-based introductory physics course. The original multiple-choice questions were administered on the final exam in another, similar introductory physics course. Findings suggest that carefully designed multiple-choice...

Leaf area is an important forest structural variable which serves as the primary means of mass and energy exchange within vegetated ecosystems. The objective of the current study was to determine whether leaf area index (LAI) could be estimated accurately and consistently in five intensively managed pine plantation forests using two multiple-return airborne LiDAR datasets. Field measurements of LAI were made using the LiCOR LAI2000 and LAI2200 instruments within 116 plots of varying size, established within a variety of stand conditions (i.e. stand age, nutrient regime and stem density) in North Carolina and Virginia in 2008 and 2013. A number of common LiDAR return height and intensity distribution metrics (e.g. average return height) were calculated for each plot extent, in addition to ten indices (with two additional variants) from the surrounding literature that have been used to estimate LAI and fractional cover, computed from return heights and intensity. Each of the indices was assessed for correlation with the others and used as an independent variable in linear regression analysis with field LAI as the dependent variable. All LiDAR-derived metrics were also entered into a forward stepwise linear regression. The results for the indices varied from an R2 of 0.33 (S.E. 0.87) to 0.89 (S.E. 0.36). The indices calculated using ratios of all returns produced the strongest correlations, such as the Above and Below Ratio Index (ABRI) and Laser Penetration Index 1 (LPI1). The regression model produced from a combination of three metrics did not improve correlations greatly (R2 0.90; S.E. 0.35). The results indicate that LAI can be predicted accurately over a range of intensively managed pine plantation forest environments when using different LiDAR sensor designs. Indices incorporating counts of specific return numbers (e.g. first returns) or return intensity correlated poorly with field measurements. There were
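As an illustration of the ratio-type indices mentioned (ABRI, LPI1), the snippet below computes a simple penetration index and a Beer-Lambert-style LAI estimate. Exact index definitions vary across the literature, and the coefficient k here is a made-up placeholder, not a value from this study:

```python
import math

def laser_penetration_index(ground_returns, total_returns):
    """Fraction of LiDAR returns reaching the ground: one common form of a
    penetration index. Definitions of LPI1 and ABRI differ in detail across
    the literature; this version is illustrative only."""
    return ground_returns / total_returns

def estimate_lai(lpi, k=2.0):
    """Beer-Lambert-style inversion, LAI ≈ -ln(LPI) / k. In practice the
    coefficient comes from regression against field-measured LAI; k=2.0
    is a placeholder, not a fitted value."""
    return -math.log(lpi) / k

lpi = laser_penetration_index(ground_returns=1200, total_returns=8000)
print(round(lpi, 3), round(estimate_lai(lpi), 2))  # → 0.15 0.95
```

A denser canopy lets fewer pulses through to the ground, so LPI falls and the estimated LAI rises, which is why ratio indices of this form track field LAI well.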

The Advanced Nuclear Technology Group of the Los Alamos National Laboratory is now using intelligent data-acquisition and analysis instrumentation for determining the multiplication of nuclear material. Earlier instrumentation, such as the large NIM-crate systems, depended on house power and required additional computation to determine multiplication or to estimate error. The portable, battery-powered multiplication measurement unit, with advanced computational power, acquires data, calculates multiplication, and completes error analysis automatically. Thus, the multiplication is determined easily, and an available error estimate enables the user to judge the significance of the results.

Ligation of two oligonucleotide probes hybridized adjacently to a DNA template has been widely used for detection of genome alterations. The multiplex ligation-dependent probe amplification (MLPA) technique allows simultaneous screening of multiple target sequences in a single reaction by using p...

Design Methodology is part of our practice and our knowledge about designing, and it has been strongly supported by the establishment and work of a design research community. The aim of this article is to broaden the reader's view of designing and Design Methodology. This is done by sketching… the development of Design Methodology through time and sketching some important approaches and methods. The development is mainly driven by changing industrial conditions, by the growth of IT support for designing, but also by the growth of insight into designing created by design researchers… ABSTRACT Design Methodology shall be seen as our understanding of how to design; it is an early (emerging in the late 60s) and original articulation of teachable and learnable methodics. The insight is based upon two sources: the nature of the designed artefacts and the nature of human designing. Today…

in Fortran 2003. The design unleashes the power of the associated class relationships for modeling complicated data structures yet avoids the ambiguities that plague some multiple inheritance scenarios.

Novices to the design process often struggle at first to understand the various stages of design. Learning to design is a process not easily mastered, and therefore requires multiple levels of exposure to the design process. It is helpful if teachers are able to implement various entry-level design assignments such as reverse-engineering…

Designing Material Materialising Design documents five projects developed at the Centre for Information Technology and Architecture (CITA) at the Royal Danish Academy of Fine Arts, School of Architecture. These projects explore the idea that newly designed materials might require new design methods… Focusing on fibre-reinforced composites, this book sustains an exploration into the design and making of elastically tailored architectural structures that rely on the use of computational design to predict sensitive interdependencies between geometry and behaviour. Developing novel concepts…

The commonly used formula of KNO scaling, ⟨n⟩P_n = Ψ(n/⟨n⟩), is shown to contradict mathematically the condition Σ_n P_n = 1 for discrete (multiplicity) distributions. The effect is essential even at ISR energies. A consistent generalization of the concept of similarity for multiplicity distributions is obtained. The multiplicity distributions of negative particles in pp and also e+e- inelastic interactions are similar over the whole studied energy range. Collider data are discussed. 14 refs.; 8 figs
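The inconsistency can be seen by summing the scaling form over n. The sketch below uses standard KNO notation, with ⟨n⟩ the mean multiplicity; it is an illustrative derivation step, not taken from the record itself:

```latex
\langle n\rangle\,P_n = \Psi\!\left(\frac{n}{\langle n\rangle}\right)
\quad\Longrightarrow\quad
\sum_n P_n = \frac{1}{\langle n\rangle}\sum_n \Psi\!\left(\frac{n}{\langle n\rangle}\right)
\;\approx\; \int_0^\infty \Psi(z)\,\mathrm{d}z = 1 .
```

Replacing the discrete sum by the integral is exact only in the limit ⟨n⟩ → ∞; at finite ⟨n⟩ the sum deviates from 1, so exact scaling and exact normalization cannot hold simultaneously for a discrete distribution, which is the contradiction the abstract refers to.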

Purpose – Taking a case study approach, we synchronised two courses, focusing the students on working with learning and applying tools in one course and acting on the understandings gained to produce artefacts in the other. Design/methodology/approach – Working with real users throughout all stages… focused evaluation methods using tangible representations; identified the relationship from these findings for subsequent re-design rationales; and discussed and critiqued each other's work using multiple feedback, teach-back and discursive strategies. Findings – We found that in-depth coverage… of material, working with real data and users at all stages of assessment, and producing visualisations from evaluations naturally forced student motivation to act and redesign better solutions. We noted improved attendance, and students reported high engagement and content appreciation. Research limitations…

Full Text Available For many years the idea of intelligence as a single problem-solving ability (factor g), considered the best predictor of students' academic achievement, has prevailed. Recently, researchers have begun to take an alternative view of the problem, understanding it as a multidimensional construct. The multiple intelligences (MI) theory proposed by Gardner (1983) takes into account seven talents or skills that individuals appear to have in a certain amount. These latent bio-psychological potentials are stable and are maintained throughout life. The theory of MI proposes that every person learns in relation to them. MI theory has many educational applications; however, very few efforts have been made to verify such statements. The main goal of this study is to analyze the differential individual MI profile of high school and university students, studying the relation between MI, academic achievement and self-efficacy competence in course performance. Two studies were carried out: the first with high school students (N=500) and the second with military students (N=362). An inventory was designed based on Armstrong's proposals to assess MI. The main results point out that there is a correspondence between academic attainment, self-interest and self-perception of competence in the different courses students take. MI are good predictors of academic achievement in specific areas, but they do not provide a better estimation than traditional assessment instruments. Students who had failed in school were those with more spatial and corporal abilities, usually relegated by traditional instruction. High achievers were those with more logical and intrapersonal skills. Different relations were found for military students; for these latter students, MI theory was not a valuable predictor of successful academic attainment.

Photon Multiplicity Detector (PMD) measures the multiplicity and spatial distribution of photons in the forward region of ALICE on a event-by-event basis. PMD is a pre-shower detector having fine granularity and full azimuthal coverage in the pseudo-rapidity region 2.3 < η < 3.9.

We prove a first principle of preservation of multiplicity in difference geometry, paving the way for the development of a more general intersection theory. In particular, the fibres of a σ-finite morphism between difference curves are all of the same size, when counted with correct multiplicities.