Background Accurate analysis of CT brain scans is vital for the diagnosis and treatment of traumatic brain injuries (TBI). Automatic processing of these CT brain scans could speed up the decision-making process, lower the cost of healthcare, and reduce the chance of human error. In this paper, we focus on automatic processing of CT brain images to segment and identify the ventricular systems. The segmentation of ventricles provides quantitative measures of changes in the ventricles of the brain, which form vital diagnostic information. Methods First, all CT slices are aligned by detecting the ideal midlines in all images. The initial estimate of the ideal midline of the brain is found based on skull symmetry and is then refined using detected anatomical features. A two-step method is then used for ventricle segmentation. First, a low-level segmentation is applied to each pixel of the CT images. For this step, both Iterated Conditional Mode (ICM) and Maximum A Posteriori Spatial Probability (MASP) are evaluated and compared. The second step applies a template matching algorithm to identify objects in the initial low-level segmentation as ventricles. Experiments for ventricle segmentation are conducted using a relatively large CT dataset containing mild and severe TBI cases. Results Experiments show that the rate of acceptable ideal midline detection is over 95%. Two measurements are defined to evaluate ventricle recognition results: the first is a sensitivity-like measure and the second is a false-positive-like measure. For the first measurement, the rate is 100%, indicating that all ventricles are identified in all slices. The false-positive-like measurement is 8.59%. We also point out the similarities and differences between the ICM and MASP algorithms through both mathematical relationships and segmentation results on CT images. Conclusion The experiments show the reliability of the proposed algorithms. The novelty of the proposed

Many important problems in computer vision can be characterized as template-matching problems on edge images. Some examples are circle detection and line detection. Two techniques for template matching are the Hough transform and correlation. There are two algorithms for correlation: a shift-and-add-based technique and a Fourier-transform-based technique. The most efficient algorithm of these three varies depending on the size of the template and the structure of the image. On different parallel architectures, the choice of algorithm for a specific problem is different. This paper describes two parallel architectures, the WARP and the Butterfly, and describes why and how the criterion for making the choice of algorithms differs between the two machines.
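The trade-off between the two correlation algorithms can be illustrated with a small sketch (a hypothetical NumPy implementation, not the paper's WARP/Butterfly code): the shift-and-add form costs O(N²M²) for an N×N image and M×M template, while the Fourier form costs O(N² log N) regardless of template size, so the crossover depends on how large the template is.

```python
import numpy as np

def xcorr_direct(image, template):
    """Shift-and-add cross-correlation: slide the template over the image
    and sum elementwise products at every valid offset."""
    ih, iw = image.shape
    th, tw = template.shape
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + th, x:x + tw] * template)
    return out

def xcorr_fft(image, template):
    """Fourier-based cross-correlation via the convolution theorem:
    correlation is IFFT(FFT(image) * conj(FFT(template)))."""
    ih, iw = image.shape
    th, tw = template.shape
    F = np.fft.rfft2(image)
    T = np.fft.rfft2(template, s=image.shape)  # zero-pad template to image size
    full = np.fft.irfft2(F * np.conj(T), s=image.shape)
    # keep only offsets where the template fits without circular wraparound
    return full[:ih - th + 1, :iw - tw + 1]
```

Both functions return the same valid-region correlation surface; only their cost profiles differ.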

Template matching is successfully used in machine recognition of isolated spoken words. In these systems a word is broken into frames (20 millisecond time slices) and the spectral characteristics of each frame are found. Thus, each word is represented as a 2-dimensional (2-D) function of spectral characteristic and frame number. An unknown word is recognized by matching its 2-D representation to previously stored example words, or templates, also in this 2-D form. A new model for this matching step will be introduced. The 2-D representations of the template and unknown are used to determine the shape of a volume of viscous fluid. This volume is broken up into many small elements. The unknown is changed into the template by allowing flows between the element boundaries. Finally, the match between the template and unknown is determined by calculating a weighted squared sum of the flow values. The model also allows the relative flow resistance between the element boundaries to be changed. This is useful for characterizing the important features of a given template. The flow resistances are changed according to the gradient of a simple performance function. This performance function is evaluated using a set of training samples provided by the user. The model is applied to isolated word and single character recognition tasks. Results indicate the applications where this model works best.

In this paper, we describe a new special-purpose VLSI architecture for template matching, based on a technique known as moment preserving pattern matching (MPPM). This technique first converts the given gray-scale image and template into binary form using the moment-preserving quantization method and then uses a pairing function to compute the similarity measure. The technique yields accurate results comparable to other approaches but involves simpler computations. The proposed architecture is systolic in nature and achieves a high degree of parallelism and pipelining. It is shown that the proposed architecture is much simpler, achieves higher speed, has lower hardware complexity, and uses less memory than other special-purpose architectures for template matching.

We describe an approach to detect improvised explosive devices (IEDs) by using a template matching procedure. This approach relies on the signature due to backstreaming γ photons from various targets. In this work we have simulated cylindrical targets of aluminum, iron, copper, water and ammonium nitrate (nitrogen-rich fertilizer). We simulate 3.5 MeV source photons distributed on a plane inside a shielded area using the Monte Carlo N-Particle (MCNP) code, version 5 (V5). The 3.5 MeV source gamma rays yield 511 keV peaks due to pair production and scattered gamma rays. In this work, we simulate capture of those photons that backstream, after impinging on the target element, toward a NaI detector. The captured backstreamed photons are expected to produce a unique spectrum that will become part of a simple signal-processing recognition system based on the template matching method. Different elements were simulated using different sets of random numbers in the Monte Carlo simulation. To date, the sum of absolute differences (SAD) method has been used to match the template. In the examples investigated, template matching was found to detect all elements correctly.
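As a rough illustration of the recognition step (a minimal sketch; the spectra, channel counts, and labels below are invented, not the simulated MCNP data), SAD matching scores each candidate template by the sum of absolute channel-wise differences and picks the minimum:

```python
import numpy as np

def sad(spectrum, template):
    """Sum of absolute differences between a measured spectrum and a template,
    computed channel by channel."""
    return float(np.sum(np.abs(np.asarray(spectrum) - np.asarray(template))))

def classify(spectrum, templates):
    """Return the label of the template with the smallest SAD score."""
    return min(templates, key=lambda label: sad(spectrum, templates[label]))
```

A measured spectrum is assigned to whichever element's template it differs from least in absolute terms.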

This paper explores the application of image processing techniques to recyclable waste paper sorting. In recycling, waste papers are segregated into various grades because they are subjected to different recycling processes. Highly sorted paper streams facilitate high-quality end products and save processing chemicals and energy. From 1932 to 2009, different mechanical and optical paper sorting methods were developed to meet the demand for paper sorting. Still, in many countries including Malaysia, waste papers are sorted into different grades manually. Due to inadequate throughput and some major drawbacks of mechanical paper sorting systems, the popularity of optical paper sorting systems has increased. Automated paper sorting systems offer significant advantages over human inspection in terms of fatigue, throughput, speed, and accuracy. This research attempts to develop a smart vision sensing system that is able to separate the different grades of paper using template matching. For constructing the template database, the RGB components of the pixel values are used to construct an RGBString for each template image. Finally, the paper object grade is identified based on the maximum occurrence of a specific template image in the search image. The classification outcomes from the experiment for White Paper, Old Newsprint Paper and Old Corrugated Cardboard are 96%, 92% and 96%, respectively. The remarkable achievement of the method is the accurate identification and dynamic sorting of all grades of papers using simple image processing techniques.

Template matching algorithms represent a viable tool to locate particles in optical images. A crucial factor in the performance of these methods is the choice of the similarity measure. Recently, it was shown in [Gao and Helgeson, Opt. Express 22 (2014)] that the correlation coefficient (CC) leads to good results. Here, we introduce the mutual information (MI) as a nonlinear similarity measure and compare the performance of the MI and the CC for different noise scenarios. It turns out that the mutual information leads to superior results in the case of signal-dependent noise. We propose a novel approach to estimate the velocity of particles which is applicable in imaging scenarios where the particles appear elongated due to their movement. By designing a bank of anisotropic templates supposed to fit the elongation of the particles we are able to reliably estimate their velocity and direction of motion out of a single image. PMID:27137240
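The two similarity measures being compared can be sketched as follows (a simplified illustration; the bin count for the MI histogram is an arbitrary choice here, and practical implementations often use more careful MI estimators):

```python
import numpy as np

def corr_coeff(a, b):
    """Pearson correlation coefficient between two equally sized patches."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def mutual_info(a, b, bins=8):
    """Histogram-based mutual information estimate (in nats) between patches.
    MI is a nonlinear measure: it captures any statistical dependency,
    not just linear correlation."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over rows
    py = pxy.sum(axis=0, keepdims=True)   # marginal over columns
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

The CC is cheap and optimal for additive Gaussian noise, while the MI can remain informative when the noise depends on the signal and the intensity relationship is no longer linear.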

We present a novel framework for analyzing univariate time series data. At the heart of the approach is a versatile algorithm for measuring the similarity of two segments of time series called geometric template matching (GeTeM). First, we use GeTeM to compute a similarity measure for clustering and nearest-neighbor classification. Next, we present a semi-supervised learning algorithm that uses the similarity measure with hierarchical clustering in order to improve classification performance when unlabeled training data are available. Finally, we present a boosting framework called TDEBOOST, which uses an ensemble of GeTeM classifiers. TDEBOOST augments the traditional boosting approach with an additional step in which the features used as inputs to the classifier are adapted at each step to improve the training error. We empirically evaluate the proposed approaches on several datasets, such as accelerometer data collected from wearable sensors and ECG data. PMID:22641699

Successful scientific applications of large-scale molecular dynamics often rely on automated methods for identifying the local crystalline structure of condensed phases. Many existing methods for structural identification, such as common neighbour analysis, rely on interatomic distances (or thresholds thereof) to classify atomic structure. As a consequence they are sensitive to strain and thermal displacements, and preprocessing such as quenching or temporal averaging of the atomic positions is necessary to provide reliable identifications. We propose a new method, polyhedral template matching (PTM), which classifies structures according to the topology of the local atomic environment, without any ambiguity in the classification, and with greater reliability than e.g. common neighbour analysis in the presence of thermal fluctuations. We demonstrate that the method can reliably be used to identify structures even in simulations near the melting point, and that it can identify the most common ordered alloy structures as well. In addition, the method makes it easy to identify the local lattice orientation in polycrystalline samples, and to calculate the local strain tensor. An implementation is made available under a Free and Open Source Software license.

In this paper we propose a novel template matching algorithm for visual inspection of bare printed circuit boards (PCBs). In conventional template matching for PCB inspection, the matching score and its relevant offsets are acquired by calculating the maximum value among the convolutions of the template image and the camera image. While this method is fast, the robustness and accuracy of matching are not guaranteed due to the gap between design and implementation resulting from defects and process variations. To resolve this problem, we suggest a new method which uses run-length encoding (RLE). For the template image to be matched, we accumulate foreground and background data, and RLE data for each row and column in the template image. Using these data, we can find the x and y offsets which minimize the optimization function. The efficiency and robustness of the proposed algorithm are verified through a series of experiments. Comparing the proposed algorithm with the conventional approach shows that it is not only faster but also more robust and reliable in its matching results.
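A minimal sketch of the run-length encoding step (illustrative only; the paper's accumulation of foreground/background data and the offset-optimization function are more involved than this):

```python
def rle_row(bits):
    """Run-length encode a binary row as (value, run_length) pairs,
    e.g. 1 1 0 0 0 1 -> (1,2), (0,3), (1,1)."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [tuple(r) for r in runs]
```

Comparing compact per-row and per-column run signatures instead of raw pixels makes each candidate offset cheaper to evaluate and less sensitive to isolated pixel defects.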

A new approach to signal prediction and prognostic assessment of spacecraft health resolves an inherent difficulty in fusing sensor data with simulated data. This technique builds upon previous work that demonstrated the importance of physics-based transient models to accurate prediction of signal dynamics and system performance. While models can greatly improve predictive accuracy, they are difficult to apply in general because of variations in model type, accuracy, or intended purpose. However, virtually any flight project will have at least some modeling capability at its disposal, whether a full-blown simulation, partial physics models, dynamic look-up tables, a brassboard analogue system, or simple hand-driven calculation by a team of experts. Many models can be used to develop a predict, an estimate of the next day's or next cycle's behavior, which is typically used for planning purposes. The fidelity of a predict varies from one project to another, depending on the complexity of the simulation (i.e., linearized or full differential equations) and the level of detail in anticipated system operation, but typically a predict cannot be adapted to changing conditions or adjusted spacecraft command execution. Applying a predict blindly, without adapting it to current conditions, produces mixed results at best, primarily due to mismatches between assumed execution of spacecraft activities and actual times of execution. This results in the predict becoming useless during periods of complicated behavior, exactly when it would be most valuable. Each spacecraft operation tends to show up as a transient in the data, and if the transients are misaligned, using the predict can actually harm forecasting performance. To address this problem, the approach here expresses the predict in terms of a baseline function superposed with one or more transient functions. These transients serve as signal templates, which can be relocated in time and space against

Objective Develop an improved method for auditing hospital cost and quality. Data Sources/Setting Medicare claims in general, gynecologic and urologic surgery, and orthopedics from Illinois, Texas, and New York between 2004 and 2006. Study Design A template of 300 representative patients was constructed and then used to match 300 patients at hospitals that had a minimum of 500 patients over a 3-year study period. Data Collection/Extraction Methods From each of 217 hospitals we chose 300 patients most resembling the template using multivariate matching. Principal Findings The matching algorithm found close matches on procedures and patient characteristics, far more balanced than measured covariates would be in a randomized clinical trial. These matched samples displayed little to no differences across hospitals in common patient characteristics yet found large and statistically significant hospital variation in mortality, complications, failure-to-rescue, readmissions, length of stay, ICU days, cost, and surgical procedure length. Similar patients at different hospitals had substantially different outcomes. Conclusion The template-matched sample can produce fair, directly standardized audits that evaluate hospitals on patients with similar characteristics, thereby making benchmarking more believable. Through examining matched samples of individual patients, administrators can better detect poor performance at their hospitals and better understand why these problems are occurring. PMID:24588413

We herein examined the ability of a template matching algorithm to recognize particles with diameters ranging from 1 to 20 µm in a microfluidic channel. The algorithm consisted of measurements of the distance between the templates and the images captured with a high-speed camera in order to search for the presence of the desired particle. The results obtained indicated that the effects of blur and diffraction rings observed around the particle are important phenomena that limit the recognition of a target. Owing to the effects of diffraction rings, the distance between a template and an image is not exclusively linked to the position of the focus plane; it is also linked to the size of the particle being searched for. By using a set of three templates captured at different Z focuses and an 800× magnification, the template matching algorithm has the ability to recognize beads ranging in diameter from 1.7 to 20 µm with a resolution between 0.3 and 1 µm.

Reliable quantitative analysis of white matter connectivity in the brain is an open problem in neuroimaging, with common solutions requiring tools for fiber tracking, tractography segmentation and estimation of intersubject correspondence. This paper proposes a novel, template matching approach to the problem. In the proposed method, a deformable fiber-bundle model is aligned directly with the subject tensor field, skipping the fiber tracking step. Furthermore, the use of a common template eliminates the need for tractography segmentation and defines intersubject shape correspondence. The method is validated using phantom DTI data and applications are presented, including automatic fiber-bundle reconstruction and tract-based morphometry. PMID:19457360

We propose a quasi-real-time method for discrimination of ventricular ectopic beats from both supraventricular and paced beats in the electrocardiogram (ECG). The heartbeat waveforms were evaluated within a fixed-length window around the fiducial points (100 ms before, 450 ms after). Our algorithm was designed to operate with minimal expert intervention: the operator is required only to initially select up to three 'normal' heartbeats (the most frequently seen supraventricular or paced complexes). These were named original QRS templates, and their copies were substituted continuously throughout the ECG analysis to capture slight variations in the heartbeat waveforms of the patient's sustained rhythm. The method is based on matching of the evaluated heartbeat with the QRS templates by a complex set of ECG descriptors, including maximal cross-correlation, area difference and frequency spectrum difference. Temporal features were added by analyzing the R-R intervals. The classification criteria were trained by statistical assessment of the ECG descriptors calculated for all heartbeats in the MIT-BIH Supraventricular Arrhythmia Database. The performance of the classifiers was tested on the independent MIT-BIH Arrhythmia Database. The achieved unbiased accuracy is represented by a sensitivity of 98.4% and a specificity of 98.86%, both being competitive with other published studies. The provided computationally efficient techniques enable fast post-recording analysis of lengthy Holter-monitor ECG recordings, and they can also serve as a quasi-real-time detection method embedded into surface ECG monitors. PMID:17805974

Spike sorting, i.e., the separation of the firing activity of different neurons from extracellular measurements, is a crucial but often error-prone step in the analysis of neuronal responses. Usually, three different problems have to be solved: the detection of spikes in the extracellular recordings, the estimation of the number of neurons and their prototypical (template) spike waveforms, and the assignment of individual spikes to those putative neurons. If the template spike waveforms are known, template matching can be used to solve the detection and classification problem. Here, we show that for the colored Gaussian noise case the optimal template matching is given by a form of linear filtering, which can be derived via linear discriminant analysis. This provides a Bayesian interpretation for the well-known matched filter output. Moreover, with this approach it is possible to compute a spike detection threshold analytically. The method can be implemented by a linear filter bank derived from the templates, and can be used for online spike sorting of multielectrode recordings. It may also be applicable to detection and classification problems of transient signals in general. Its application significantly decreases the error rate on two publicly available spike-sorting benchmark data sets in comparison to state-of-the-art template matching procedures. Finally, we explore the possibility to resolve overlapping spikes using the template matching outputs and show that they can be resolved with high accuracy. PMID:25652689
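The core idea, filtering the recording with a normalized template and thresholding the output, can be sketched for the simpler white-noise case as follows (a toy illustration with invented data; the paper's derivation handles colored Gaussian noise via linear discriminant analysis, which adds a whitening step omitted here):

```python
import numpy as np

def matched_filter(signal, template):
    """Correlate the signal with a zero-mean, unit-norm version of the
    template; peaks in the output indicate candidate spike locations."""
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    return np.correlate(signal, t, mode="same")

def detect(signal, template, threshold):
    """Return indices of local maxima of the filter output above threshold."""
    out = matched_filter(signal, template)
    return [i for i in range(1, len(out) - 1)
            if out[i] > threshold and out[i] >= out[i - 1] and out[i] >= out[i + 1]]
```

In a filter-bank implementation, one such filter per template runs over the recording and the per-template outputs are compared to assign each detected spike to a putative neuron.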

Digital watermarks have been proposed as a method for discouraging illicit copying and distribution of copyrighted material. This paper describes a method for the secure and robust copyright protection of digital images. We present an approach for embedding a digital watermark into an image using the Fourier transform. To this watermark is added a template in the Fourier transform domain to render the method robust against general linear transformations. We detail a new algorithm based on polar maps for the accurate and efficient recovery of the template in an image which has undergone a general affine transformation. We also present results which demonstrate the robustness of the method against some common image processing operations such as compression, rotation, scaling, and aspect ratio changes. PMID:18255481

This paper proposes a new approach to localizing buildings in forward-looking infrared (FLIR) images. The proposed approach can localize not only large buildings but also small ones, and it is robust to FLIR images degraded by clouds. This improvement is due to the following contributions: (1) the Histogram of Oriented Gradients approach is improved to match FLIR images with our templates; (2) a new kind of feature image is presented to reduce the difference between template and target; (3) we project 3D building models into images, with different colors on different sides, to distinguish the sides from one another; (4) we generate templates which contain all buildings in the visual field. As a result, the FLIR images can be matched with the large templates at a high success rate, and target buildings can then be localized. The experimental results show the superior performance of the proposed approach.

In this paper, a fast object tracking algorithm based on template matching and region-information fusion extraction is proposed. In the prediction framework, the data-association task is achieved through object-template and object-information extraction, and the object is then tracked accurately using the object motion information. We handle tracking drift by using a confidence estimation strategy. The experiments show that the proposed algorithm has robust performance.

The so-called Tethered Space Robot (TSR) is a novel active space debris removal system. To solve its problem of non-cooperative target recognition during short-distance rendezvous events, this paper presents a framework for a real-time visual servoing system using a non-calibrated monocular CMOS (Complementary Metal Oxide Semiconductor) camera. When a small template is used for matching with a large scene, mismatches often result, so a novel template matching algorithm is presented to solve the problem. First, the matching algorithm uses a hollow annulus structure based on the FAST (Features from Accelerated Segment Test) detector, which makes the method rotation-invariant; furthermore, the accumulated deviation can be decreased by the hollow structure. The matching function is composed of grey and gradient differences between the template and the object image, which helps reduce the effects of illumination changes and noise. Then, a dynamic template update strategy is designed to avoid tracking failures brought about by wrong matching or occlusion. Finally, the system incorporates a least-squares integrated predictor, realizing online tracking in complex circumstances. The results of ground experiments show that the proposed algorithm decreases the need for sophisticated computation and improves matching accuracy. PMID:26703609

Template matching for image sequences captured with a moving camera is very important for several applications such as robot vision, SLAM, ITS, and video surveillance systems. However, it is difficult to realize accurate template matching using only visual feature information such as HSV histograms, edge histograms, HOG histograms, and SIFT features, because matching is affected by several phenomena such as illumination change, viewpoint change, size change, and noise. In order to realize robust tracking, structure information such as the relative position of each part of the object should be considered. In this paper, we propose a method that considers both visual feature information and structure information. Experiments show that the proposed method realizes robust tracking and determines the relationships between object parts in the scene and those in the template.

Peaky template matching (PTM) is a special case of a general algorithm known as multinomial pattern matching, originally developed for automatic target recognition of synthetic aperture radar data. The algorithm is a model-based approach that first quantizes pixel values into Nq = 2 discrete values, yielding generative beta-Bernoulli models as class-conditional templates. Here, we consider the case of classification of target chips in AWGN and develop approximations to image-to-template classification performance as a function of the noise power. We focus specifically on the case of a "uniform quantization" scheme, where a fixed number of the largest pixels are quantized high, as opposed to using a fixed threshold. This quantization method reduces sensitivity to the scaling of pixel intensities, and quantization in general reduces sensitivity to various nuisance parameters that are difficult to account for a priori. Our performance expressions are verified using forward-looking infrared imagery from the Army Research Laboratory Comanche dataset.
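The "uniform quantization" scheme described, quantizing a fixed number of the largest pixels high rather than thresholding at a fixed intensity, can be sketched as follows (a hypothetical NumPy helper, not the authors' code):

```python
import numpy as np

def quantize_topk(image, k):
    """Binary-quantize an image: the k largest pixels become 1, the rest 0.
    Only the rank order of intensities matters, so the result is invariant
    to any positive rescaling of the pixel values."""
    flat = image.ravel()
    idx = np.argpartition(flat, -k)[-k:]   # indices of the k largest values
    out = np.zeros(flat.shape, dtype=int)
    out[idx] = 1
    return out.reshape(image.shape)
```

Because only pixel ranks matter, multiplying the chip by any positive gain leaves the quantized result unchanged, which is exactly the scaling insensitivity the abstract mentions.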

The transients caused by the localized fault are important measurement information for bearing fault diagnosis. Thus it is crucial to extract the transients from the bearing vibration or acoustic signals that are always corrupted by a large amount of background noise. In this paper, an iterative transient feature extraction approach is proposed based on time-frequency (TF) domain sparse representation. The approach is realized by presenting a new method, called local TF template matching. In this method, the TF atoms are constructed based on the TF distribution (TFD) of the Morlet wavelet bases and local TF templates are formulated from the TF atoms for the matching process. The instantaneous frequency (IF) ridge calculated from the TFD of an analyzed signal provides the frequency parameter values for the TF atoms as well as an effective template matching path on the TF plane. In each iteration, local TF templates are employed to do correlation with the TFD of the analyzed signal along the IF ridge tube for identifying the optimum parameters of the transient wavelet model. With this iterative procedure, transients can be extracted in the TF domain from measured signals one by one. The final signal can be synthesized by combining the extracted TF atoms and the phase of the raw signal. The local TF template matching builds an effective TF matching-based sparse representation approach with the merit of satisfying the native pulse waveform structure of transients. The effectiveness of the proposed method is verified by practical defective bearing signals. Comparison results also show that the proposed method is superior to traditional methods in transient feature extraction.

A fast and accurate TV logo detection method is presented based on real-time image filtering, noise elimination, and recognition of image features including edge and gray-level information. It is important to accurately extract the optical template from the sample video stream using the time-averaging method; different templates are then used to match different logos in separate video streams of different resolutions based on the topological features of the logos. Twelve video streams with different logos are used to verify the proposed method, and the experimental results demonstrate that the achieved accuracy can be up to 99%.
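The time-averaging extraction step can be sketched as follows (a simplified illustration with invented frame data: a static overlay survives the temporal mean while moving scene content blurs out, and low temporal variance marks the logo region):

```python
import numpy as np

def extract_static_template(frames):
    """Mean image over time: a static overlay (the logo) stays sharp
    while moving scene content averages toward a blur."""
    return np.mean(np.stack(frames, axis=0), axis=0)

def static_mask(frames, tol):
    """Pixels whose temporal variance falls below tol are likely static,
    i.e. candidates for the logo region."""
    return np.var(np.stack(frames, axis=0), axis=0) < tol
```

The mean image restricted to the low-variance mask then serves as the optical template that is matched against incoming streams.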

The process of vehicle detection using satellite images is complicated and cumbersome. Although high-definition satellite images are now available, a vehicle appears as just a small point that is difficult to separate from the background, and the image detail is insufficient to identify small objects. In this research, techniques for vehicle detection were applied to image data from Pléiades, a satellite with a high resolution of 0.40 m. The objective of this research is to study and develop a tool for extracting data from satellite images; the extracted data are then organized and stored as geospatial information. The approach uses pattern matching (template matching), developed with the Matlab program, and the sum of absolute differences (SAD) method combined with a neural network technique to evaluate matches between template images of cars and the car images examined in the satellite images. The result obtained from comparison with the template data shows that the data extraction accuracy is greater than 90%, and the extracted data can be imported into a geospatial information database. Moreover, the data can be displayed in geospatial information software, and it can also be searched by quantity condition and satellite image position.

Dead wood is an important habitat characteristic in forests. However, dead wood lying on the ground below a canopy is difficult to detect from remotely sensed data. Data from airborne laser scanning include measurements of surfaces below the canopy, thus offering the potential to model objects on the ground. This paper describes a new line template matching algorithm for detecting lines along the ground. The line template matching is applied directly to the laser point cloud and results in a raster showing the support for the line in each raster cell. Line elements are vectorized based on the raster to represent lying tree stems. The results have been validated against field-measured lying tree stems. The number of detected lines was 845, of which 268 could be automatically linked to the 651 field-measured stems. The line template matching produced a raster which visually showed linear elements in areas where lying tree stems were present, but the result is difficult to compare with the field measurements due to positioning errors. The study area contained big piles of storm-felled trees in some places, which made it an unusually complex test site. Longer line structures such as ditches and roads also resulted in detected lines, and further analysis is needed to avoid this, for example by specifically detecting longer lines and removing them.

We propose a shape-based, hierarchical part-template matching approach to simultaneous human detection and segmentation combining local part-based and global shape-template-based schemes. The approach relies on the key idea of matching a part-template tree to images hierarchically to detect humans and estimate their poses. For learning a generic human detector, a pose-adaptive feature computation scheme is developed based on a tree matching approach. Instead of traditional concatenation-style image location-based feature encoding, we extract features adaptively in the context of human poses and train a kernel-SVM classifier to separate human/nonhuman patterns. Specifically, the features are collected in the local context of poses by tracing around the estimated shape boundaries. We also introduce an approach to multiple occluded human detection and segmentation based on an iterative occlusion compensation scheme. The output of our learned generic human detector can be used as an initial set of human hypotheses for the iterative optimization. We evaluate our approaches on three public pedestrian data sets (INRIA, MIT-CBCL, and USC-B) and two crowded sequences from Caviar Benchmark and Munich Airport data sets. PMID:20224118

This study investigated the utility of multistation waveform cross correlation to help discern induced seismicity. Template matching was applied to all Ohio earthquakes cataloged since the arrival of nearby EarthScope TA stations in late 2010. Earthquakes that were within 5 km of fluid injection activities in regions that lacked previously documented seismicity were found to be swarmy. Moreover, the larger number of events produced by template matching for these swarmy sequences made it easier to establish more detailed temporal and spatial relationships between the seismicity and fluid injection activities, which is typically required for an earthquake to be considered induced. Study results detected three previously documented induced sequences (Youngstown, Poland Township, and Harrison County) and provided evidence that suggests two additional cases of induced seismicity (Belmont/Guernsey County and Washington County). Evidence for these cases suggested that unusual swarm-like behaviors in regions that lack previously documented seismicity can be used to help distinguish induced seismicity, complementing the traditional identification of an anthropogenic source spatially and temporally correlated with the seismicity. In support of this finding, we identified 17 additional cataloged earthquakes in regions of previously documented seismicity and away from disposal wells or hydraulic fracturing that returned very few template matches. The lack of swarminess helps to indicate that these events are most likely naturally occurring.
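
The core of such a detector is a sliding normalized cross-correlation between a template waveform and the continuous trace; this sketch, with made-up synthetic data and an illustrative threshold, shows the principle.

```python
import numpy as np

def template_scan(trace, template, threshold=0.9):
    """Slide a waveform template along a continuous trace and return the
    sample offsets where the normalized correlation coefficient exceeds
    the detection threshold."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    hits = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        if w.std() == 0:
            continue
        cc = float(np.dot(t, (w - w.mean()) / w.std()) / n)
        if cc >= threshold:
            hits.append(i)
    return hits

# Synthetic "continuous recording": noise with two buried copies of the event.
rng = np.random.default_rng(0)
wavelet = np.sin(2 * np.pi * np.arange(50) / 10)
trace = 0.05 * rng.standard_normal(600)
trace[100:150] += 2 * wavelet
trace[300:350] += 2 * wavelet
hits = template_scan(trace, wavelet)
```

Because the correlation coefficient is amplitude-normalized, the buried events are recovered even though they are scaled copies of the template; a multistation detector stacks such traces across the network before thresholding.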

We estimate the number of templates, computational power, and storage required for a one-step matched filtering search for gravitational waves from inspiraling compact binaries. Our estimates for the one-step search strategy should serve as benchmarks for the evaluation of more sophisticated strategies such as hierarchical searches. We use a discrete family of two-parameter waveform templates based on the second post-Newtonian approximation for binaries composed of nonspinning compact bodies in circular orbits. We present estimates for all of the large- and mid-scale interferometers now under construction: LIGO (three configurations), VIRGO, GEO600, and TAMA. To search for binaries with components more massive than m_min = 0.2 M_solar while losing no more than 10% of events due to coarseness of template spacing, the initial LIGO interferometers will require about 1.0×10^11 flops (floating point operations per second) for data analysis to keep up with data acquisition. This is several times higher than estimated in previous work by Owen, in part because of the improved family of templates and in part because we use more realistic (higher) sampling rates. Enhanced LIGO, GEO600, and TAMA will require computational power similar to initial LIGO. Advanced LIGO will require 7.8×10^11 flops, and VIRGO will require 4.8×10^12 flops to take full advantage of its broad target noise spectrum. If the templates are stored rather than generated as needed, storage requirements range from 1.5×10^11 real numbers for TAMA to 6.2×10^14 for VIRGO. The computational power required scales roughly as m_min^(-8/3) and the storage as m_min^(-13/3). Since these scalings are perturbed by the curvature of the parameter space at second post-Newtonian order, we also provide estimates for a search with m_min = 1 M_solar. Finally, we sketch and discuss an algorithm for placing the templates in the parameter space.

Best-so-far ABC is a modified version of the artificial bee colony (ABC) algorithm used for optimization tasks. This algorithm is one of the swarm intelligence (SI) algorithms proposed in recent literature, in which the results demonstrated that the best-so-far ABC can produce higher quality solutions with faster convergence than either the ordinary ABC or the current state-of-the-art ABC-based algorithm. In this work, we aim to apply the best-so-far ABC-based approach for object detection based on template matching by using the difference between the RGB level histograms corresponding to the target object and the template object as the objective function. Results confirm that the proposed method was successful in both detecting objects and optimizing the time used to reach the solution. PMID:24812556
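
The objective function described above, the difference between RGB histograms of target and template, can be sketched as follows; the best-so-far ABC optimizer that minimizes it over candidate positions is not reproduced, and the bin count is an assumption.

```python
import numpy as np

def histogram_objective(candidate, template, bins=16):
    """Objective for the ABC search: summed absolute difference between
    the per-channel RGB histograms of a candidate image patch and the
    template object (0 = identical colour distributions)."""
    diff = 0
    for ch in range(3):
        h_c, _ = np.histogram(candidate[..., ch], bins=bins, range=(0, 256))
        h_t, _ = np.histogram(template[..., ch], bins=bins, range=(0, 256))
        diff += int(np.abs(h_c - h_t).sum())
    return diff

rng = np.random.default_rng(0)
template = rng.integers(0, 256, (20, 20, 3))
same = histogram_objective(template, template)            # identical patch
other = histogram_objective((template + 64) % 256, template)
```

A histogram-based objective is invariant to pixel arrangement within the patch, which makes it cheap to evaluate inside a population-based optimizer like ABC, at the cost of discarding spatial structure.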

Ground-penetrating radar (GPR) is a mature geophysical technique that is used to map utility pipelines buried within 1.5 m of the ground surface in the urban landscape. In this work, the template-matching algorithm has, for the first time, been applied to the detection and localization of pipe signatures in two perpendicular antenna polarizations. The processing of a GPR radargram is based on four main steps. The first step consists of defining a template, usually from finite-difference time-domain simulations, made of the nearby area of the hyperbola apex associated with the mean-size object to be detected in the soil, whose mean permittivity has been previously estimated experimentally. In the second step, the raw radargram is pre-processed to correct variations due to antenna coupling, then the template matching algorithm is used to detect and localize individual hyperbola signatures in an environment containing unwanted reflections, noise, and overlapping signatures. The distance between the shifted template and a local zone in the radargram, based on the L1 norm, allows us to obtain a map of distances. A user-defined threshold allows us to select a reduced number of zones having a high similarity measure. In the third step, minimum or maximum discrete amplitudes belonging to a selected hyperbola curve are semi-automatically extracted in each zone. In the fourth step, the discrete hyperbola data (i, j) are fitted by a parametric hyperbola model using a non-linear least squares criterion. The algorithm was implemented and evaluated on numerical radargrams, and afterwards on experimental radargrams.

The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is an innovative reflecting Schmidt telescope, promising a very high spectrum acquisition rate of several tens of thousands of spectra per night. Using a parallel controllable fiber positioning technique, LAMOST can reconfigure its fibers accurately according to the positions of the objects within minutes and then finely adjust them. As a key problem, high-precision detection of the positions of LAMOST fiber positioning units has received much attention, and several detection schemes have been proposed. Among these, the active detection method, which determines the final position of a fiber end by lighting the fiber, has been the most widely researched, but it cannot be applied during LAMOST real-time observation because it requires projecting light into the fiber. A novel detection idea exploiting template matching is presented in this paper. Since the final position of a specific fiber end can be easily inferred from the revolving angles of its central revolving axle and bias revolving axle in the double-revolving style, the key point of the problem is converted to the accurate determination of these revolving angles. Template matching is used to acquire the matching parameters from real-time collected imagery and thus determine the corresponding revolving angles of the central revolving axle and the bias revolving axle. Experimental results obtained with data acquired from the LAMOST site verify the feasibility and effectiveness of this novel method.

In computer vision applications, image matching performed on quality-degraded imagery is difficult due to image content distortion and noise effects. State-of-the-art keypoint-based matchers, such as SURF and SIFT, work very well on clean imagery. However, performance can degrade significantly in the presence of high noise and clutter levels. Noise and clutter cause the formation of false features which can degrade recognition performance. To address this problem, previously we developed an extension to the classical amplitude and phase correlation forms, which provides improved robustness and tolerance to image geometric misalignments and noise. This extension, called Alpha-Rooted Phase Correlation (ARPC), combines Fourier domain-based alpha-rooting enhancement with classical phase correlation. ARPC provides tunable parameters to control the alpha-rooting enhancement. These parameter values can be optimized to trade off between high narrow correlation peaks and more robust wider, but smaller, peaks. Previously, we applied ARPC in the radon transform domain for logo image recognition in the presence of rotational image misalignments. In this paper, we extend ARPC to incorporate quaternion Fourier transforms, thereby creating Alpha-Rooted Quaternion Phase Correlation (ARQPC). We apply ARQPC to the logo image recognition problem. We use ARQPC to perform multiple-reference logo template matching by representing multiple same-class reference templates as quaternion-valued images. We generate recognition performance results on publicly-available logo imagery, and compare recognition results to results generated from standard approaches. We show that small deviations in reference templates of same-class logos can lead to improved recognition performance using the joint matching inherent in ARQPC.
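
A minimal sketch of the classical (non-quaternion) alpha-rooted phase correlation idea, assuming the common cross-power-spectrum formulation; the quaternion extension (ARQPC) and the paper's exact parameterization are not reproduced. Alpha = 0 recovers phase-only correlation, alpha = 1 plain cross-correlation.

```python
import numpy as np

def arpc_shift(a, b, alpha=0.5):
    """Alpha-rooted phase correlation: keep the cross-power spectrum's
    phase but raise its magnitude to the power alpha, trading the sharp
    peak of pure phase correlation for noise robustness. Returns the
    (row, col) shift of b relative to a."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(A) * B
    mag = np.abs(cross)
    mag[mag == 0] = 1.0                       # avoid division by zero
    surface = np.fft.ifft2(cross / mag * mag ** alpha).real
    return np.unravel_index(np.argmax(surface), surface.shape)

rng = np.random.default_rng(1)
a = rng.uniform(size=(64, 64))
b = np.roll(a, shift=(5, 12), axis=(0, 1))    # b is a circularly shifted a
shift = arpc_shift(a, b)
```

With the phase term intact, the correlation surface still peaks at the true shift, while the partially restored magnitude widens the peak, which is the robustness trade-off the abstract describes.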

Purpose: Recently, template matching has been shown to be able to track tumor motion on cine-MRI images. However, artifacts such as deformation, rotation, and/or out-of-plane movement could seriously degrade the performance of this technique. In this work, we demonstrate the utility of multiple templates derived from different phases of tumor motion in reducing the negative effects of artifacts and improving the accuracy of template matching methods. Methods: Data from 2 patients with large tumors and significant tumor deformation were analyzed from a group of 12 patients from an earlier study. Cine-MRI (200 frames) imaging was performed while the patients were instructed to breathe normally. Ground truth tumor position was established on each frame manually by a radiation oncologist. Tumor positions were also automatically determined using template matching with either single or multiple (5) templates. The tracking errors, defined as the absolute differences in tumor positions determined by the manual and automated methods, when using either single or multiple templates were compared in both the AP and SI directions. Results: Using multiple templates reduced the tracking error of template matching. In the SI direction, where the tumor movement and deformation were significant, the mean tracking error decreased from 1.94 mm to 0.91 mm (Patient 1) and from 6.61 mm to 2.06 mm (Patient 2). In the AP direction, where the tumor movement was small, the reduction of the mean tracking error was significant in Patient 1 (from 3.36 mm to 1.04 mm), but not in Patient 2 (from 3.86 mm to 3.80 mm). Conclusion: This study shows the effectiveness of using multiple templates in improving the performance of template matching when artifacts like large tumor deformation or out-of-plane motion exist. Accurate tumor tracking capabilities can be integrated with MRI guided radiation therapy systems. This work was supported in part by grants from NIH/NCI CA 124766 and Varian
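
The multiple-template idea can be sketched with a normalized cross-correlation tracker that scores each candidate window against all templates and keeps the best score; the data here are synthetic and the matching measure is an assumption (the abstract does not name one).

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track(frame, templates):
    """Score every window against ALL templates (e.g. cut from different
    breathing phases) and keep the best, so that deformation away from
    any single template does not break the tracker."""
    th, tw = templates[0].shape
    best_pos, best_score = None, -2.0
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            window = frame[r:r + th, c:c + tw]
            score = max(ncc(window, t) for t in templates)
            if score > best_score:
                best_pos, best_score = (r, c), score
    return best_pos, best_score

rng = np.random.default_rng(2)
frame = rng.uniform(size=(30, 30))
templates = [rng.uniform(size=(6, 6)) for _ in range(2)]
frame[10:16, 4:10] = templates[1]     # the current "phase" matches template 2
pos, score = track(frame, templates)
```

Taking the maximum score over the template set is what lets the tracker follow the tumor through motion phases that a single reference template would miss.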

Although accurate and continuous assessment of cerebral vasculature status is highly desirable for managing cerebral vascular diseases, no such method exists in current clinical practice. The present work introduces a novel method for real-time detection of cerebral vasodilatation and vasoconstriction using pulse morphological template matching. Templates consisting of morphological metrics of the cerebral blood flow velocity (CBFV) pulse, measured at the middle cerebral artery using Transcranial Doppler, are obtained by applying a morphological clustering and analysis of intracranial pulse algorithm to data collected during induced vasodilatation and vasoconstriction in a controlled setting. These templates were then employed to define a vasodilatation index (VDI) and a vasoconstriction index (VCI) for any inquiry data segment as the percentage of the metrics demonstrating a trend consistent with those obtained from the training dataset. Validation of the proposed method on a dataset of CBFV signals of 27 healthy subjects, collected with a protocol similar to that of the training dataset, during hyperventilation (and CO₂ rebreathing tests) shows a sensitivity of 92% (and 82%) for detection of vasodilatation (and vasoconstriction) and a specificity of 90% (and 92%), respectively. Moreover, the proposed method of detection of vasodilatation (vasoconstriction) is capable of rejecting all the cases associated with vasoconstriction (vasodilatation) and outperforms two other conventional techniques by at least 7% for vasodilatation and 19% for vasoconstriction. PMID:23226385

In civil engineering applications, ground-penetrating radar (GPR) is one of the main nondestructive techniques, based on the refraction and reflection of electromagnetic waves, used to probe the subsurface and in particular to detect damage (cracks, delaminations, texture changes…) and buried objects (utilities, rebars…). A UWB ground-coupled radar operating in the frequency band [0.46; 4] GHz and made of bowtie slot antennas has been used because, compared to an air-launched radar, it increases the energy transfer of electromagnetic radiation into the subsurface and the penetration depth. This paper proposes an original adaptation of the generic template matching algorithm to GPR images to recognize, localize, and parametrically characterize a specific pattern associated with a hyperbola signature in the two main polarizations. The processing of a radargram (B-scan) is based on four main steps. The first step consists of pre-processing and scaling. The second step uses template matching to isolate and localize individual hyperbola signatures in an environment containing unwanted reflections, noise, and overlapping signatures. The algorithm requires generating and collecting a set of reference hyperbola templates made of a small reflection pattern in the vicinity of the apex in order to further analyze multiple time signals of embedded targets in an image. The standard Euclidean distance between the shifted template and a local zone in the radargram allows us to obtain a map of distances. A user-defined threshold allows us to select a reduced number of zones having a high similarity measure. In the third step, each zone is analyzed to detect minimum or maximum discrete amplitudes belonging to the first arrival times of a hyperbola signature. In the fourth step, the extracted discrete data (i, j) are fitted by a parametric hyperbola model based on the straight-ray-path hypothesis and using a constrained least squares criterion associated with parameter ranges, that are the position, the

Template matching is one of the oldest techniques in computer vision. It has been applied in a variety of applications using cross correlation, or derivatives of it, as the distance measurement. But so far, its success in object tracking has been very limited despite the promising structural similarity search it performs. Based on an analysis of the underlying reasons, a new kind of measurement is proposed to open up far more of the potential the structural search inherently offers. This new measurement does not sum up differences in color space like cross correlation but outputs the percentage of matching pixels. As a key feature, local color variations are considered in order to properly handle the different character of homogeneous and highly structured regions and to model the relations between them. Furthermore, relevant differences between templates are emphasized while irrelevant contributions to the measurement function are largely suppressed in order to avoid unnecessary distortions of the measurement and, therefore, of the search decision. The presented results document the advantages in comparison to measurements known from the literature. Different objects and persons in LWIR and VIS image sequences are tracked to illustrate the performance and the benefit in a broad field of applications.
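
A stripped-down version of the proposed measure, counting matching pixels instead of summing color-space differences, might look like this; the fixed tolerance is a simplification of the paper's locally adaptive handling of color variation.

```python
import numpy as np

def match_percentage(window, template, tol=10):
    """Fraction (0..1) of pixels whose absolute difference from the
    template stays within a tolerance; unlike cross correlation, one
    badly mismatched pixel cannot dominate the score."""
    window = np.asarray(window, dtype=float)
    template = np.asarray(template, dtype=float)
    return float(np.mean(np.abs(window - template) <= tol))

template = np.full((10, 10), 100.0)
perfect = match_percentage(template, template)            # 1.0
half = template.copy()
half[:5, :] = 200.0                                       # top half ruined
partial = match_percentage(half, template)                # 0.5
```

Because each pixel contributes at most one "vote", a partially occluded object still scores in proportion to its visible area, which is the behavior the abstract argues cross correlation lacks.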

Carbonate sands are composed of relatively few particle types (e.g., halimeda, coralline algae, corals, mollusks, and foraminifera). The shape of a particular sand grain is highly dependent on the particle type of which it is composed. Previous studies of modern carbonate environments show that the composition of sand substrates from different subenvironments is dependent on the organisms that inhabit them. These depositional environments can thus be distinguished from each other according to their constituent particle compositions and, therefore, also by analysis of particle shapes. Template (shape) matching can be accomplished only after the digitized shapes have been normalized to a unit-sized circle and registered. Registration involves the simple computation of shape-specific points within, on, or near the 2-dimensional contour of the sand grain. Shapes are subsequently rotated so that all of the shapes are in a similar position relative to their shape-specific points, allowing more meaningful comparisons between particles. After registration, 36 equi-angular radial lengths are calculated for each grain from the center of mass to the boundary outline. A template-matching algorithm was devised in order to determine the relative percentages of several reference shape types, representing the constituents contained within 35 samples from 4 carbonate beaches and associated subtidal environments from the Florida Keys. Reference shapes may be chosen arbitrarily or obtained by computing average shapes of the various constituents. The precision of the shape classifications may be enhanced by adding supplemental reference shapes to the algorithm.
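
The normalization and radial-length sampling can be sketched as below; this illustration omits the rotation step that registers shapes to their shape-specific points.

```python
import numpy as np

def radial_signature(contour, n_rays=36):
    """Normalize a closed 2-D contour to a unit-sized circle and sample
    36 equi-angular radial lengths from the centroid, as described for
    the carbonate sand-grain shapes."""
    contour = np.asarray(contour, dtype=float)
    center = contour.mean(axis=0)
    rel = contour - center
    radii = np.hypot(rel[:, 0], rel[:, 1])
    rel /= radii.max()                       # scale to a unit-sized circle
    angles = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    targets = np.arange(n_rays) * 2 * np.pi / n_rays
    sig = []
    for t in targets:                        # nearest contour point per ray
        d = np.abs((angles - t + np.pi) % (2 * np.pi) - np.pi)
        sig.append(np.hypot(*rel[np.argmin(d)]))
    return np.array(sig)

# Sanity check: a circle has a constant radial signature.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
sig = radial_signature(circle)
```

Once every grain is reduced to the same 36-element signature, template matching against reference shapes becomes a simple vector comparison.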

Functional magnetic resonance imaging in resting state (fMRI-RS) constitutes an informative protocol to investigate several pathological and pharmacological conditions. A common approach to studying this data source is through the analysis of changes in the so-called resting state networks (RSNs). These networks correspond to well-defined functional entities that have been associated with different low- and high-order brain functions. RSNs may be characterized by using Independent Component Analysis (ICA). ICA provides a decomposition of the fMRI-RS signal into sources of brain activity, but it lacks information about the nature of the signal, i.e., whether the source is artifactual or not. Recently, a multiple template-matching (MTM) approach was proposed to automatically recognize RSNs in a set of Independent Components (ICs). This method provides valuable information to assess subjects at the individual level. Nevertheless, it lacks a mechanism to quantify how much certainty there is about the existence/absence of each network. This information may be important for the assessment of patients with severely damaged brains, in which RSNs may be greatly affected as a result of the pathological condition. In this work we propose a set of changes to the original MTM that improves the RSN recognition task and also extends the functionality of the method. The key points of this improvement are a standardization strategy and a modification of the method's constraints that adds flexibility to the approach. Additionally, we also introduce an analysis of the trustworthiness measurement of each RSN obtained by using the template-matching approach. This analysis consists of a thresholding strategy applied over the computed Goodness-of-Fit (GOF) between the set of templates and the ICs. The proposed method was validated on two independent studies (Baltimore, 23 healthy subjects and Liege, 27 healthy subjects) with different configurations of MTM. Results suggest that the method will provide
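
The abstract does not spell out the GOF formula; a common definition (mean absolute IC value inside the template mask minus the mean outside) together with an illustrative cutoff would look like:

```python
import numpy as np

def goodness_of_fit(ic_map, template_mask):
    """Mean absolute IC activation inside the RSN template minus the mean
    outside it; larger values mean the component matches the template
    better. (This is one common GOF definition, assumed here.)"""
    ic = np.abs(np.asarray(ic_map, dtype=float))
    mask = np.asarray(template_mask, dtype=bool)
    return float(ic[mask].mean() - ic[~mask].mean())

def threshold_networks(gof_by_network, cutoff=0.5):
    """Present/absent call per RSN: keep only networks whose best GOF
    clears a user-chosen cutoff (the value 0.5 is illustrative)."""
    return {name: gof >= cutoff for name, gof in gof_by_network.items()}

ic = np.zeros((8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True
ic[mask] = 3.0                       # strong activation inside the template
gof = goodness_of_fit(ic, mask)
calls = threshold_networks({"DMN": gof, "noise": 0.1})
```

Thresholding the GOF is what turns the matching score into the present/absent trustworthiness call the paper adds on top of the original MTM.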

An accurate algorithm for three-dimensional (3-D) pose recognition of a rigid object is presented. The algorithm is based on adaptive template-matched filtering and local search optimization. When a scene image is captured, a bank of correlation filters is constructed to find the best correspondence between the current view of the target in the scene and a target image synthesized by means of computer graphics. The synthetic image is created using a known 3-D model of the target and an iterative procedure based on local search. Computer simulation results obtained with the proposed algorithm in synthetic and real-life scenes are presented and discussed in terms of accuracy of pose recognition in the presence of noise, cluttered background, and occlusion. Experimental results show that our proposal presents high accuracy for 3-D pose estimation using monocular images.

In this paper we introduce a novel algorithm for automatic fault detection in textures. We study the problem of finding a defect in regularly textured images with an approach based on a template-matching principle. We aim at registering patches of an input image in a defect-free reference sample according to some admissible transformations. This approach becomes feasible by introducing the so-called discrepancy norm as fitness function, which shows particular behavior such as monotonicity and a Lipschitz property. The proposed approach relies on only a few parameters, which makes it an easily adaptable algorithm for industrial applications and, above all, avoids complex tuning of configuration parameters. Experiments demonstrate the feasibility and the reliability of the proposed algorithms with textures from real-world applications in the context of quality inspection of woven textiles.

It is generally acknowledged that biological vision presents nonlinear characteristics, yet linear filtering accounts of visual processing are ubiquitous. The template-matching operation implemented by the linear-nonlinear cascade (linear filter followed by static nonlinearity) is the most widely adopted computational tool in systems neuroscience. This simple model achieves remarkable explanatory power while retaining analytical tractability, potentially extending its reach to a wide range of systems and levels in sensory processing. The extent of its applicability to human behaviour, however, remains unclear. Because sensory stimuli possess multiple attributes (e.g. position, orientation, size), the question of applicability may be posed by considering each attribute one at a time in relation to a family of linear-nonlinear models, or by considering all attributes collectively in relation to a specified implementation of the linear-nonlinear cascade. We demonstrate that human visual processing can operate under conditions that are indistinguishable from linear-nonlinear transduction with respect to substantially different stimulus attributes of a uniquely specified target signal with associated behavioural task. However, no specific implementation of a linear-nonlinear cascade is able to account for the entire collection of results across attributes; a satisfactory account at this level requires the introduction of a small gain-control circuit, resulting in a model that no longer belongs to the linear-nonlinear family. Our results inform and constrain efforts at obtaining and interpreting comprehensive characterizations of the human sensory process by demonstrating its inescapably nonlinear nature, even under conditions that have been painstakingly fine-tuned to facilitate template-matching behaviour and to produce results that, at some level of inspection, do conform to linear filtering predictions. They also suggest that compliance with linear transduction may be

Purpose: Accurate determination of tumor position is crucial for successful application of motion compensated radiotherapy in lung cancer patients. This study tested the performance of an automated template matching algorithm in tracking the tumor position on cine-MR images by examining the tracking error and further comparing the tracking error to the interoperator variability of three human reviewers. Methods: Cine-MR images of 12 lung cancer patients were analyzed. Tumor positions were determined both automatically with template matching and manually by a radiation oncologist and two additional reviewers trained by the radiation oncologist. Performance of the automated template matching was compared against the ground truth established by the radiation oncologist. Additionally, the tracking error of template matching, defined as the difference in the tumor positions determined with template matching and the ground truth, was investigated and compared to the interoperator variability for all patients in the anterior-posterior (AP) and superior-inferior (SI) directions, respectively. Results: The median tracking error for ten out of the 12 patients studied in both the AP and SI directions was less than 1 pixel (= 1.95 mm). Furthermore, the median tracking error for seven patients in the AP direction and nine patients in the SI direction was less than half a pixel (= 0.975 mm). The median tracking error was positively correlated with the tumor motion magnitude in both the AP (R = 0.55, p = 0.06) and SI (R = 0.67, p = 0.02) directions. Also, a strong correlation was observed between tracking error and interoperator variability (y = 0.26 + 1.25x, R = 0.84, p < 0.001), with the latter being larger. Conclusions: Results from this study indicate that the performance of template matching is comparable with or better than that of manual tumor localization. This study serves as a preliminary investigation towards developing online motion tracking techniques for hybrid MRI
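
The tracking-error statistic is straightforward to compute; in this sketch the 1.95 mm pixel size is taken from the abstract, while the sample positions are made up.

```python
import numpy as np

PIXEL_MM = 1.95   # in-plane pixel size quoted in the abstract

def median_tracking_error_mm(auto_px, manual_px, pixel_mm=PIXEL_MM):
    """Median over frames of the absolute difference between automatically
    tracked and manually annotated tumor positions, in millimetres."""
    err_px = np.abs(np.asarray(auto_px, float) - np.asarray(manual_px, float))
    return float(np.median(err_px) * pixel_mm)

auto = [10.0, 11.0, 10.5, 12.0]     # template matching output (pixels)
manual = [10.0, 10.0, 10.0, 10.0]   # ground truth from the oncologist
err = median_tracking_error_mm(auto, manual)   # 0.75 px -> 1.4625 mm
```

Using the median rather than the mean makes the summary statistic robust to the occasional frame where the tracker briefly loses the tumor.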

Detection of interictal discharges is a key element of interpreting EEGs during the diagnosis and management of epilepsy. Because interpretation of clinical EEG data is time-intensive and reliant on experts who are in short supply, there is a great need for automated spike detectors. However, attempts to develop general-purpose spike detectors have so far been severely limited by a lack of expert-annotated data. Huge databases of interictal discharges are therefore in great demand for the development of general-purpose detectors. Detailed manual annotation of interictal discharges is time consuming, which severely limits the willingness of experts to participate. To address such problems, a graphical user interface "SpikeGUI" was developed in our work for the purposes of EEG viewing and rapid interictal discharge annotation. "SpikeGUI" substantially speeds up the task of annotating interictal discharges using a custom-built algorithm based on a combination of template matching and online machine learning techniques. While the algorithm is currently tailored to annotation of interictal epileptiform discharges, it can easily be generalized to other waveforms and signal types. PMID:25570976

Target tracking in forward-looking infrared (FLIR) video sequences is a challenging problem because of various limitations such as low signal-to-noise ratio (SNR), image blurring, partial occlusion, and low texture information, which often lead to missing targets or tracking nontarget objects. To alleviate these problems, we developed a novel algorithm that involves local-deviation-based image preprocessing as well as fringe-adjusted joint transform correlation (FJTC)- and template matching (TM)-based target detection and tracking. The local-deviation-based preprocessing technique is used to suppress smooth texture such as background and to enhance target edge information. However, for complex situations such as the target blending with the background, partial occlusion of the target, or proximity of the target to other similar nontarget objects, FJTC may produce a false alarm. For such cases, the TM-based detection technique is used to compensate for FJTC breaking points by use of cross-correlation coefficients. Finally, a robust tracking algorithm is developed by use of both FJTC and TM techniques, called the FJTC-TM technique. The performance of the proposed FJTC-TM algorithm is tested with real-life FLIR image sequences. PMID:15449474

Theories of visual search postulate that the selection of targets amongst distractors involves matching visual input to a top-down attentional template. Previous work has provided evidence that feature-based attentional templates affect visual processing globally across the visual field. In the present study, we asked whether more naturalistic, category-level attentional templates also modulate visual processing in a spatially global and obligatory way. Subjects were cued to detect people or cars in a diverse set of photographs of real-world scenes. On a subset of trials, silhouettes of people and cars appeared in search-irrelevant locations that subjects were instructed to ignore, and subjects were required to respond to the location of a subsequent dot probe. In three experiments, results showed a consistency effect on dot-probe trials: dot probes were detected faster when they appeared in the location of the cued category compared with the non-cued category, indicating attentional capture by template-matching stimuli. Experiments 1 and 2 showed that this capture was involuntary: consistency effects persisted under conditions in which attending to silhouettes of the cued category was detrimental to performance. Experiment 3 tested whether these effects could be attributed to non-attentional effects related to the processing of the category cues. Results showed a consistency effect when subjects searched for category exemplars but not when they searched for objects semantically related to the cued category. Together, these results indicate that attentional templates for familiar object categories affect visual processing across the visual field, leading to involuntary attentional capture by template-matching stimuli. PMID:25810159

Since the 1999 Izmit earthquake, the Main Marmara Fault (MMF) has represented a 150 km unruptured segment of the North Anatolian Fault located below the Marmara Sea. One of the principal issues for seismic hazard assessment in the region is to know whether the MMF is totally or partially locked and where the nucleation of the major forthcoming event is going to take place. The area is now one of the best-instrumented fault systems in Europe. Since 2007, various seismic networks, comprising broadband, short-period and OBS stations, have been deployed in order to monitor continuously the seismicity along the MMF and the related fault systems. A recent analysis of the seismicity recorded during the 2007-2012 period has provided new insights into the recent evolution of this important regional seismic gap. This analysis was based on events detected with an STA/LTA procedure and manually picked P and S wave arrival times (Schmittbuhl et al., 2015). In order to extend the level of detail and to fully take advantage of the dense seismic network, we improved the seismic catalog using an automatic earthquake detection technique based on a template-matching approach. This approach uses known earthquake seismic signals in order to detect new events similar to them via waveform cross-correlation. To set up the methodology and verify the accuracy and robustness of the results, we initially focused on the eastern part of the Marmara Sea (Cinarcik basin) and compared the new detections with those manually identified. Through the massive analysis of cross-correlations based on template scanning of the continuous recordings, we construct a refined catalog of earthquakes for the Marmara Sea for the 2007-2014 period. Our improved earthquake catalog will provide an effective tool to improve catalog completeness, to monitor and study the fine details of the time-space distribution of events, to characterize the repeating earthquake source processes and to understand the mechanical state of
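The core of waveform template matching is sliding a known event's waveform along the continuous recording and flagging windows whose normalized cross-correlation exceeds a threshold. The following is a toy single-channel sketch under assumed list-of-samples inputs; real matched-filter catalogs sum or average correlations over many stations and components before thresholding:

```python
import math

def detect_events(trace, template, threshold=0.8):
    """Slide a waveform template along a continuous trace and return the
    sample offsets where the normalized cross-correlation exceeds the
    detection threshold (single-channel toy version)."""
    n = len(template)
    mean_t = sum(template) / n
    dt = [t - mean_t for t in template]
    norm_t = math.sqrt(sum(x * x for x in dt))
    detections = []
    for i in range(len(trace) - n + 1):
        win = trace[i:i + n]
        mean_w = sum(win) / n
        dw = [w - mean_w for w in win]
        norm_w = math.sqrt(sum(x * x for x in dw))
        if norm_t > 0 and norm_w > 0:
            cc = sum(a * b for a, b in zip(dt, dw)) / (norm_t * norm_w)
            if cc >= threshold:
                detections.append(i)
    return detections
```

Because the score is normalized, small events with the same source location and mechanism as the template correlate strongly even when their absolute amplitude is far below the noise floor of an STA/LTA detector.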

The automated processing of retinal images is a widely researched area in medical image analysis. Screening systems based on the automated and accurate recognition of retinopathies enable the earlier diagnosis of diseases like diabetic retinopathy, hypertension and their complications. The segmentation of the vascular system is a crucial task in the field: on the one hand, the accurate extraction of the vessel pixels aids the detection of other anatomical parts (like the optic disc; Hoover and Goldbaum, 2003) and lesions (like microaneurysms; Sopharak et al., 2013); on the other hand, the geometrical features of the vascular system and their temporal changes are shown to be related to diseases, like the vessel tortuosity to Fabry disease (Sodi et al., 2013) and the arteriolar-to-venular (A/V) ratio to hypertension (Pakter et al., 2005). In this study, a novel technique based on template matching and contour reconstruction is proposed for the segmentation of the vasculature. In the template-matching step, generalized Gabor function-based templates are used to extract the center lines of vessels. Then, the vessel contours are reconstructed from intensity characteristics measured in training databases. The method was trained and tested on two publicly available databases, DRIVE and STARE, and reached an average accuracy of 0.9494 and 0.9610, respectively. We have also carried out cross-database tests and found that the accuracy scores are higher than those of any previous technique trained and tested on the same database. PMID:26766207
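A Gabor-type template pairs a sinusoidal carrier with a Gaussian envelope, which matches the bright-ridge cross-section of a vessel. The sketch below is a hypothetical, simplified 1-D profile, not the paper's generalized Gabor formulation; `half_width`, `sigma` and `wavelength` are illustrative parameters:

```python
import math

def gabor_template(half_width, sigma, wavelength):
    """1-D Gabor profile sampled at integer pixel offsets: a cosine
    carrier modulated by a Gaussian envelope. Correlating such a profile
    across a vessel cross-section peaks at the vessel center line."""
    return [math.exp(-x * x / (2.0 * sigma * sigma)) *
            math.cos(2.0 * math.pi * x / wavelength)
            for x in range(-half_width, half_width + 1)]
```

Rotating and scaling a bank of such templates lets the matcher respond to vessels of different orientations and calibers.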

This paper presents a customized three-dimensional template-matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Distinctive features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced, aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable for initializing the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is proposed. PMID:25785309

We analyze the frequency-magnitude distribution (FMD) of recent seismic sequences thought to be induced by wastewater injection and hydraulic fracturing in the Central and Eastern U.S. to investigate their physical origin and improve hazard estimates. Multistation template matching is utilized to increase the number of events analyzed by lowering the magnitude of detection. In cases where local deployments are available, we demonstrate that the FMDs obtained through template matching using regional data are comparable to those obtained from traditional detection using the local deployment. Since deployments usually occur after seismicity has already been identified, catalogs constructed with regional data offer the advantage of providing a more complete history of the seismicity. We find two primary groups of FMDs for induced sequences: those that generally follow the Gutenberg-Richter power law and those that generally do not. All of the induced sequences are typically characterized by swarm-like behavior, but the non-power-law FMDs are also characterized by a clustering of events at low magnitudes and particularly low aftershock productivity for a continental interior. Each of the observations in the non-power-law FMD cases is predicted by numerical simulations of a seismogenic zone governed by a viscoelastic damage rheology with low effective viscosity in the fault zone. Such a reduction in effective viscosity is expected if fluid injection increases fluid pressures in the fault zone to the point that the fault zone begins to dilate.
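Assessing whether a catalog follows the Gutenberg-Richter power law typically starts from an estimate of its b-value. A common choice, shown here as a generic stand-in rather than the paper's specific method, is Aki's maximum-likelihood estimator b = log10(e) / (mean(M) - Mc) for a catalog complete above magnitude Mc:

```python
import math

def b_value(magnitudes, m_min):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki's estimator)
    for events at or above the completeness magnitude m_min."""
    mags = [m for m in magnitudes if m >= m_min]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)
```

Lowering the detection magnitude through template matching extends the usable range below the original Mc, which is what makes the FMD comparison between regional and local catalogs possible.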

We describe a new supervised learning-based template-matching approach for segmenting cell nuclei from microscopy images. The method uses examples selected by a user to build a statistical model which captures the texture and shape variations of the nuclear structures from a given dataset to be segmented. Segmentation of subsequent, unlabeled images is then performed by finding the model instance that best matches (in the normalized cross-correlation sense) the local neighborhood in the input image. We demonstrate the application of our method to segmenting nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across the several imaging modalities studied. Results also demonstrate that, relative to several existing methods, the template-based method we propose is more robust: it better handles variations in illumination and in texture across imaging modalities, provides smoother and more accurate segmentation borders, and better handles cluttered nuclei. PMID:23568787

Development of a computational decision aid for a new medical imaging modality is typically a long and complicated process. It consists of collecting data in the form of images and annotations, development of image processing and pattern recognition algorithms for analysis of the new images, and finally testing of the resulting system. Since new imaging modalities are developed more rapidly than ever before, any effort to decrease the time and cost of this development process could maximize the benefit of the new imaging modality to patients by making the computer aids quickly available to the radiologists that interpret the images. In this paper, we make a step in this direction and investigate the possibility of translating the knowledge about the detection problem from one imaging modality to another. Specifically, we present a computer-aided detection (CAD) system for mammographic masses that uses a mutual information-based template-matching scheme with intelligently selected templates. We have previously presented the principles of template matching with mutual information for mammography. In this paper, we present an implementation of those principles in a complete computer-aided detection system. The proposed system, through an automatic optimization process, chooses the most useful templates (mammographic regions of interest) using a large database of previously collected and annotated mammograms. Through this process, the knowledge about the task of detecting masses in mammograms is incorporated into the system. Then, we evaluate whether our system developed for screen-film mammograms can be successfully applied not only to other mammograms but also to digital breast tomosynthesis (DBT) reconstructed slices without adding any DBT cases for training. Our rationale is that since mutual information is known to be a robust inter-modality image similarity measure, it has high potential for transferring knowledge between modalities in the context of the mass detection
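Mutual information scores how much knowing one patch's intensity tells you about the other's, without assuming a linear intensity relationship, which is what makes it robust across modalities. A minimal histogram-based sketch for discrete intensity values (the actual system's binning and optimization are not reproduced here):

```python
import math
from collections import Counter

def mutual_information(patch_a, patch_b):
    """Histogram-based mutual information (in bits) between two equally
    sized patches given as lists of discrete intensity values."""
    n = len(patch_a)
    count_a = Counter(patch_a)
    count_b = Counter(patch_b)
    count_ab = Counter(zip(patch_a, patch_b))
    mi = 0.0
    for (a, b), c in count_ab.items():
        # p(a,b) * log2( p(a,b) / (p(a) * p(b)) )
        mi += (c / n) * math.log2(c * n / (count_a[a] * count_b[b]))
    return mi
```

Identical patches yield MI equal to the patch entropy, while statistically independent patches yield MI near zero, so maximizing MI over template placements behaves like a modality-agnostic correlation score.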

This article presents an original algorithm created to detect and count trees in orchards using very high resolution images. The algorithm is based on an adaptation of the "template matching" image processing approach, in which the template is based on a "geometrical-optical" model created from a series of parameters, such as illumination angles, maximum and ambient radiance, and tree size specifications. The algorithm is tested on four images from different regions of the world and different crop types. These images all have < 1 meter spatial resolution and were downloaded from the Google Earth application. Results show that the algorithm is very efficient at detecting and counting trees as long as their spectral and spatial characteristics are relatively constant. For walnut, mango and orange trees, the overall accuracy was clearly above 90%. However, the overall success rate for apple trees fell under 75%. It appears that the openness of the apple tree crown is most probably responsible for this poorer result. The algorithm is fully explained with a step-by-step description. At this stage, the algorithm still requires quite a bit of user interaction. The automatic determination of most of the required parameters is under development.

One of the main challenges in automatic target tracking applications is the need to maintain a low computational footprint, especially when dealing with real-time scenarios and the limited resources of embedded environments. In this context, significant results can be obtained by using forward-looking infrared sensors capable of providing distinctive features for targets of interest. In fact, due to their nature, forward-looking infrared (FLIR) images lend themselves to being used with extremely small footprint techniques based on the extraction of target intensity profiles. This work proposes a method for increasing the computational efficiency of template-based target tracking algorithms. In particular, the speed of the algorithm is improved by using a dynamic threshold that narrows the number of computations, thus reducing both execution time and resource usage. The proposed approach has been tested on several datasets and compared to several target tracking techniques. The gathered results, in terms of both theoretical analysis and experimental data, showed that the proposed approach is able to achieve the same robustness as reference algorithms while reducing the number of operations needed and the processing time. PMID:25093344

Spinal cord segmentation is a developing area of research intended to aid the processing and interpretation of advanced magnetic resonance imaging (MRI). For example, high resolution three-dimensional volumes can be segmented to provide a measurement of spinal cord atrophy. Spinal cord segmentation is difficult due to the variety of MRI contrasts and the variation in human anatomy. In this study we propose a new method of spinal cord segmentation based on one-dimensional template matching and provide several metrics that can be used to compare with other segmentation methods. A set of ground-truth data from 10 subjects was manually segmented by two different raters. These ground-truth data formed the basis of the segmentation algorithm. A user was required to manually initialize the spinal cord center line on new images, taking less than one minute. Template matching was used to segment the new cord and a refined center line was calculated based on multiple centroids within the segmentation. Arc distances down the spinal cord and cross-sectional areas were calculated. Inter-rater validation was performed by comparing the two manual raters (n = 10). Semi-automatic validation was performed by comparing the two manual raters to the semi-automatic method (n = 10). Comparing the semi-automatic method to one of the raters yielded a Dice coefficient of 0.91 +/- 0.02 for ten subjects, a mean distance between spinal cord center lines of 0.32 +/- 0.08 mm, and a Hausdorff distance of 1.82 +/- 0.33 mm. The absolute variation in cross-sectional area was comparable for the semi-automatic method versus manual segmentation when compared to inter-rater manual segmentation. The results demonstrate that this novel segmentation method performs as well as a manual rater for most segmentation metrics. It offers a new approach to study spinal cord disease and to quantitatively track changes within the spinal cord in an individual case and across cohorts of subjects. PMID:26445367
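The Dice coefficient reported above is the standard overlap metric for comparing two segmentations, 2|A∩B| / (|A| + |B|). A minimal sketch over masks represented as collections of pixel coordinates:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks
    given as collections of (row, col) pixel coordinates. Returns 1.0 for
    identical masks and 0.0 for disjoint ones."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A Dice of 0.91, as reported, means the semi-automatic and manual masks share over 90% of their combined area, on par with typical inter-rater agreement.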

An outstanding question in geophysics is the degree to which the newly discovered types of slow fault slip are related to their destructive cousin - the earthquake. Here, we utilize a local network along the Oaxacan segment of the Middle American subduction zone to investigate the potential relationship between slow slip, non-volcanic tremor (NVT), and earthquakes along the subduction megathrust. We have developed a multi-station "template matching" waveform cross-correlation technique which is able to detect and locate events several orders of magnitude smaller than would be possible using more traditional techniques. Our template-matching procedure is also capable of consistently locating events which occur during periods of increased background activity (e.g., during productive NVT, loud cultural noise, or after larger earthquakes) because the multi-station detector is finely tuned to events with similar hypocentral location and focal mechanism. The local network in the Oaxaca region allows us to focus on documented megathrust earthquake swarms, because slow slip is hypothesized to be the cause of earthquake swarms in some tectonic environments. We identify a productive earthquake swarm in July 2006 (~600 similar earthquakes detected), which occurred during a week-long episode of productive tremor and slow slip. Families of events in this sequence were also active during larger and longer slow slip events, which provides a potential link between slow slip in the transition zone and earthquakes at the downdip end of the seismogenic portion of the megathrust. Because template-matching techniques only detect similar signals, detected waveforms can be stacked together to produce higher signal-to-noise ratios or cross-correlated against each other to produce precise relative phase arrival times. We are using the refined signals to look for evidence of expansion or propagation of hypocenters during these earthquake swarms, which could be used as a

Long-period (LP, 0.5-5 Hz) seismicity, observed at volcanoes worldwide, is a recognized signature of unrest and eruption. Cyclic LP “drumbeating” was the characteristic seismicity accompanying the sustained dome-building phase of the 2004–2008 eruption of Mount St. Helens (MSH), WA. However, together with the LP drumbeating was a near-continuous, randomly occurring series of tiny LP seismic events (LP “subevents”), which may hold important additional information on the mechanism of seismogenesis at restless volcanoes. We employ template matching, phase-weighted stacking, and full-waveform inversion to image the source mechanism of one multiplet of these LP subevents at MSH in July 2005. The signal-to-noise ratios of the individual events are too low to produce reliable waveform-inversion results, but the events are repetitive and can be stacked. We apply network-based template matching to 8 days of continuous velocity waveform data from 29 June to 7 July 2005, using a master event to detect 822 network triggers. We stack waveforms for 359 high-quality triggers at each station and component, using a combination of linear and phase-weighted stacking to produce clean stacks for use in waveform inversion. The derived source mechanism points to the volumetric oscillation (~10 m³) of a subhorizontal crack located at shallow depth (~30 m) in an area to the south of Crater Glacier in the southern portion of the breached MSH crater. A possible excitation mechanism is the sudden condensation of metastable steam from a shallow pressurized hydrothermal system as it encounters cool meteoric water in the outer parts of the edifice, perhaps supplied by snow melt.

Background The robust identification of isotope patterns originating from peptides being analyzed through mass spectrometry (MS) is often significantly hampered by noise artifacts and the interference of overlapping patterns arising e.g. from post-translational modifications. As the classification of the recorded data points into either ‘noise’ or ‘signal’ lies at the very root of essentially every proteomic application, the quality of the automated processing of mass spectra can significantly influence the way the data might be interpreted within a given biological context. Results We propose non-negative least squares/non-negative least absolute deviation regression to fit a raw spectrum by templates imitating isotope patterns. In a carefully designed validation scheme, we show that the method exhibits excellent performance in pattern picking. It is demonstrated that the method is able to disentangle complicated overlaps of patterns. Conclusions We find that regularization is not necessary to prevent overfitting and that thresholding is an effective and user-friendly way to perform feature selection. The proposed method avoids problems inherent in regularization-based approaches, comes with a set of well-interpretable parameters whose default configuration is shown to generalize well without the need for fine-tuning, and is applicable to spectra of different platforms. The R package IPPD implements the method and is available from the Bioconductor platform (http://bioconductor.fhcrc.org/help/bioc-views/devel/bioc/html/IPPD.html). PMID:23137144
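The core fitting step can be pictured as non-negative least squares: find non-negative amplitudes for a set of template patterns whose weighted sum best reproduces the raw spectrum. The sketch below uses simple coordinate descent with clamping at zero as a stand-in for the regression machinery in the IPPD package; templates and spectrum are plain lists of intensities sampled on a common m/z grid:

```python
def nnls_fit(templates, spectrum, iters=200):
    """Fit a spectrum as a non-negative combination of isotope-pattern
    templates via coordinate descent, clamping each amplitude at zero.
    templates: k lists of length n; spectrum: list of length n.
    Returns k non-negative amplitudes."""
    k, n = len(templates), len(spectrum)
    coef = [0.0] * k
    for _ in range(iters):
        for j in range(k):
            tj = templates[j]
            # residual with template j's current contribution excluded
            resid = [spectrum[i] - sum(coef[m] * templates[m][i]
                                       for m in range(k) if m != j)
                     for i in range(n)]
            num = sum(t * r for t, r in zip(tj, resid))
            den = sum(t * t for t in tj)
            coef[j] = max(0.0, num / den) if den else 0.0
    return coef
```

The non-negativity constraint is what disentangles overlapping patterns: a template can only explain signal, never cancel another template's contribution, so spurious oscillating solutions are excluded and near-zero amplitudes can be thresholded away, as the paper advocates.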

In this study, a system for non-contact in-situ measurement of strain during tensile testing of thin films was implemented, using a CCD camera as the sensing device and black-pen markings on the specimen surface. To improve measurement accuracy, this paper proposes a new method for measuring strain during the tensile test of a micrometer-sized specimen. The pixel size of the CCD camera determines the measurement resolution, but it cannot satisfy the resolution required in thin-film tensile tests because the extension of the specimen is very small during the test. To increase the measurement resolution, the suggested method performs accurate subpixel matching by applying second-order polynomial interpolation to conventional template matching. The algorithm was developed to calculate the subpixel location providing the best matching value by performing one-dimensional polynomial interpolation on the results of pixel-based matching in a local region of the image. The measurement resolution was less than 0.01 times the original pixel size. To verify the reliability of the system, a tensile test was performed on BeNi thin film, which is widely used as a material in micro-probe tips. Tensile tests were performed and strains were measured using the proposed method, as well as a capacitance-type displacement sensor for comparison. It is demonstrated that the new strain measurement system can effectively describe the behavior of materials after yield during the tensile test of a microscale specimen, with easy setup and better accuracy.
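Second-order (parabolic) subpixel refinement fits a parabola through the best integer-pixel matching score and its two neighbours, then takes the parabola's vertex as the true peak. A minimal sketch of that standard formula (the paper's full 2-D pipeline is not reproduced):

```python
def subpixel_peak(c_left, c_center, c_right):
    """Refine an integer-pixel matching peak by fitting a second-order
    polynomial through the best score and its two neighbours. Returns
    the fractional offset of the parabola's vertex relative to the
    centre pixel, in the range (-0.5, 0.5) for a true interior peak."""
    denom = c_left - 2.0 * c_center + c_right
    if denom == 0:
        return 0.0  # flat neighbourhood: no refinement possible
    return 0.5 * (c_left - c_right) / denom
```

Adding this fractional offset to the integer peak position is what pushes the effective resolution well below one pixel, consistent with the 0.01-pixel figure reported above.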

A number of methods have been developed for the automatic identification and delineation of individual tree crowns from high spatial resolution satellite images to provide support for the management and maintenance of forests in both natural and urban environments. In this paper we present a method that integrates a Marked Point Process (MPP) model and Template Matching (TM) to extract individual tree crowns in two tropical environments. The MPP is an extension of Markov random fields in which objects are defined by their position within a space of possible positions and their marks (e.g. shape). The MPP has been increasingly used for the recognition of objects, but most implementations use an oversimplified model as the mark. We argue that the MPP could take better advantage of the geometry of trees by incorporating a three-dimensional model as a mark. Conversely, TM is an approach to pattern recognition that takes the characteristics of the objects into account. Our method uses cross-correlation to determine which objects have been correctly targeted by the MPP. The correlation between the illuminated 3D crown model and the image is an inheritance from TM. The methodology was applied to synthetic images and sub-images of the WorldView satellite in two different contexts in Brazil. The results are validated by counting the correctly identified trees and by comparing their size with our interpreted version. Results are encouraging, with 65 to 90% of trees correctly identified. The most difficult cases are mostly related to the existence of clustered tree crowns.

The DOE Division of Waste Products through a lead office at Savannah River is developing a program to immobilize all US high-level nuclear waste for terminal disposal. DOE high-level wastes include those at the Hanford Plant, the Idaho Chemical Processing Plant, and the Savannah River Plant. Commercial high-level wastes, for which DOE is also developing immobilization technology, include those at the Nuclear Fuel Services Plant and any future commercial fuels reprocessing plants. The first immobilization plant is to be the Defense Waste Processing Facility at Savannah River, scheduled for 1983 project submission to Congress and 1989 operation. Waste forms are still being selected for this plant. Borosilicate glass is currently the reference form, but alternate candidates include concretes, calcines, other glasses, ceramics, and matrix forms.

The general problem of computing the false-alarm probability vs the detection-threshold relationship for a bank of correlators is addressed, in the context of maximum-likelihood detection of gravitational waves in additive stationary Gaussian noise. Specific reference is made to chirps from coalescing binary systems. Accurate (lower-bound) approximants for the cumulative distribution of the whole-bank supremum are deduced from a class of Bonferroni-type inequalities. The asymptotic properties of the cumulative distribution are obtained, in the limit where the number of correlators goes to infinity. The validity of numerical simulations made on small-size banks is extended to banks of any size, via a Gaussian-correlation inequality. The result is used to readdress the problem of relating the template density to the fraction of potentially observable sources which could be dismissed as an effect of template space discreteness.
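The simplest member of the Bonferroni class is the first-order union bound: the probability that any correlator in the bank exceeds the threshold is at most the number of correlators times the single-correlator tail probability. A sketch for correlator outputs that are standard Gaussian under the noise-only hypothesis (an assumption for illustration; the paper derives tighter lower-bound approximants for correlated banks):

```python
import math

def false_alarm_bound(threshold, n_templates):
    """First-order Bonferroni (union) upper bound on the whole-bank
    false-alarm probability: P(sup_i x_i > t) <= n * Q(t), where Q is
    the standard Gaussian tail probability, capped at 1."""
    q_tail = 0.5 * math.erfc(threshold / math.sqrt(2.0))
    return min(1.0, n_templates * q_tail)
```

Inverting this relation numerically gives the detection threshold needed to hold the bank-wide false-alarm rate below a target, which is the practical question the cumulative-distribution approximants address.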

Templating processes for creating polymerized hydrogels are reviewed. The use of contact photonic crystals and of non-contact colloidal crystalline arrays as templates are described and applications to chemical sensing and device fabrication are illustrated. Emulsion templating is illustrated in the formation of microporous membranes, and templating on reverse emulsions and double emulsions is described. Templating in solutions of macromolecules and micelles is discussed and then various applications of hydrogel templating on surfactant liquid crystalline mesophases are illustrated, including a nanoscale analogue of colloidal crystalline array templating, except that the bead array in this case is a cubic array of nonionic micelles. The use of particles as templates in making core-shell and hollow microgel beads is described, as is the use of membrane pores as another illustration of confinement templating. PMID:19816529

If society is ever to reap the potential benefits of nuclear energy, technologists must close the fuel cycle completely. A closed cycle equates to a continued supply of fuel and safe reactors, but also reliable and comprehensive closure of waste issues. High-level waste (HLW) disposal in borosilicate glass (BSG) is based on 1970s-era evaluations. This host matrix is very adaptable to sequestering a wide variety of radionuclides found in raffinates from spent fuel reprocessing. However, it is now known that the current system is far from optimal for disposal of the diverse HLW streams, and proven alternatives are available to reduce costs by billions of dollars. The basis for HLW disposal should be reassessed to consider extensive waste form and process technology research and development efforts, which have been conducted by the United States Department of Energy (USDOE), international agencies and the private sector. Matching the waste form to the waste chemistry and using currently available technology could increase the waste content in waste forms to 50% or more and double processing rates. Optimization of the HLW disposal system would accelerate HLW disposition and increase repository capacity. This does not necessarily require developing new waste forms; the emphasis should be on qualifying existing matrices to demonstrate protection equal to or better than the baseline glass performance. Also, this proposed effort does not necessarily require developing new technology concepts. The emphasis is on demonstrating existing technology that is clearly better (reliability, productivity, cost) than current technology, and justifying its use in future facilities or retrofitted facilities. Higher waste processing and disposal efficiency can be realized by performing the engineering analyses and trade studies necessary to select the most efficient methods for processing the full spectrum of wastes across the nuclear complex. This paper will describe technologies being

The introduction of Short Tandem Repeat (STR) DNA was a revolution within a revolution that transformed forensic DNA profiling into a tool that could be used, for the first time, to create national DNA databases. This transformation would not have been possible without the concurrent development of fluorescent automated sequencers, combined with the ability to multiplex several loci together. Use of the polymerase chain reaction (PCR) increased the sensitivity of the method to enable the analysis of a handful of cells. The first multiplexes were simple: 'the quad', introduced by the now-defunct UK Forensic Science Service (FSS) in 1994, rapidly followed by a more discriminating 'six-plex' (Second Generation Multiplex) in 1995 that was used to create the world's first national DNA database. The success of the database rapidly outgrew the functionality of the original system - by the year 2000 a new multiplex of ten loci was introduced to reduce the chance of adventitious matches. The technology was adopted world-wide, albeit with different loci. The political requirement to introduce pan-European databases encouraged standardisation - the development of the European Standard Set (ESS) of markers comprising twelve loci is the latest iteration. Although development has been impressive, the methods used to interpret evidence have lagged behind. For example, the theory to interpret complex DNA profiles (low-level mixtures) was developed fifteen years ago, but only in the past year or so are the concepts starting to be widely adopted. A plethora of different models (some commercial and others non-commercial) have appeared. This has led to a confusing 'debate' about the 'best' one to use. The different models available are described along with their advantages and disadvantages. A section discusses the development of national DNA databases, along with details of an associated controversy in estimating the strength of evidence of matches. Current methodology is limited to

Analysis of risks, environmental effects, process feasibility, and costs for disposal of immobilized high-level wastes in geologic repositories indicates that the disposal system safety has a low sensitivity to the choice of the waste disposal form.

One of the critical steps in designing a secure biometric system is protecting the templates of the users that are stored either in a central database or on smart cards. If a biometric template is compromised, it leads to serious security and privacy threats because, unlike passwords, it is not possible for a legitimate user to revoke his biometric identifiers and switch to another set of uncompromised identifiers. One methodology for biometric template protection is the template transformation approach, where the template, consisting of the features extracted from the biometric trait, is transformed using parameters derived from a user-specific password or key. Only the transformed template is stored, and matching is performed directly in the transformed domain. In this paper, we formally investigate the security strength of template transformation techniques and define six metrics that facilitate a holistic security evaluation. Furthermore, we analyze the security of two well-known template transformation techniques, namely, Biohashing and cancelable fingerprint templates, based on the proposed metrics. Our analysis indicates that both these schemes are vulnerable to intrusion and linkage attacks because it is relatively easy to obtain either a close approximation of the original template (Biohashing) or a pre-image of the transformed template (cancelable fingerprints). We argue that the security strength of template transformation techniques must also consider the computational complexity of obtaining a complete pre-image of the transformed template, in addition to the complexity of recovering the original biometric template.
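The Biohashing transformation analyzed in this abstract can be sketched as a random projection keyed by a user-specific seed, followed by binarization. This is a minimal illustration of the idea, not the scheme as audited in the paper; the feature vector, key value, and bit length are all hypothetical:

```python
import numpy as np

def biohash(features, user_key, n_bits=32):
    """Illustrative BioHashing sketch: project a real-valued biometric
    feature vector onto random orthonormal directions seeded by a
    user-specific key, then binarize by sign.  Only the resulting bit
    string would be stored, so a stolen hash can be revoked by issuing
    a new key."""
    rng = np.random.default_rng(user_key)
    # Random Gaussian matrix, orthonormalized column-wise via QR
    r = rng.standard_normal((len(features), n_bits))
    q, _ = np.linalg.qr(r)            # columns are orthonormal
    return (features @ q > 0).astype(np.uint8)
```

The paper's point is that this projection is nearly invertible when the key is known, which is why a close approximation of the original template can be recovered.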

Presents a method to calculate the amount of high-level radioactive waste by taking into consideration the following factors: the fission process that yields the waste, identification of the waste, the energy required to run a 1-GWe plant for one year, and the uranium mass required to produce that energy. Briefly discusses waste disposal and…

How do we find a target embedded in a scene? Within the framework of signal detection theory, this task is carried out by comparing each region of the scene with a "template," i.e., an internal representation of the search target. Here we ask what form this representation takes when the search target is a complex image with uncertain orientation. We examine three possible representations. The first is the matched filter. Such a representation cannot account for the ease with which humans can find a complex search target that is rotated relative to the template. A second representation attempts to deal with this by estimating the relative orientation of target and match and rotating the intensity-based template. No intensity-based template, however, can account for the ability to easily locate targets that are defined categorically and not in terms of a specific arrangement of pixels. Thus, we define a third template that represents the target in terms of image statistics rather than pixel intensities. Subjects performed a two-alternative, forced-choice search task in which they had to localize an image that matched a previously viewed target. Target images were texture patches. In one condition, match images were the same image as the target and distractors were a different image of the same textured material. In the second condition, the match image was of the same texture as the target (but different pixels) and the distractor was an image of a different texture. Match and distractor stimuli were randomly rotated relative to the target. We compared human performance to pixel-based, pixel-based with rotation, and statistic-based search models. The statistic-based search model was most successful at matching human performance. We conclude that humans use summary statistics to search for complex visual targets. PMID:24627458
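The contrast between a pixel-based and a statistic-based template can be illustrated with a toy search model. The summary statistics chosen here (intensity moments) are an assumption for illustration only; the study's texture statistics are richer, but the key property is the same: the descriptor is unchanged by rotation of the patch.

```python
import numpy as np

def stats_descriptor(patch):
    """Rotation-insensitive summary statistics of pixel intensities
    (illustrative choice: mean, std, skewness, kurtosis)."""
    x = patch.ravel().astype(float)
    m, s = x.mean(), x.std() + 1e-12
    z = (x - m) / s
    return np.array([m, s, (z ** 3).mean(), (z ** 4).mean()])

def statistic_search(target, candidates):
    """Return the index of the candidate whose statistics best match
    the target's, regardless of each candidate's orientation."""
    t = stats_descriptor(target)
    dists = [np.linalg.norm(stats_descriptor(c) - t) for c in candidates]
    return int(np.argmin(dists))
```

A pixelwise matched filter fails on a rotated match because the pixel arrangement changes; the statistic-based model does not, which is the behavior the paper found in human observers.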

Biometrics are a powerful technology for identifying humans both locally and at a distance. In order to perform identification or verification, biometric systems capture an image of some biometric trait of a user or subject. The image is then converted mathematically into a representation of the person called a template. Since every human in the world is different, each human will have different biometric images (different fingerprints, faces, etc.); this is what makes biometrics useful for identification. However, unlike a credit card number or a password, which can be given to a person and later revoked if it is compromised, a biometric is with the person for life. The problem then is to develop biometric templates which can be easily revoked and reissued, and which are also unique to the user and can readily be used for identification and verification. In this paper we develop and present a method to generate a set of templates which are fully unique to the individual and also revocable. By using basis set compression algorithms in an n-dimensional orthogonal space, we can represent a given biometric image in an infinite number of equally valued and unique ways. The verification and biometric matching system would be presented with a given template and a revocation code. The code then indicates where in the sequence of n-dimensional vectors to start the recognition.

In an automatic fingerprint identification system, an incomplete or rigid template may lead to false rejection and false matching. How to improve the quality of the template, which is called template improvement, is therefore important to automatic fingerprint identification systems. In this paper, we propose a template improvement algorithm. Based on the case-based method of machine learning and probability theory, we improve the template by deleting pseudo minutiae, restoring lost genuine minutiae, and updating minutia information such as positions and directions. A special fingerprint image database was built for this work. Experimental results on this database indicate that our method is effective and that the quality of the fingerprint template is evidently improved. Accordingly, the performance of fingerprint matching also improves steadily with continued use.

The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger, implemented in custom-designed electronics, and the High-Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a tradeoff between the complexity of the algorithms running with the available computing power, the sustainable output rate, and the selection efficiency. We present the performance of the main triggers used during the 2012 data taking, ranging from simple single-object selections to more complex algorithms combining different objects, and applying analysis-level reconstruction and selection. We discuss the optimisation of the trigger and the specific techniques to cope with the increasing LHC pile-up, reducing its impact on the physics performance.

We report a bioinspired templating technique for fabricating multifunctional optical coatings that mimic both the unique antireflective functionality of moth eyes and the superhydrophobicity of cicada wings. Subwavelength-structured fluoropolymer nipple arrays are created by a soft-lithography-like process. The utilization of fluoropolymers simultaneously enhances the antireflective performance and the hydrophobicity of the replicated films. The specular reflectivity matches the optical simulation using a thin-film multilayer model. The dependence of the resulting antireflective properties on the size and crystalline ordering of the replicated nipples has also been investigated by experiment and modeling. These biomimetic materials may find important technological application in self-cleaning antireflection coatings.

This paper presents an algorithm for constructing matched-filter template banks in an arbitrary parameter space. The method places templates at random, then removes those which are 'too close' together. The properties and optimality of stochastic template banks generated in this manner are investigated for some simple models. The effectiveness of these template banks for gravitational wave searches for binary inspiral waveforms is also examined. The properties of a stochastic template bank are then compared to the deterministically placed template banks that are currently used in gravitational wave data analysis.
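The placement strategy described above — propose points at random, keep only those not "too close" to an already-accepted template — can be sketched as follows. Euclidean distance on a unit parameter cube stands in for the actual waveform-match metric, and the parameter names are illustrative:

```python
import numpy as np

def stochastic_bank(n_seed, min_dist, dim=2, rng=None):
    """Illustrative stochastic template-bank construction: draw n_seed
    uniform random proposals in the unit parameter cube, and discard
    any proposal lying within `min_dist` of an accepted template.
    (A real gravitational-wave bank would use the match metric of the
    binary-inspiral waveforms, not Euclidean distance.)"""
    rng = rng or np.random.default_rng(0)
    bank = []
    for p in rng.uniform(size=(n_seed, dim)):
        if all(np.linalg.norm(p - q) >= min_dist for q in bank):
            bank.append(p)
    return np.array(bank)
```

The accepted set is guaranteed to have no pair closer than `min_dist`, while the random proposals fill the space with no need for an analytic lattice, which is what makes the method work in an arbitrary parameter space.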

In order to enhance the robustness of building recognition in forward-looking infrared (FLIR) images, an effective method based on a big template is proposed. A big template is a set of small templates that contains a great amount of surface-feature information. Its information content cannot be matched by any small template, and it has advantages in overcoming noise interference or incompleteness and in avoiding erroneous judgments. Firstly, a digital surface model (DSM) was utilized to make the big template, a distance transformation was applied to the big template, and the region of interest (ROI) was extracted by template matching between the big template and the contour of the real-time image. Secondly, corners were detected in the big template, a response function was defined using the gradients and phases of corners and their neighborhoods, and a similarity measure was designed based on the response function and overlap ratio; the template and real-time image were then matched accurately. Finally, a large amount of image data was used to test the performance of the algorithm, and an optimal parameter selection criterion was designed. Test results indicate that the target matching ratio of the algorithm can reach 95%; it effectively solves the problem of building recognition under conditions of noise disturbance, incompleteness, or the target being partially out of view.

High-level regions of the ventral stream exhibit strong category selectivity to stimuli such as faces, houses, or objects. However, recent studies suggest that at least part of this selectivity stems from low-level differences inherent to images of the different categories. For example, visual outdoor and indoor scenes as well as houses differ in spatial frequency, rectilinearity and obliqueness when compared to face or object images. Correspondingly, the scene-responsive parahippocampal place area (PPA) showed a strong preference for low-level properties of visual scenes even in the absence of high-level scene content. This raises the question of whether all high-level responses in PPA, the fusiform face area (FFA), or the object-responsive lateral occipital complex (LOC) may actually be explained by systematic differences in low-level features. In the present study we contrasted two classes of simple stimuli consisting of ten rectangles each. While both were matched in visual low-level features, only one class of rectangle arrangements gave rise to a percept compatible with a high-level 3D layout such as a scene or an object. We found that areas PPA, transverse occipital sulcus (TOS, also referred to as occipital place area, OPA), as well as FFA and LOC showed robust responses to the visual scene class compared to the low-level matched control. Our results suggest that visual category responsive regions are not purely driven by low-level visual features but also by the high-level perceptual stimulus interpretation. PMID:26975552

Spatial normalization reshapes an individual’s brain to match the shape and size of a template image. This is a crucial step required for group-level statistical analyses. The most popular standard templates are derived from MRI scans of young adults. We introduce specialized templates that allow normalization algorithms to be applied to stroke-aged populations. First, we developed a CT template: while this is the dominant modality for many clinical situations, there are no modern CT templates and popular algorithms fail to successfully normalize CT scans. Importantly, our template was based on healthy individuals with ages similar to what is commonly seen in stroke (mean 65 years old). This template allows studies where only CT scans are available. Second, we derived a MRI template that approximately matches the shape of our CT template as well as processing steps that aid the normalization of scans from older individuals (including lesion masking and the ability to generate high quality cortical renderings despite brain injury). The benefit of this strategy is that the resulting templates can be used in studies where mixed modalities are present. We have integrated these templates and processing algorithms into a simple SPM toolbox (http://www.mccauslandcenter.sc.edu/CRNL/tools/spm8-scripts). PMID:22440645

Perceptual learning changes the way the human visual system processes stimulus information. Previous studies have shown that the human brain's weightings of visual information (the perceptual template) become better matched to the optimal weightings. However, the dynamics of the template changes are not well understood. We used the classification image method to investigate whether visual field or stimulus properties govern the dynamics of the changes in the perceptual template. A line orientation discrimination task where highly informative parts were placed in the peripheral visual field was used to test three hypotheses: (1) The template changes are determined by the visual field structure, initially covering stimulus parts closer to the fovea and expanding toward the periphery with learning; (2) the template changes are object centered, starting from the center and expanding toward edges; and (3) the template changes are determined by stimulus information, starting from the most informative parts and expanding to less informative parts. Results show that, initially, the perceptual template contained only the more peripheral, highly informative parts. Learning expanded the template to include less informative parts, resulting in an increase in sampling efficiency. A second experiment interleaved parts with high and low signal-to-noise ratios and showed that template reweighting through learning was restricted to stimulus elements that are spatially contiguous to parts with initial high template weights. The results suggest that the informativeness of features determines how the perceptual template changes with learning. Further, the template expansion is constrained by spatial proximity. PMID:25194018

Template tracking dates back to the 1981 Lucas-Kanade algorithm. One question that has received very little attention, however, is how to update the template so that it remains a good model of the tracked object. We propose a template update algorithm that avoids the "drifting" inherent in the naive algorithm. PMID:18579941
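The drift problem and the general remedy — validate each candidate update against the first template before accepting it — can be sketched in one dimension. This is an illustrative NCC-based toy, not the Lucas-Kanade formulation used in the paper; the function names and threshold are assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-length patches."""
    a = a - a.mean(); b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / d) if d else 0.0

def best_match(signal, template):
    """Exhaustive 1D search: offset of the best-matching window."""
    m = len(template)
    scores = [ncc(signal[i:i + m], template)
              for i in range(len(signal) - m + 1)]
    return int(np.argmax(scores))

def update_template(frame, first_template, current_template, eps=0.8):
    """The naive strategy replaces the template with every newly tracked
    patch, so small errors accumulate ("drift").  Instead, accept the new
    patch only if it still correlates well with the *first* template."""
    pos = best_match(frame, current_template)
    patch = frame[pos:pos + len(current_template)]
    if ncc(patch, first_template) >= eps:
        return patch             # safe update: still anchored to frame 1
    return current_template      # reject: updating would drift
```

The key design choice is that the first template acts as a fixed anchor: updates may adapt to appearance change, but never wander away from the originally selected object.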

Most perceptual decisions require comparisons between current input and an internal template. Classic studies propose that templates are encoded in sustained activity of sensory neurons. However, stimulus encoding is itself dynamic, tracing a complex trajectory through activity space. Which part of this trajectory is pre-activated to reflect the template? Here we recorded magneto- and electroencephalography during a visual target-detection task, and used pattern analyses to decode template, stimulus, and decision-variable representation. Our findings ran counter to the dominant model of sustained pre-activation. Instead, template information emerged transiently around stimulus onset and quickly subsided. Cross-generalization between stimulus and template coding, indicating a shared neural representation, occurred only briefly. Our results are compatible with the proposal that template representation relies on a matched filter, transforming input into task-appropriate output. This proposal was consistent with a signed difference response at the perceptual decision stage, which can be explained by a simple neural model. DOI: http://dx.doi.org/10.7554/eLife.09000.001 PMID:26653854

A modular software platform for high-level applications is under development at the National Synchrotron Light Source II project. This platform is based on a client-server architecture, and the components of high-level applications on this platform will be modular and distributed, and therefore reusable. An online model server is indispensable for model-based control. Different accelerator facilities have different requirements for online simulation. To support various accelerator simulators, a set of narrow and general application programming interfaces has been developed based on Tracy-3 and Elegant. This paper describes the system architecture for the modular high-level applications, the design of the narrow and general application programming interfaces for an online model server, and a prototype of the online model server.

The national high-level waste disposal plans for France, the Federal Republic of Germany, Japan, and the United States are covered. Three conclusions are reached. The first conclusion is that an excellent technology already exists for high-level waste disposal. With appropriate packaging, spent fuel seems to be an acceptable waste form. Borosilicate glass reprocessing waste forms are well understood, in production in France, and scheduled for production in the next few years in a number of other countries. For final disposal, a number of candidate geological repository sites have been identified and several demonstration sites opened. The second conclusion is that adequate financing and a legal basis for waste disposal are in place in most countries. Costs of high-level waste disposal will probably add about 5 to 10% to the costs of nuclear electric power. The third conclusion is less optimistic.

A framework for high-level accelerator application software is being developed for the Linac Coherent Light Source (LCLS). The framework is based on plug-in technology developed by an open source project, Eclipse. Many existing functionalities provided by Eclipse are available to high-level applications written within this framework. The framework also contains static data storage configuration and dynamic data connectivity. Because the framework is Eclipse-based, it is highly compatible with any other Eclipse plug-ins. The entire infrastructure of the software framework is presented, along with planned applications and plug-ins based on the framework.

This bibliography contains information on high-level radioactive wastes included in the Department of Energy's Energy Data Base from August 1982 through December 1983. These citations are to research reports, journal articles, books, patents, theses, and conference papers from worldwide sources. Five indexes, each preceded by a brief description, are provided: Corporate Author, Personal Author, Subject, Contract Number, and Report Number. 1452 citations.

The primary objective of this study is to demonstrate a mission scenario that uses pairwise and incidental blending of high-level waste (HLW) to reduce the total mass of HLW glass. Secondary objectives include understanding how recent refinements to the tank waste inventory and solubility assumptions affect the mass of HLW glass, and how logistical constraints may affect the efficacy of HLW blending.

Modern supernova (SN) surveys are now uncovering stellar explosions at rates that far surpass what the world's spectroscopic resources can handle. In order to make full use of these SN data sets, it is necessary to use analysis methods that depend only on the survey photometry. This paper presents two methods for utilizing a set of SN light-curve templates to classify SN objects. In the first case, we present an updated version of the Bayesian Adaptive Template Matching program (BATM). To address some shortcomings of that strictly Bayesian approach, we introduce a method for Supernova Ontology with Fuzzy Templates (SOFT), which utilizes fuzzy set theory for the definition and combination of SN light-curve models. For well-sampled light curves with a modest signal-to-noise ratio (S/N >10), the SOFT method can correctly separate thermonuclear (Type Ia) SNe from core collapse SNe with >=98% accuracy. In addition, the SOFT method has the potential to classify SNe into sub-types, providing photometric identification of very rare or peculiar explosions. The accuracy and precision of the SOFT method are verified using Monte Carlo simulations as well as real SN light curves from the Sloan Digital Sky Survey and the SuperNova Legacy Survey. In a subsequent paper, the SOFT method is extended to address the problem of parameter estimation, providing estimates of redshift, distance, and host galaxy extinction without any spectroscopy.

Program understanding is a subfield of software reengineering and attempts to recognize the run-time behavior of source code. To this point, success in this area has been limited to very small code segments. An expert system, HLAR (High-Level Algorithm Recognizer), has been written in CLIPS and recognizes three sorting algorithms, selection sort, quicksort, and heapsort. This paper describes the HLAR system in general and, in depth, the CLIPS templates used for program representation and understanding.

At the Hanford Site in Richland, Washington, the path to site cleanup involves vitrification of the majority of the wastes that currently reside in large underground tanks. A Joule-heated glass melter is the equipment of choice for vitrifying the high-level fraction of these wastes. Even though this technology has general national and international acceptance, opportunities may exist to improve or change the technology to reduce the enormous cost of accomplishing the mission of site cleanup. Consequently, the U.S. Department of Energy requested the staff of the Tanks Focus Area to review immobilization technologies, waste forms, and modifications to requirements for solidification of the high-level waste fraction at Hanford to determine what aspects could affect cost reductions with reasonable long-term risk. The results of this study are summarized in this report.

A template for imprint lithography (IL) that significantly reduces template production costs by allowing the same template to be re-used for several technology generations. The template is composed of an array of spaced-apart moveable and individually addressable rods or plungers. Thus, the template can be configured to provide a desired pattern by programming the array of plungers such that certain of the plungers are in an "up" or actuated configuration. This arrangement of "up" and "down" plungers forms a pattern composed of protruding and recessed features which can then be impressed onto a polymer-film-coated substrate by applying pressure to the template, impressing the programmed configuration into the polymer film. The pattern impressed into the polymer film will be reproduced on the substrate by subsequent processing.

Structural characterization of protein-protein interactions is important for understanding life processes. Because of the inherent limitations of experimental techniques, such characterization requires computational approaches. Along with the traditional protein-protein docking (free search for a match between two proteins), comparative (template-based) modeling of protein-protein complexes has been gaining popularity. Its development puts an emphasis on full and partial structural similarity between the target protein monomers and the protein-protein complexes previously determined by experimental techniques (templates). The template-based docking relies on the quality and diversity of the template set. We present a carefully curated, non-redundant library of templates containing 4,950 full structures of binary complexes and 5,936 protein-protein interfaces extracted from the full structures at a 12 Å distance cut-off. Redundancy in the libraries was removed by clustering the PDB structures based on structural similarity. The value of the clustering threshold was determined from the analysis of the clusters and the docking performance on a benchmark set. High structural quality of the interfaces in the template and validation sets was achieved by automated procedures and manual curation. The library is included in the Dockground resource for molecular recognition studies at http://dockground.bioinformatics.ku.edu. PMID:25488330

EAP technology has the potential to be used in a wide range of applications. This poses the challenge to EAP component manufacturers to develop components for a wide variety of products. Danfoss PolyPower A/S is developing an EAP technology platform, which can form the basis for a variety of EAP technology products while keeping complexity under control. A high-level product architecture has been developed for the mechanical part of EAP transducers, as the foundation for platform development. A generic description of an EAP transducer forms the core of the high-level product architecture. This description breaks down the EAP transducer into organs that perform the functions that may be present in an EAP transducer. A physical instance of an EAP transducer contains a combination of the organs needed to fulfil the actuator, sensor, and generator functions. Alternative principles for each organ allow the function of the EAP transducers to be changed by basing the EAP transducers on a different combination of organ alternatives. A model providing an overview of the high-level product architecture has been developed to support daily development and cooperation across development teams. The platform approach has resulted in the first version of an EAP technology platform, on which multiple EAP products can be based. The contents of the platform are the result of multi-disciplinary development work at Danfoss PolyPower, as well as collaboration with potential customers and research institutions. Initial results from applying the platform to demonstrator designs for potential applications are promising. The scope of the article does not include technical details.

At the startup of the LHC, the CMS data acquisition is expected to be able to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the 'High-Level Trigger' (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report HLT performances are shown for selections based on muons, electrons, photons, jets, missing transverse energy, tau leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported, as well as relaxation criteria foreseen for a LHC startup instantaneous luminosity.

This paper will attempt to survey the current knowledge on the effects of relatively high levels of infrasound on humans. While this conference is concerned mainly with hearing, some discussion of other physiological effects is appropriate. Such discussion also serves to highlight a basic question: 'Is hearing the main concern of infrasound and low-frequency exposure, or is there a more sensitive mechanism?' It would be comforting to know that the focal point of this conference is indeed the most important concern. Therefore, besides hearing loss and the auditory threshold of infrasonic and low-frequency exposure, four other effects will be covered: performance, respiration, annoyance, and vibration.

A vitrification facility is being developed by the U.S. Department of Energy (DOE) at the West Valley Demonstration Plant (WVDP) near Buffalo, New York, where approximately 300 canisters of high-level nuclear waste glass will be produced. To assure that the produced waste form is acceptable, uncertainty must be managed. Statistical issues arise due to sampling, waste variations, processing uncertainties, and analytical variations. This paper presents elements of a strategy to characterize and manage the uncertainties associated with demonstrating that an acceptable waste form product is achieved. Specific examples are provided within the context of statistical work performed by Pacific Northwest Laboratory (PNL).

Standalone high-level applications often suffer from poor performance and reliability due to lengthy initialization, heavy computation and rapid graphical updates. Service-oriented architecture (SOA) attempts to separate initialization and computation from the applications and to distribute such work to various service providers. Heavy computation such as beam tracking will be done periodically on a dedicated server, and data will be available to client applications at all times. An industrial-standard service architecture can help to improve the performance, reliability and maintainability of the service. Robustness will also be improved by reducing the complexity of individual client applications.

Describes how a second-grade class used large templates to draw and paint five-digit numbers. The lesson integrated artistic knowledge and vocabulary with their mathematics lesson in place value. Students learned how draftspeople use templates, and they studied number paintings by Charles Demuth and Jasper Johns. (KM)

A high-level robot command language is proposed for the autonomous mode of an advanced telerobotics system, along with a predictive display mechanism for the teleoperational mode. It is believed that any such system will involve some mixture of these two modes, since, although artificial intelligence can facilitate significant autonomy, a system that can resort to teleoperation will always have the advantage. The high-level command language will allow humans to give the robot instructions in a very natural manner. The robot will then analyze these instructions to infer meaning so that it can translate the task into lower-level executable primitives. If, however, the robot is unable to perform the task autonomously, it will switch to the teleoperational mode. The time delay between control movement and actual robot movement has always been a problem in teleoperation. The remote operator may not actually see (via a monitor) the results of their actions for several seconds. A computer-generated predictive display system is proposed whereby the operator can see a real-time model of the robot's environment and the delayed video picture on the monitor at the same time.

Tc contamination is found within the DOE complex at those sites whose mission involved extraction of plutonium from irradiated uranium fuel or isotopic enrichment of uranium. At the Hanford Site, chemical separations and extraction processes generated large amounts of high-level and transuranic wastes that are currently stored in underground High-Level Waste (HLW) tanks. However, the chemistry of the HLW in any given tank is greatly complicated by repeated efforts to reduce volume and recover isotopes. These processes ultimately resulted in the mixing of waste streams from different processes. As a result, the chemistry and the fate of Tc in HLW tanks are not well understood. This lack of understanding has been made evident in the failed efforts to leach Tc from sludge and to remove Tc from supernatants prior to immobilization. Although recent interest in Tc chemistry has shifted from pretreatment chemistry to waste residuals, both needs are served by a fundamental understanding of Tc chemistry.

The major achievement of this semiannual period was the significant revision and extension of the Recursive Auto-Associative Memory (RAAM) work for publication in the journal Artificial Intelligence. The article, included as an appendix to this report, contains several new elements: (1) Background - The work was more clearly set into the area of recursive distributed representations, machine learning, and the adequacy of the connectionist approach for high-level cognitive modeling; (2) New Experiment - RAAM was applied to finding compact representations for sequences of letters; (3) Analysis - The developed representations were analyzed as features which range from categorical to distinctive. Categorical features distinguish between conceptual categories, while distinctive features vary within categories and discriminate or label the members. The representations were also analyzed geometrically; and (4) Applications - Feasibility studies were performed and described on inference by association, and on using RAAM-generated patterns along with cascaded networks for natural language parsing. Both of these remain long-term goals of the project.

This report describes Umbra's High-Level Architecture (HLA) library. This library serves as an interface to the Defense Modeling and Simulation Office's (DMSO) Run Time Infrastructure Next Generation Version 1.3 (RTI NG 1.3) software library and enables Umbra-based models to be federated into HLA environments. The Umbra library was built to enable the modeling of robots for military and security system concept evaluation. A first application provides component technologies that ideally fit the US Army JPSD's Joint Virtual Battlespace (JVB) simulation framework for Objective Force concept analysis. In addition to describing the Umbra HLA library, the report describes general issues of integrating Umbra with RTI code and outlines ways of building models to support particular HLA simulation frameworks like the JVB.

Airway epithelial cells act as a physical barrier against environmental toxins and injury, and modulate inflammation and the immune response. As such, maintenance of their integrity is critical. Evidence is accumulating to suggest that exercise can cause injury to the airway epithelium. This seems the case particularly for competitive athletes performing high-level exercise, or when exercise takes place in extreme environmental conditions such as in cold dry air or in polluted air. Dehydration of the small airways and increased forces exerted on to the airway surface during severe hyperpnoea are thought to be key factors in determining the occurrence of injury of the airway epithelium. The injury-repair process of the airway epithelium may contribute to the development of the bronchial hyper-responsiveness that is documented in many elite athletes. PMID:22247295

The European Southern Observatory (ESO) provides pipelines to reduce data for most of the instruments at its Very Large Telescope (VLT). These pipelines are written as part of the development of VLT instruments, and are used both in ESO's operational environment and by science users who receive VLT data. All the pipelines are highly instrument-specific. However, experience has shown that the independently developed pipelines include significant overlap, duplication, and slight variations of similar algorithms. In order to reduce the cost of development, verification and maintenance of ESO pipelines, and at the same time improve the scientific quality of pipeline data products, ESO decided to develop a limited set of versatile high-level scientific functions that are to be used in all future pipelines. The routines are provided by the High-level Data Reduction Library (HDRL). To reach this goal, we first compare several candidate algorithms and verify them during a prototype phase using data sets from several instruments. Once the best algorithm and error model have been chosen, we start a design and implementation phase. The coding of HDRL is done in plain C using the Common Pipeline Library (CPL) functionality. HDRL adopts consistent function naming conventions and a well-defined API to minimise future maintenance costs, implements error propagation, uses pixel quality information, employs OpenMP to take advantage of multi-core processors, and is verified with extensive unit and regression tests. This poster describes the status of the project and the lessons learned during the development of reusable code implementing algorithms of high scientific quality.

Similar to the traditional CMOS circuit design flow, the quantum circuit design flow is divided into two main processes: logic synthesis and physical design. To address the limitations imposed on optimization of quantum circuit metrics by the lack of information sharing between the logic synthesis and physical design processes, the concept of "physical synthesis" was introduced for the quantum circuit flow, and a few techniques were proposed for it. Following that concept, this paper proposes a new approach for physical synthesis, inspired by the template-matching idea in quantum logic synthesis, to improve the latency of quantum circuits. Experiments show that by using template matching as a physical synthesis approach, the latency of quantum circuits can be improved by more than 23.55% on average.

Given a pattern p over an alphabet Σp and a text t over an alphabet Σt, we consider the problem of determining a mapping f from Σp to Σt+ such that t = f(p1)f(p2)...f(pm). This class of problems, which was first introduced by Amir and Nor in 2004, is defined by different constraints on the mapping f. We give NP-completeness results for a wide range of conditions. These include when f is either many-to-one or one-to-one, when Σt is binary, and when the range of f is limited to strings of constant length. We then introduce a related problem we term pattern matching with string classes, which we show to be solvable efficiently. Finally, we discuss an optimisation variant of generalised matching and give a polynomial-time min(1, sqrt(k/OPT))-approximation algorithm for fixed k.
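As a concrete illustration of the problem definition above, a naive backtracking search for such a mapping f can be sketched as follows. It is exponential in the worst case, consistent with the NP-completeness results; the bound on substring length and the one-to-one flag are illustrative parameters, not the paper's notation.

```python
def find_mapping(pattern, text, max_len=4, one_to_one=True):
    """Brute-force search for a mapping f from pattern symbols to
    non-empty substrings of the text such that
    text = f(p1)f(p2)...f(pm). Returns the mapping dict or None."""

    def backtrack(assign, pos, i):
        if i == len(pattern):
            # Succeed only if the whole text is consumed
            return dict(assign) if pos == len(text) else None
        c = pattern[i]
        if c in assign:
            # Symbol already mapped: the text must continue with its image
            s = assign[c]
            if text.startswith(s, pos):
                return backtrack(assign, pos + len(s), i + 1)
            return None
        # Try every candidate image up to max_len characters
        for length in range(1, max_len + 1):
            s = text[pos:pos + length]
            if len(s) < length:
                break
            if one_to_one and s in assign.values():
                continue  # injective constraint: images must be distinct
            assign[c] = s
            result = backtrack(assign, pos + length, i + 1)
            if result is not None:
                return result
            del assign[c]
        return None

    return backtrack({}, 0, 0)
```

For example, `find_mapping("aba", "xyzxy")` maps a to "xy" and b to "z", since "xy" + "z" + "xy" reassembles the text.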

Plant viruses are considered as nanobuilding blocks that can be used as synthons or templates for novel materials. Cowpea mosaic virus (CPMV) particles have been shown to template the fabrication of metallic nanoparticles by an electroless deposition metallization process. Palladium ions were electrostatically bound to the virus capsid and, when reduced, acted as nucleation sites for the subsequent metal deposition from solution. The method, although simple, produced highly monodisperse metallic nanoparticles with a diameter of ca. ≤35 nm. CPMV-templated particles were prepared with cobalt, nickel, iron, platinum, cobalt-platinum and nickel-iron. PMID:20877898

There is a long-standing, sometimes contentious debate in AI concerning the relative merits of a symbolic, top-down approach vs. a neural, bottom-up approach to engineering intelligent machine behaviors. While neurocomputational methods excel at lower-level cognitive tasks (incremental learning for pattern classification, low-level sensorimotor control, fault tolerance and processing of noisy data, etc.), they are largely non-competitive with top-down symbolic methods for tasks involving high-level cognitive problem solving (goal-directed reasoning, metacognition, planning, etc.). Here we take a step towards addressing this limitation by developing a purely neural framework named galis. Our goal in this work is to integrate top-down (non-symbolic) control of a neural network system with more traditional bottom-up neural computations. galis is based on attractor networks that can be "programmed" with temporal sequences of hand-crafted instructions that control problem solving by gating the activity retention of, communication between, and learning done by other neural networks. We demonstrate the effectiveness of this approach by showing that it can be applied successfully to solve sequential card matching problems, using both human performance and a top-down symbolic algorithm as experimental controls. Solving this kind of problem makes use of top-down attention control and the binding together of visual features in ways that are easy for symbolic AI systems but not for neural networks to achieve. Our model can not only be instructed on how to solve card matching problems successfully, but its performance also qualitatively (and sometimes quantitatively) matches the performance of both the human subjects that we had perform the same task and the top-down symbolic algorithm that we used as an experimental control. We conclude that the core principles underlying the galis framework provide a promising approach to engineering purely neurocomputational systems for problem solving.

Based on an analysis of the traditional template-matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional image information into one dimension and then matches and identifies targets through one-dimensional correlation. Because the projections are normalized, correct matching is preserved even when the image brightness or signal amplitude scales proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while preserving matching accuracy.
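A minimal sketch of the projection idea described above, assuming row/column sum projections and normalized 1-D correlation; the paper's exact projection and normalization details may differ.

```python
import numpy as np

def row_col_projections(img):
    # Collapse the 2-D window into two 1-D signals (row and column sums)
    return img.sum(axis=1).astype(float), img.sum(axis=0).astype(float)

def ncc(a, b):
    # Normalized correlation: invariant to proportional amplitude scaling
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def match_by_projection(image, template):
    """Slide the template over the image, comparing 1-D projections
    instead of full 2-D patches. Returns (best position, best score)."""
    th, tw = template.shape
    tr, tc = row_col_projections(template)
    best, best_pos = -2.0, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            pr, pc = row_col_projections(image[y:y + th, x:x + tw])
            score = 0.5 * (ncc(pr, tr) + ncc(pc, tc))
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

Because each window comparison costs O(h + w) after projection instead of O(h·w), the correlation step is much cheaper, which is the source of the claimed speed-up.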

Multinomial pattern matching (MPM) is an automatic target recognition algorithm developed at Sandia National Laboratories specifically for radar data. The algorithm belongs to a family of algorithms that first quantize pixel values into Nq bins based on pixel amplitude before training and classification. This quantization step reduces the sensitivity of algorithm performance to absolute intensity variation in the data, which is typical of radar data, where signatures exhibit high variation for even small changes in aspect angle. Our previous work focused on performance analysis of peaky template matching, a special case of MPM in which binary quantization is used (Nq = 2). Unfortunately, references on these algorithms are generally difficult to locate, so here we revisit the MPM algorithm and illustrate the underlying statistical model and decision rules for two algorithm interpretations: the 1-of-K vector form and the scalar form. MPM can also be used as a detector, and specific attention is given to algorithm tuning, where "peak pixels" are chosen based on their underlying empirical probabilities according to a reward-minimization strategy aimed at reducing false alarms in the detection scenario and false positives in a classification capacity. The algorithms are demonstrated using Monte Carlo simulations on the AFRL civilian vehicle dataset for a variety of choices of Nq.
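The amplitude-quantization step, and the binary (Nq = 2) "peaky" special case, can be sketched as follows. The quantile thresholds and the boolean peak mask are simplifying assumptions standing in for MPM's trained per-pixel probabilities.

```python
import numpy as np

def quantize(img, nq):
    """Quantize pixel amplitudes into nq bins using equal-probability
    (quantile) thresholds. Because thresholds adapt to the data, the
    result is insensitive to proportional intensity scaling."""
    edges = np.quantile(img, np.linspace(0, 1, nq + 1)[1:-1])
    return np.digitize(img, edges)  # bin indices 0 .. nq-1

def peaky_score(chip, peak_mask):
    """Binary (Nq = 2) special case: count how many designated 'peak
    pixels' land in the top quantization bin of the image chip.
    peak_mask is a boolean template of expected peak locations."""
    binary = quantize(chip, 2).astype(bool)
    return int(np.logical_and(binary, peak_mask).sum())
```

The scale invariance is the point: multiplying the chip by a constant leaves the quantized image, and hence the score, unchanged.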

The presentation covers the standard Terms and Conditions, from reporting, to Human Subject research, to publication disclaimers, and offers some resources to find helpful information. Some slides are intended as a template, where project officers can enter specific information (...

Device installs plugs and then drills them after sandwich face sheets are in place. A template guides the drill bit into the center of each concealed plug, saving considerable time and enabling weight reduction through the use of smaller plugs.

The CMS experiment has been designed with a two-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High-Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level that is challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012 and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data-taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better-performing muon triggers. We will also present the performance of the improved tracking and vertexing algorithms, discussing their impact on the b-tagging performance as well as on the jet and missing energy reconstruction.

The two-level trigger system employed by CMS consists of the Level 1 (L1) Trigger, which is implemented using custom-built electronics, and the High-Level Trigger (HLT), a farm of commercial CPUs running a streamlined version of the offline CMS reconstruction software. The operational L1 output rate of 100 kHz, together with the number of CPUs in the HLT farm, imposes a fundamental constraint on the amount of time available for the HLT to process events. Exceeding this limit impacts the experiment's ability to collect data efficiently. Hence, there is a critical need to characterize the performance of the HLT farm as well as the algorithms run prior to startup in order to ensure optimal data taking. Additional complications arise from the fact that the HLT farm consists of multiple generations of hardware and there can be subtleties in machine performance. We present our methods of measuring the timing performance of the CMS HLT, including the challenges of making such measurements. Results for the performance of various Intel Xeon architectures from 2009 to 2014 and different data-taking scenarios are also presented.

A high-level RF system (HLRF) consisting of power amplifiers (PAs) and ferrite-loaded cavities is being designed and built by Brookhaven National Laboratory (BNL) for the Spallation Neutron Source (SNS) project. It is a fixed-frequency, two-harmonic system whose main function is to maintain a gap for the kicker rise time. Three cavities running at the fundamental harmonic (h=1) will provide 40 kV and one cavity at the second harmonic (h=2) will provide 20 kV. Each cavity has two gaps with a design voltage of 10 kV per gap and will be driven by a power amplifier (PA) directly adjacent to it. The PA uses a 600 kW tetrode to provide the necessary drive current. The anode of the tetrode is magnetically coupled to the downstream cell of the cavity. Drive to the PA will be provided by a wide-band, solid-state amplifier located remotely. A dynamic tuning scheme will be implemented to help compensate for the effect of beam loading.

This report presents evaluations of several methods for the in-process decontamination of metallic canisters containing any one of a number of solidified high-level waste (HLW) forms. The use of steam-water, steam, abrasive blasting, electropolishing, liquid honing, vibratory finishing and soaking have been tested or evaluated as potential techniques to decontaminate the outer surfaces of HLW canisters. Either these techniques have been tested or available literature has been examined to assess their applicability to the decontamination of HLW canisters. Electropolishing has been found to be the most thorough method to remove radionuclides and other foreign material that may be deposited on or in the outer surface of a canister during any of the HLW processes. Steam or steam-water spraying techniques may be adequate for some applications but fail to remove all contaminated forms that could be present in some of the HLW processes. Liquid honing and abrasive blasting remove contamination and foreign material very quickly and effectively from small areas and components although these blasting techniques tend to disperse the material removed from the cleaned surfaces. Vibratory finishing is very capable of removing the bulk of contamination and foreign matter from a variety of materials. However, special vibratory finishing equipment would have to be designed and adapted for a remote process. Soaking techniques take long periods of time and may not remove all of the smearable contamination. If soaking involves pickling baths that use corrosive agents, these agents may cause erosion of grain boundaries that results in rough surfaces.

Plant viruses are considered as nanobuilding blocks that can be used as synthons or templates for novel materials. Cowpea mosaic virus (CPMV) particles have been shown to template the fabrication of metallic nanoparticles by an electroless deposition metallization process. Palladium ions were electrostatically bound to the virus capsid and, when reduced, acted as nucleation sites for the subsequent metal deposition from solution. The method, although simple, produced highly monodisperse metallic nanoparticles with a diameter of ca. ≤35 nm. CPMV-templated particles were prepared with cobalt, nickel, iron, platinum, cobalt-platinum and nickel-iron. Electronic supplementary information (ESI) available: Additional experimental detail, agarose gel electrophoresis results, energy dispersive X-ray spectra, ζ-potential measurements, dynamic light scattering data, nanoparticle tracking analysis and an atomic force microscopy image of Ni-CPMV. See DOI: 10.1039/c0nr00525h

This paper proposes a novel seal extraction method using template matching, based on the characteristics of the external contour of the seal image in Chinese Painting and Calligraphy. By analyzing the characteristics of the seal edge, we obtain prior knowledge of the seal edge and set up an outline template of the seals; we then design a template-matching method that computes the distance difference between the outline template and the seal image edge, which can extract the seal image from Chinese Painting and Calligraphy effectively. Experimental results show that this method achieves a higher extraction rate than traditional image extraction methods.

Biometrics is the most rapidly emerging technology for automatic people authentication; nevertheless, severe concerns have been raised about the security of such systems and users' privacy. In the case of malicious attacks on one or more components of the authentication system, stolen biometric features cannot be replaced. This paper focuses on securing the enrollment database and the communication channel between that database and the matcher. In particular, a method is developed to protect the stored biometric templates by adapting the fuzzy commitment scheme to iris biometrics, exploiting error correction codes tailored to template discriminability. This method allows template renewability for iris-based authentication and guarantees high security by performing the match in the encrypted domain.
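A toy sketch of the fuzzy commitment scheme the abstract builds on, using a simple repetition code in place of the error correction codes tailored to iris template discriminability (real deployments use stronger codes such as BCH or Reed-Solomon):

```python
import hashlib

def rep_encode(bits, r=3):
    # Repetition code: each key bit repeated r times (toy ECC)
    return [b for bit in bits for b in [bit] * r]

def rep_decode(bits, r=3):
    # Majority vote over each r-bit group
    return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def commit(template_bits, key_bits, r=3):
    """Fuzzy commitment: store only hash(key) and the offset
    (codeword XOR template); the raw biometric is never stored."""
    codeword = rep_encode(key_bits, r)
    offset = xor(codeword, template_bits)
    digest = hashlib.sha256(bytes(key_bits)).hexdigest()
    return offset, digest

def verify(probe_bits, offset, digest, r=3):
    """A fresh (noisy) probe recovers the key iff its bit errors
    stay within the code's correction capability."""
    recovered = rep_decode(xor(offset, probe_bits), r)
    return hashlib.sha256(bytes(recovered)).hexdigest() == digest
```

The match effectively happens in the protected domain: the verifier only ever compares hashes, and a compromised database reveals neither the key nor the template directly.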

To detect mechanical failures of fans, a new diagnostic method based on symmetrized dot pattern (SDP) analysis and image matching is proposed. Vibration signals of 13 kinds of running states are acquired on a centrifugal fan test bed and reconstructed by the SDP technique. SDP pattern templates of each running state are established, and an image matching method is performed to diagnose the fault. In order to improve the diagnostic accuracy, single templates, multiple templates and clustering fault templates are used to perform the image matching.
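A minimal sketch of the SDP reconstruction step, assuming the common formulation with a normalized radius taken from x(i) and a mirrored angle offset taken from x(i+lag); the gain, lag, and number of symmetry arms shown here are illustrative, not the paper's settings.

```python
import numpy as np

def sdp_points(x, lag=1, g=36.0, arms=6):
    """Symmetrized dot pattern: map a 1-D vibration signal to polar
    dots mirrored about `arms` symmetry axes. g is the angular gain
    in degrees; lag is the delay between radius and angle samples."""
    x = np.asarray(x, float)
    xmin, xmax = x.min(), x.max()
    r = (x[:-lag] - xmin) / (xmax - xmin)        # radius from x[i]
    a = (x[lag:] - xmin) / (xmax - xmin) * g     # angle offset from x[i+lag]
    pts = []
    for k in range(arms):
        base = 360.0 * k / arms
        for theta in (base + a, base - a):        # mirror pair per arm
            pts.append(np.column_stack([r * np.cos(np.radians(theta)),
                                        r * np.sin(np.radians(theta))]))
    return np.vstack(pts)
```

Different running states produce visually distinct petal-shaped dot patterns, which is what makes the subsequent image matching against state templates possible.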

The purpose of this Analysis/Model Report (AMR) is to document the analyses that were done to develop models for radionuclide release from high-level waste (HLW) glass dissolution that can be integrated into performance assessment (PA) calculations conducted to support site recommendation and license application for the Yucca Mountain site. This report was developed in accordance with the ''Technical Work Plan for Waste Form Degradation Process Model Report for SR'' (CRWMS M&O 2000a). It specifically addresses the item ''Defense High-Level Waste Glass Degradation'' of the product technical work plan. The AP-3.15Q Attachment 1 screening criteria determine the importance for its intended use of the HLW glass model derived herein to be in the category ''Other Factors for the Postclosure Safety Case - Waste Form Performance'', and thus indicate that this factor does not contribute significantly to the postclosure safety strategy. Because the release of radionuclides from the glass will depend on the prior dissolution of the glass, the dissolution rate of the glass imposes an upper bound on the radionuclide release rate. The approach taken to provide a bound for the radionuclide release is to develop models that can be used to calculate the dissolution rate of waste glass when contacted by water in the disposal site. The release rate of a particular radionuclide can then be calculated by multiplying the glass dissolution rate by the mass fraction of that radionuclide in the glass and by the surface area of glass contacted by water. The scope includes consideration of the three modes by which water may contact waste glass in the disposal system: contact by humid air, dripping water, and immersion. The models for glass dissolution under these contact modes are all based on the rate expression for aqueous dissolution of borosilicate glasses. The mechanism and rate expression for aqueous dissolution are adequately understood; the analyses in this AMR were conducted to
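The bounding release-rate calculation described above is a simple product of three factors; a sketch with illustrative values (not values from the report):

```python
def release_rate(dissolution_rate, mass_fraction, wetted_area):
    """Bounding radionuclide release rate: glass dissolution rate
    (g/m^2/yr) times the radionuclide's mass fraction in the glass
    times the glass surface area contacted by water (m^2)."""
    return dissolution_rate * mass_fraction * wetted_area

# Hypothetical numbers: 0.1 g/m^2/yr dissolution rate, a mass
# fraction of 1e-4, and 50 m^2 of wetted glass surface.
rate = release_rate(0.1, 1e-4, 50.0)  # grams of radionuclide per year
```

Because the radionuclide cannot leave the glass faster than the glass itself dissolves, this product is an upper bound rather than a prediction of the actual release.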

One of the main tracking detectors of the forthcoming ALICE experiment at the LHC is a cylindrical Time Projection Chamber (TPC) with an expected data volume of about 75 MByte per event. This data volume, in combination with the presumed maximum bandwidth of 1.2 GByte/s to the mass storage system, would limit the maximum event rate to 20 Hz. In order to achieve higher event rates, online data processing has to be applied. This implies either the detection and read-out of only those events which contain interesting physical signatures, or an efficient compression of the data by modeling techniques. In order to cope with the anticipated data rate, massive parallel computing power is required. It will be provided in the form of a clustered farm of SMP nodes, based on off-the-shelf PCs, which are connected by a high-bandwidth, low-overhead network. This High-Level Trigger (HLT) will be able to process a data rate of 25 GByte/s online. The front-end electronics of the individual sub-detectors is connected to the HLT via an optical link and a custom PCI card which is mounted in the clustered PCs. The PCI card is equipped with an FPGA necessary for the implementation of the PCI-bus protocol. Therefore, this FPGA can also be used to assist the host processor with first-level processing. The first-level processing done on the FPGA includes conventional cluster finding for low-multiplicity events and local track finding based on the Hough transformation of the raw data for high-multiplicity events. PACS: 07.05.-t Computers in experimental physics - 07.05.Hd Data acquisition: hardware and software - 29.85.+c Computer data analysis

Matched-field processing is a new technique for processing ocean acoustic data measured by an array of hydrophones. It produces estimates of the location of sources of acoustic energy. This method differs from source localization techniques in other disciplines in that it uses the complex underwater acoustic environment to improve the accuracy of the source localization. An unexplored problem in matched-field processing has been how to separate multiple sources within a matched-field ambiguity function. Underwater acoustic processing is one of many disciplines where a synthesis of computer graphics and image processing is producing new insight. The benefits of different volume visualization algorithms for matched-field display are discussed. The authors show how this led to a template-matching scheme for identifying a source within the matched-field ambiguity function that can help move toward an automated source localization process.

Despite lacrosse being one of the fastest growing team sports in the world, there is a paucity of information detailing the activity profile of high-level players. Microtechnology systems (global positioning systems and accelerometers) provide the opportunity to obtain detailed information on the activity profile in lacrosse. Therefore, this study aimed to analyze the activity profile of lacrosse match-play using microtechnology. Activity profile variables assessed relative to minutes of playing time included relative distance (m·min(-1)), distance covered standing (0-0.1 m·s(-1)), walking (0.2-1.7 m·s(-1)), jogging (1.8-3.2 m·s(-1)), running (3.3-5.6 m·s(-1)), sprinting (≥5.7 m·s(-1)), the number of high, moderate, and low accelerations and decelerations, and player load (PL per minute), calculated as the square root of the sum of the squared instantaneous rate of change in acceleration in 3 vectors (medio-lateral, anterior-posterior, and vertical). Activity was recorded from 14 lacrosse players over 4 matches during a national tournament. Players were separated into the positions of attack, midfield, or defense. Differences (effect size [ES] ± 90% confidence interval) between positions and periods of play were considered likely positive when there was ≥75% likelihood of the difference exceeding an ES threshold of 0.2. Midfielders likely covered more meters per minute (mean ± SD: 100 ± 11) compared with attackers (87 ± 14; ES = 0.89 ± 1.04) and defenders (79 ± 14; ES = 1.54 ± 0.94) and performed more moderate and high accelerations and decelerations. Almost all variables across positions were reduced in quarter 4 compared with quarter 1. Coaches should account for positional differences when preparing lacrosse players for competition. PMID:25264672
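The player-load definition quoted in the abstract can be sketched as follows; the sampling interval and the absence of any vendor-specific scaling factor are assumptions, not details from the study.

```python
import math

def player_load(ax, ay, az, dt=0.01):
    """Accumulated player load from triaxial accelerometer traces:
    at each sample, take the square root of the sum of the squared
    instantaneous rates of change of acceleration in the three
    vectors (medio-lateral, anterior-posterior, vertical), then
    sum over the recording."""
    total = 0.0
    for i in range(1, len(ax)):
        dx = (ax[i] - ax[i - 1]) / dt
        dy = (ay[i] - ay[i - 1]) / dt
        dz = (az[i] - az[i - 1]) / dt
        total += math.sqrt(dx * dx + dy * dy + dz * dz)
    return total
```

Dividing the accumulated total by playing time in minutes gives the PL-per-minute variable reported in the study.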

Cholesteric blue phases of a chiral liquid crystal are interesting examples of self-organised three-dimensional nanostructures formed by soft matter. Recently it was demonstrated that a polymer matrix introduced by photopolymerization inside a bulk blue phase not only stabilises the host blue phase significantly, but also serves as a template for blue phase ordering. We show with numerical modelling that the transfer of the orientational order of the blue phase to the surfaces of the polymer matrix, together with the resulting surface anchoring, can account for the templating behaviour of the polymer matrix inducing the blue phase ordering of an achiral nematic liquid crystal. Furthermore, tailoring the anchoring conditions of the polymer matrix surfaces can bring about orientational ordering different from those of bulk blue phases, including an intertwined complex of the polymer matrix and topological line defects of orientational order. Optical Kerr response of templated blue phases is explored, finding large Kerr constants in the range of K = 2-10 × 10(-9) m V(-2) and notable dependence on the surface anchoring strength. More generally, the presented numerical approach is aimed to clarify the role and actions of templating polymer matrices in complex chiral nematic fluids, and further to help design novel template-based materials from chiral liquid crystals. PMID:26412643

We present a novel approach for fabricating celloidosomes, which represent a hollow, spherical, three-dimensional self-assembly of living cells encapsulating an aqueous core. Glass-capillary microfluidics is used to generate monodisperse water-in-oil-in-water double emulsion templates using lipids as stabilizers. Such templates allow for obtaining single as well as double concentric celloidosomes. In addition, after a solvent removal step the double emulsion templates turn into monodisperse lipid vesicles, whose membrane spontaneously phase-separates when the adequate lipid composition is chosen, providing the adequate scaffold for fabricating Janus celloidosomes. These structures may find applications in the development of bioreactors in which the synergistic effects of two different types of cells selectively adsorbed on one of the vesicle hemispheres may be exploited.

Molecularly imprinted polymers can be created by crosslinking polymers in the presence of molecular templates. If the pores generated after removing the templates have almost the same size and shape as the template, the material has the potential to be used for separation, biosensor and drug delivery applications. In this work, micelles were used as the template, as they can be easily removed from the hydrogel and a range of structures is accessible by combining a (linear) polyelectrolyte and an oppositely charged surfactant. Poly(2-(dimethylamino)ethyl methacrylate) (PDMAEMA) was synthesized and quaternized using methyl iodide. We performed small-angle neutron scattering (SANS) on solutions and hydrogels of PDMAEMA with sodium dodecyl sulfate (SDS) under different contrast matching conditions. A structured hydrogel was then formed by chemically crosslinking the semi-dilute PDMAEMA solution containing SDS. It was confirmed that spherical micelle-like structures were associated along the polymer chain in a bead-and-necklace structure, consistent with what has been observed in the (uncharged) poly(ethylene oxide)/SDS system. Furthermore, it was shown that the interaction between PDMAEMA and the micelles is strong enough to maintain the nanoscale structure formed along the PDMAEMA chain even after crosslinking, leading to a structured hydrogel.

To what extent do we have shared or unique visual experiences? This paper examines how the answer to this question is constrained by known processes of visual adaptation. Adaptation constantly recalibrates visual sensitivity so that our vision is matched to the stimuli that we are currently exposed to. These processes normalize perception not only to low-level features in the image, but to high-level, biologically relevant properties of the visual world. They can therefore strongly impact many natural perceptual judgments. To the extent that observers are exposed to and thus adapted by a different environment, their vision will be normalized in different ways and their subjective visual experience will differ. These differences are illustrated by considering how adaptation can influence human face perception. To the extent that observers are exposed and adapted to common properties in the environment, their vision will be adjusted toward common states, and in this respect they will have a common visual experience. This is illustrated by reviewing the effects of adaptation on the perception of image blur. In either case, it is the similarities or differences in the stimuli - and not the intrinsic similarities or differences in the observers - which determine the relative states of adaptation. Thus at least some aspects of our private internal experience are controlled by external factors that are accessible to objective measurement.

A template levelling system is described for levelling a template on piles implanted in the floor of a body of water. The template has receptacles, comprising in combination: a pile receptacle carried in each template receptacle on a gimbal, the pile receptacle having a lower flange extending outwardly from and below the template receptacle; slip means located within each of the pile receptacles for gripping one of the piles to prevent downward movement of the template with respect to the piles; and hydraulic jack means for gripping the pile receptacle and pulling it and the slip means upwardly, causing the flange to contact the lower side of the template receptacle to lift the template to a level position.

α- and β-Cyclodextrins have been used as scaffolds for the synthesis of six- and seven-legged templates by functionalizing every primary CH2OH with a 4-pyridyl moiety. Although these templates are flexible, they are very effective for directing the synthesis of macrocyclic porphyrin oligomers consisting of six or seven porphyrin units. The transfer of chirality from the cyclodextrin templates to their nanoring hosts is evident from NMR and circular dichroism spectroscopy. Surprisingly, the mean effective molarity for binding the flexible α-cyclodextrin-based template within the six-porphyrin nanoring (74 M) is almost as high as for the previously studied rigid hexadentate template (180 M). The discovery that flexible templates are effective in this system, and the availability of a template with a prime number of binding sites, open up many possibilities for the template-directed synthesis of larger macrocycles. PMID:24916813

Image matching is a common procedure in computer vision. Usually the size of the image template is fixed. If the matching is done repeatedly, as in stereo vision, object tracking, and strain measurement, it is beneficial, in terms of computational cost, to use templates as small as possible. On the other hand, larger templates usually give more reliable matches, unless projective distortions become too great. If the template size is controlled locally and dynamically, both computational efficiency and reliability can be achieved simultaneously. Adaptive template size requires, though, that a larger template can be sampled at any time. This paper introduces a method to adaptively control the template size in a digital image correlation based strain measurement algorithm. The control inputs are measures of confidence of match. Some new measures are proposed in this paper, and those found in the literature are reviewed. The measures of confidence are tested and compared with each other as well as with a reference method using templates of fixed size. The comparison is done with respect to the computational complexity and accuracy of the algorithm. Due to complex interactions of the free parameters of the algorithm, random search is used to find an optimal parameter combination to attain a more reliable comparison. The results show that with some confidence measures the dynamic scheme outperforms the static reference method. However, in order to benefit from the dynamic scheme, optimization of the parameters is needed.
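
The idea of growing the template until the match is confident enough can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the confidence measure (gap between the best and second-best correlation peak), the ZNCC matcher, the growth schedule, and all thresholds are assumptions chosen for the sketch, and the image is synthetic.

```python
import numpy as np

def zncc(template, image):
    """Zero-normalized cross-correlation of a template against every
    valid window of an image (brute force, for clarity)."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    out = np.full((image.shape[0] - th + 1, image.shape[1] - tw + 1), -1.0)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = image[i:i + th, j:j + tw]
            wc = w - w.mean()
            wn = np.sqrt((wc * wc).sum())
            if wn > 0 and tn > 0:
                out[i, j] = (t * wc).sum() / (tn * wn)
    return out

def adaptive_match(image, center, half=2, half_max=8, conf_min=0.2):
    """Grow the template around `center` until the match is confident;
    confidence here is best minus second-best correlation (peak distinctness)."""
    r, c = center
    while half <= half_max:
        template = image[r - half:r + half + 1, c - half:c + half + 1]
        scores = zncc(template, image)
        flat = np.sort(scores.ravel())
        if flat[-1] - flat[-2] >= conf_min:   # peak distinct enough: accept
            best = np.unravel_index(scores.argmax(), scores.shape)
            return (best[0] + half, best[1] + half), 2 * half + 1
        half += 1                              # ambiguous match: enlarge template
    return None, None

rng = np.random.default_rng(0)
img = rng.normal(size=(40, 40))
loc, size = adaptive_match(img, (20, 20))   # template cut from img itself
```

On random noise the exact cut-out always correlates perfectly with its own location, so the loop stops at the smallest template whose peak is sufficiently distinct.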

Template-guided recombination (TGR) is a model for the rearrangement of genomic DNA that takes place in some ciliated protozoa. Originally proposed as a formal model, TGR has been investigated both as a realistic model for genome rearrangement in ciliates and, due to interest in the potential of ciliates as “in vivo computers”, in terms of its computational power. TGR was put forward as a biological hypothesis that certain types of DNA rearrangements in ciliates are primarily controlled by a process of template-matching, where new genes are generated by using old genes as templates. Most significantly, it has recently been experimentally established that gene rearrangement in the stichotrichous ciliate Oxytricha trifallax (Sterkiella histriomuscorum) proceeds in a template-guided fashion. This survey describes recent work on TGR as a biological process and the computational properties of the formal model of TGR.

Human action recognition is an active research topic in computer vision and pattern recognition, with wide real-world application. We propose an approach to human activity analysis based on the motion energy template (MET), a new high-level representation of video. The main idea of the MET model is that human actions can be expressed as the composition of motion energy acquired in a three-dimensional (3-D) space-time volume using a filter bank. The motion energies are computed directly from raw video sequences, so problems such as object localization and segmentation are avoided entirely. Another important merit of the MET method is its insensitivity to gender, hair, and clothing. We extract MET features by using the Bhattacharyya coefficient to measure the motion energy similarity between the action template video and the test video, followed by 3-D max-pooling. Using these features as input to a support vector machine, extensive experiments on two benchmark datasets, Weizmann and KTH, were carried out. Compared with other state-of-the-art approaches, such as variation energy image, dynamic templates, and local motion pattern descriptors, the experimental results demonstrate that our MET model is competitive and promising.
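
The Bhattacharyya coefficient used here to compare motion-energy distributions is simple to state: for discrete distributions it is the sum of sqrt(p_i * q_i), reaching 1 for identical distributions and 0 for disjoint ones. A minimal sketch on ordinary histograms (the MET feature extraction itself is not reproduced here):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two discrete distributions:
    BC(p, q) = sum_i sqrt(p_i * q_i); 1 for identical, 0 for disjoint."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / p.sum()          # normalize to probability distributions
    q = q / q.sum()
    return float(np.sqrt(p * q).sum())

same = bhattacharyya([4, 3, 2, 1], [4, 3, 2, 1])   # identical histograms
disjoint = bhattacharyya([1, 0], [0, 1])            # no overlapping support
```

In the MET setting the inputs would be normalized motion-energy responses rather than plain histograms, but the similarity computation is the same.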

Suppose that three kinds of quantum systems are given in some unknown states |f>⊗N, |g1>⊗K, and |g2>⊗K, and we want to decide which template state |g1> or |g2>, each representing the feature of the pattern class C1 or C2, respectively, is closest to the input feature state |f>. This is an extension of the pattern matching problem into the quantum domain. Assuming that these states are known a priori to belong to a certain parametric family of pure qubit systems, we derive two kinds of matching strategies. The first is a semiclassical strategy obtained by the natural extension of conventional matching strategies, consisting of a two-stage procedure: identification (estimation) of the unknown template states to design the classifier (a learning process to train the classifier), and classification of the input system into the appropriate pattern class based on the estimated results. The other is a fully quantum strategy without any intermediate measurement, which we might call the universal quantum matching machine. We present the Bayes optimal solutions for both strategies in the case of K=1, showing that there certainly exists a fully quantum matching procedure that is strictly superior to the straightforward semiclassical extension of the conventional matching strategy based on the learning process.

Molecular templates bind particular reactants, thereby increasing their effective concentrations and accelerating the corresponding reaction. This concept has been successfully applied to a number of chemical problems with a strong focus on nucleic acid templated reactions. We present the first protein-templated reaction that allows N-terminal linkage of two peptides. In the presence of a protein template, ligation reactions were accelerated by more than three orders of magnitude. The templated reaction is highly selective and proved its robustness in a protein-labeling reaction that was performed in crude cell lysate. PMID:24644125

Regeneration of templates from match scores has security and privacy implications related to any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent image set (break-in) are matched only once with the enrolled template of the targeted subject and match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three, fundamentally different, face recognition algorithms: Principal Component Analysis (PCA) with Mahalanobis cosine distance measure, Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set with the gallery set, we select face templates from two different databases: Face Recognition Grand Challenge (FRGC) and Facial Recognition Technology (FERET) Database (FERET). With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With similar operational set up, we achieve a 72 percent and 100 percent chance of breaking in for the Bayesian and PCA based face recognition systems, respectively. With

Security policies have different components; firewalls, active directory, and IDS are some examples. Enforcing network security policies through low-level security mechanisms faces some essential difficulties, of which consistency, verification, and maintenance are the major ones. One approach to overcoming these difficulties is to automate the translation of a high-level security policy into low-level security mechanisms. This paper introduces a framework for an automated process that translates a high-level security policy into low-level security mechanisms. The framework is described in terms of three phases. In the first phase, all network assets are categorized according to their roles in network security, and the relations between them are identified to constitute the network security model. The proposed model is based on organization based access control (OrBAC). However, it extends the OrBAC model to include not only access control policy but also other administrative security policies, such as auditing policy. In addition, the proposed model enables matching of each rule of the high-level security policy with the corresponding rules of the low-level security policy. In the second phase of the proposed framework, the high-level security policy is mapped onto the network security model; this phase can be considered a translation of the high-level security policy into an intermediate model level. Finally, the intermediate model level is translated automatically into low-level security mechanisms. The paper illustrates the applicability of the proposed approach through an application example.

... 46 Shipping 4 2011-10-01 2011-10-01 false Bilge high level alarms. 119.530 Section 119.530... Bilge and Ballast Systems § 119.530 Bilge high level alarms. (a) Each vessel must be provided with a visual and audible alarm at the operating station to indicate a high water level in each of the...

The addition of a small amount of reducing agent to a mixture of a high-level radioactive waste calcine and glass frit before the mixture is melted will produce a more homogeneous glass which is leach-resistant and suitable for long-term storage of high-level radioactive waste products.

A polymer-assisted deposition process for the deposition of epitaxial cubic metal nitride films and the like is presented. The process uses solutions of one or more metal precursors and soluble polymers having binding properties for those precursors. After a coating operation, the resultant coating is heated at high temperatures under a suitable atmosphere to yield metal nitride films and the like. Such films can be used as templates for the development of high-quality cubic GaN based electronic devices.

Background Structural genomics projects such as the Protein Structure Initiative (PSI) yield many new structures, but often these have no known molecular functions. One approach to recover this information is to use 3D templates – structure-function motifs that consist of a few functionally critical amino acids and may suggest functional similarity when geometrically matched to other structures. Since experimentally determined functional sites are not common enough to define 3D templates on a large scale, this work tests a computational strategy to select relevant residues for 3D templates. Results Based on evolutionary information and heuristics, an Evolutionary Trace Annotation (ETA) pipeline built templates for 98 enzymes, half taken from the PSI, and sought matches in a non-redundant structure database. On average each template matched 2.7 distinct proteins, of which 2.0 share the first three Enzyme Commission digits with the template's enzyme of origin. In many cases (61%) a single most likely function could be predicted as the annotation with the most matches, and in these cases such a plurality vote identified the correct function with 87% accuracy. ETA was also found to be complementary to sequence homology-based annotations. When matches are required both to geometrically match the 3D template and to be sequence homologs found by BLAST or PSI-BLAST, the annotation accuracy is greater than with either method alone, especially in the region of lower sequence identity where homology-based annotations are least reliable. Conclusion These data suggest that knowledge of evolutionarily important residues improves functional annotation among distant enzyme homologs. Since, unlike other 3D template approaches, the ETA method bypasses the need for experimental knowledge of the catalytic mechanism, it should prove a useful, large-scale, and general adjunct to combine with other methods to decipher protein function in the structural proteome. PMID:18190718

Placing signal templates (grid points) as efficiently as possible to cover a multidimensional parameter space is crucial in computing-intensive matched-filtering searches for gravitational waves, but also in similar searches in other fields of astronomy. To generate efficient coverings of arbitrary parameter spaces, stochastic template banks have been advocated, where templates are placed at random while rejecting those too close to others. However, in this simple scheme, for each new random point its distance to every template in the existing bank is computed. This rapidly increasing number of distance computations can render the acceptance of new templates computationally prohibitive, particularly for wide parameter spaces or in large dimensions. This paper presents a neighboring cell algorithm that can dramatically improve the efficiency of constructing a stochastic template bank. By dividing the parameter space into subvolumes (cells), for an arbitrary point an efficient hashing technique is exploited to obtain the index of its enclosing cell along with the parameters of its neighboring templates. Hence only distances to these neighboring templates in the bank are computed, massively lowering the overall computing cost, as demonstrated in simple examples. Furthermore, we propose a novel method based on this technique to increase the fraction of covered parameter space solely by directed template shifts, without adding any templates. As is demonstrated in examples, this method can be highly effective.
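
The neighboring-cell idea can be sketched in a few lines: hash each accepted template into a grid cell whose edge is at least the minimal distance, so a new proposal only needs distance checks against the 3^d surrounding cells rather than the whole bank. This is a toy version with Euclidean distance on the unit square; the paper's parameter-space metric, dimensionality, and hashing details are not reproduced.

```python
import itertools
import math
import random
from collections import defaultdict

def stochastic_bank(n_proposals, d_min, dim=2, seed=1):
    """Stochastic template bank on the unit hypercube: accept a random
    point only if no accepted template lies within d_min of it. A grid
    hash stores each accepted point in its cell, so a proposal is checked
    only against the 3**dim neighboring cells instead of the whole bank."""
    rng = random.Random(seed)
    cell = d_min                      # cell edge >= d_min, so neighbors suffice
    cells = defaultdict(list)         # cell index -> templates inside it
    bank = []
    offsets = list(itertools.product((-1, 0, 1), repeat=dim))
    for _ in range(n_proposals):
        p = tuple(rng.random() for _ in range(dim))
        idx = tuple(int(v / cell) for v in p)
        ok = True
        for off in offsets:           # distance checks in neighbor cells only
            for q in cells[tuple(i + o for i, o in zip(idx, off))]:
                if math.dist(p, q) < d_min:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            bank.append(p)
            cells[idx].append(p)
    return bank

bank = stochastic_bank(2000, d_min=0.1)
```

Because any point closer than `d_min` differs by less than one cell edge in every coordinate, it must sit in the same or an adjacent cell, so the restricted check is exact, not approximate.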

This report presents technical data and performance characteristics of a high-level waste glass and canister intended for use in the design of a complete waste encapsulation package suitable for disposal in a geologic repository. The borosilicate glass contained in the stainless steel canister represents the probable type of high-level waste product that will be produced in a commercial nuclear-fuel reprocessing plant. Development history is summarized for high-level liquid waste compositions, waste glass composition and characteristics, and canister design. The decay histories of the fission products and actinides (plus daughters) calculated by the ORIGEN-II code are presented.

This article is intended to update the reader on the progress made on insect embryo cryopreservation in the past 20 years and gives information for developing a protocol for cryopreserving insects by using a 2001 study as a template. The study used for the template is the cryopreservation of the Old...

This paper deals with the optimization of the experimental conditions for the estimation of ²³⁷Np in spent-fuel dissolver/high-level waste solutions using thenoyltrifluoroacetone as the extractant. (authors)

A 20-month operating experimental program was conducted at Marlborough, Massachusetts to evaluate the feasibility, engineering, and economic aspects of achieving high levels of effluent disinfection with ozone. The ozone research pilot facility was designed to operate at a consta...

This document establishes the combination of design and operational configurations that will be used to provide heat removal from high-level waste tanks during Phase 1 waste feed delivery to prevent the waste temperature from exceeding tank safety requirement limits. The chosen method--to use the primary and annulus ventilation systems to remove heat from the high-level waste tanks--is documented herein.

The High-Level Waste System is a set of six different processes interconnected by pipelines. These processes function as one large treatment plant that receives, stores, and treats high-level wastes from various generators at SRS and converts them into forms suitable for final disposal. The three major forms are borosilicate glass, which will be eventually disposed of in a Federal Repository, Saltstone to be buried on site, and treated water effluent that is released to the environment.

A novel feedback-based spike detection algorithm for noisy spike trains is presented in this paper. It uses information extracted from the results of spike classification to enhance spike detection. The algorithm performs template matching for spike detection with a normalized correlator. The detected spikes are then sorted by the OSort algorithm. The mean of the spikes in each cluster produced by the OSort algorithm is used as the template of the normalized correlator for subsequent detection. The automatic generation and updating of templates enhances the robustness of the spike detection to input trains with various spike waveforms and noise levels. Experimental results show that the proposed algorithm operating in conjunction with OSort is an efficient design for attaining high detection and classification accuracy in spike sorting. PMID:24960082
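
The normalized-correlator detection step, with the template refreshed from the mean of detected spikes, can be sketched as follows. Everything here is illustrative: the waveform, noise level, threshold, and the simple "skip past a detection" policy are assumptions, and the OSort clustering stage is replaced by a single cluster.

```python
import numpy as np

def detect_spikes(trace, template, thresh=0.8):
    """Slide a normalized correlator over the trace; report window starts
    where the correlation with the spike template exceeds `thresh`."""
    L = len(template)
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    hits, i = [], 0
    while i <= len(trace) - L:
        w = trace[i:i + L] - trace[i:i + L].mean()
        n = np.linalg.norm(w)
        if n > 0 and float(t @ w) / n > thresh:
            hits.append(i)
            i += L                    # skip past the detected spike
        else:
            i += 1
    return hits

rng = np.random.default_rng(3)
spike = np.exp(-((np.arange(32) - 10.0) ** 2) / 8.0)   # stereotyped waveform
trace = 0.05 * rng.normal(size=400)
for start in (50, 180, 300):
    trace[start:start + 32] += spike

# Detect with a noisy initial template, then refresh the template from
# the mean of the detected spikes (stand-in for the OSort feedback step).
hits = detect_spikes(trace, spike + 0.05 * rng.normal(size=32))
updated = np.mean([trace[i:i + 32] for i in hits], axis=0)
```

The refreshed `updated` template would then drive the next round of detection, which is the feedback loop the abstract describes.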

We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
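
The pipeline of clustering symbols into per-script templates and then choosing the best-matching script can be sketched with symbol vectors. This toy uses a tiny k-means and synthetic 4-pixel "symbols"; the real system's symbol extraction, scaling, and cluster pruning steps are omitted.

```python
import numpy as np

def make_templates(symbols, n_clusters=2, iters=10, seed=0):
    """Cluster fixed-size symbol vectors with a tiny k-means and return
    the cluster centroids as the script's templates."""
    rng = np.random.default_rng(seed)
    X = np.asarray(symbols, float)
    centroids = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(iters):
        d = ((X[:, None] - centroids[None]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centroids[k] = X[labels == k].mean(axis=0)
    return centroids

def classify(symbols, script_templates):
    """Choose the script whose templates best match the document's
    symbols (smallest mean distance to the nearest template)."""
    X = np.asarray(symbols, float)
    best, best_score = None, np.inf
    for name, T in script_templates.items():
        score = ((X[:, None] - T[None]) ** 2).sum(-1).min(axis=1).mean()
        if score < best_score:
            best, best_score = name, score
    return best

# Two synthetic "scripts", each with two symbol shapes (4-pixel vectors).
rng = np.random.default_rng(1)
proto_a = np.array([[0., 0., 1., 1.], [1., 0., 0., 1.]])
proto_b = np.array([[1., 1., 0., 0.], [0., 1., 1., 0.]])
train_a = np.repeat(proto_a, 20, axis=0) + 0.05 * rng.normal(size=(40, 4))
train_b = np.repeat(proto_b, 20, axis=0) + 0.05 * rng.normal(size=(40, 4))
templates = {"A": make_templates(train_a), "B": make_templates(train_b)}
label = classify(proto_b + 0.05 * rng.normal(size=(2, 4)), templates)
```

A document drawn from script B's symbol shapes lands nearer B's centroids, so the classifier picks "B".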

The LAMOST spectral analysis pipeline, called the 1D pipeline, aims to classify and measure the spectra observed in the LAMOST survey. Through this pipeline, the observed stellar spectra are classified into different subclasses by matching with template spectra. Consequently, the performance of the stellar classification greatly depends on the quality of the template spectra. In this paper, we construct a new LAMOST stellar spectral classification template library, which is supposed to improve the precision and credibility of the present LAMOST stellar classification. About one million spectra are selected from LAMOST Data Release One to construct the new stellar templates, and they are gathered in 233 groups by two criteria: (1) pseudo g – r colors obtained by convolving the LAMOST spectra with the Sloan Digital Sky Survey ugriz filter response curve, and (2) the stellar subclass given by the LAMOST pipeline. In each group, the template spectra are constructed using three steps. (1) Outliers are excluded using the Local Outlier Probabilities algorithm, and then the principal component analysis method is applied to the remaining spectra of each group. About 5% of the one million spectra are ruled out as outliers. (2) All remaining spectra are reconstructed using the first principal components of each group. (3) The weighted average spectrum is used as the template spectrum in each group. Using the previous 3 steps, we initially obtain 216 stellar template spectra. We visually inspect all template spectra, and 29 spectra are abandoned due to low spectral quality. Furthermore, the MK classification for the remaining 187 template spectra is manually determined by comparing with 3 template libraries. Meanwhile, 10 template spectra whose subclass is difficult to determine are abandoned. Finally, we obtain a new template library containing 183 LAMOST template spectra with 61 different MK classes by combining it with the current library.
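
The three construction steps (outlier cut, PCA reconstruction, averaging) can be sketched for a single group of spectra. The outlier criterion, the number of components, and the residual-based weights below are assumptions made for illustration; the pipeline itself uses the Local Outlier Probabilities algorithm and its own weighting.

```python
import numpy as np

def build_template(spectra, n_components=3, z_cut=3.0):
    """Toy per-group template: (1) drop gross outliers by distance to the
    group mean, (2) reconstruct each spectrum from the leading principal
    components, (3) average the reconstructions, weighted by fit quality."""
    X = np.asarray(spectra, float)
    d = np.linalg.norm(X - X.mean(0), axis=1)
    keep = X[np.abs(d - d.mean()) < z_cut * (d.std() + 1e-12)]
    mu = keep.mean(0)
    _, _, Vt = np.linalg.svd(keep - mu, full_matrices=False)
    V = Vt[:n_components]                      # leading principal components
    recon = mu + (keep - mu) @ V.T @ V         # PCA reconstruction
    resid = np.linalg.norm(keep - recon, axis=1)
    w = 1.0 / (resid + 1e-12)                  # trust well-reconstructed spectra
    return (w[:, None] * recon).sum(0) / w.sum()

rng = np.random.default_rng(4)
wave = np.sin(np.linspace(0, 6, 200))          # common underlying "spectrum"
spectra = wave + 0.05 * rng.normal(size=(50, 200))
template = build_template(spectra)
```

With many noisy realizations of one underlying spectrum, the weighted average of the PCA reconstructions recovers the shared shape closely.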

The key event in the pathogenesis of the transmissible spongiform encephalopathies is a template-dependent misfolding event in which an infectious isoform of the prion protein (PrPSc) comes into contact with native prion protein (PrPC) and changes its conformation to PrPSc. In many extraneurally inoculated models of prion disease this PrPC misfolding event occurs in lymphoid tissues prior to neuroinvasion. The primary objective of this study was to compare levels of total PrPC in hamster lymphoid tissues involved in the early pathogenesis of prion disease. Lymphoid tissues were collected from golden Syrian hamsters and Western blot analysis was performed to quantify PrPC levels. PrPC immunohistochemistry (IHC) of paraffin embedded tissue sections was performed to identify PrPC distribution in tissues of the lymphoreticular system. Nasal associated lymphoid tissue contained the highest amount of total PrPC, followed by Peyer’s patches, mesenteric and submandibular lymph nodes, and spleen. The relative levels of PrPC expression in IHC processed tissue correlated strongly with the Western blot data, with high levels of PrPC corresponding to a higher percentage of PrPC-positive B cell follicles. High levels of PrPC in lymphoid tissues closely associated with the nasal cavity could contribute to the relatively greater efficiency of the nasal route of entry of prions, compared to other routes of infection. PMID:25642714

In most template directed preparative methods, while the template decides the nanostructure morphology, the structure of the template itself is a non-general outcome of its peculiar chemistry. Here we demonstrate a template mediated synthesis that overcomes this deficiency. This synthesis involves overgrowth of a silica template onto a sacrificial nanocrystal. Such templates are used to copy the morphologies of gold nanorods. After template overgrowth, the gold is removed and silver is regrown in the template cavity to produce a single crystal silver nanorod. This technique allows for duplicating existing nanocrystals, while also providing a quantifiable breakdown of the structure-shape interdependence.

Tracking IR point targets has long been a challenging task. We propose a tracking framework based on template matching combined with Kalman prediction. First, a novel template matching method for detecting infrared point targets is presented. Unlike classic template matching, the projection coefficients obtained from principal component analysis are used as templates, and the non-linear correlation coefficient is used to measure the matching degree. The non-linear correlation can capture higher-order statistics, so the detection performance is greatly improved. Second, a framework for tracking point targets, based on the proposed detection method and Kalman prediction, is developed. Kalman prediction reduces the search region for the detection method and, in turn, the detection method provides a more precise measurement for Kalman prediction; they bring out the best in each other. Experimental results show that this framework is competent to track infrared point targets.
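
The interplay described here, Kalman prediction shrinking the matcher's search region while the matcher's detection feeds the filter, can be sketched with a constant-velocity filter and a stand-in detector. The motion model, noise settings, and window size are assumptions; the PCA-projection matcher itself is replaced by a dummy that returns a noisy detection near the prediction.

```python
import numpy as np

# Constant-velocity Kalman filter, state [x, y, vx, vy], dt = 1.
F = np.eye(4); F[0, 2] = F[1, 3] = 1.0   # state transition
H = np.eye(2, 4)                          # we observe position only
Q = 0.01 * np.eye(4)                      # process noise
R = np.eye(2)                             # measurement noise

def kf_predict(x, P):
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

def track(detect, n_steps, x0):
    """Each frame: predict, let the matcher search only a small window
    around the prediction, then update the filter with its detection."""
    x, P = np.array(x0, float), np.eye(4)
    est = []
    for _ in range(n_steps):
        x, P = kf_predict(x, P)
        z = detect(x[:2], half_window=5.0)   # restricted search region
        x, P = kf_update(x, P, z)
        est.append(x[:2].copy())
    return np.array(est)

# Stand-in detector: the true target moves at (1.0, 0.5) px/frame and the
# "matcher" simply returns a noisy detection near the prediction.
rng = np.random.default_rng(2)
true_pos = lambda t: np.array([1.0 * t, 0.5 * t])
frame = [0]
def detector(pred, half_window):
    frame[0] += 1
    z = true_pos(frame[0]) + 0.2 * rng.normal(size=2)
    assert np.all(np.abs(z - pred) <= half_window)   # target stays in window
    return z

est = track(detector, 30, [0.0, 0.0, 1.0, 0.5])
```

The prediction keeps the detector's search window small, and the detections keep the filter's estimate locked onto the moving target.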

Twenty-five glasses were formulated. They were batched from HLW AZ-101 simulant or raw chemicals, melted, and tested with a series of tests to elucidate the effect of spinel-forming components (Ni, Fe, Cr, Mn, and Zn), Al, and noble metals (Rh2O3 and RuO2) on the accumulation rate of spinel crystals in the glass discharge riser of the high-level waste (HLW) melter. In addition, the processing properties of the glasses, such as the viscosity and TL, were measured as functions of temperature and composition. Furthermore, the settling of spinel crystals in transparent low-viscosity fluids was studied at room temperature to assess the shape factor and hindered settling coefficient of spinel crystals in the Stokes equation. The experimental results suggest that Ni is the most troublesome of the studied spinel-forming components, producing settling layers of up to 10.5 mm in just 20 days in Ni-rich glasses if noble metals or a higher concentration of Fe was not introduced in the glass. A layer of this thickness can potentially plug the bottom of the riser, preventing glass from being discharged from the melter. The noble metals, Fe, and Al were the components that significantly slowed down or stopped the accumulation of spinel at the bottom. Particles of Rh2O3 and RuO2, hematite, and nepheline acted as nucleation sites, significantly increasing the number of crystals and therefore decreasing the average crystal size. The settling velocity of crystals ≤10 μm in size was too low to produce thick layers. The experimental data for the thickness of settled layers in the glasses prepared from AZ-101 simulant were used to build a linear empirical model that can predict crystal accumulation in the riser of the melter as a function of the concentration of spinel-forming components in glass. The developed model predicts the thicknesses of accumulated layers quite well, R2 = 0.985, and can become an efficient tool for the formulation

This paper presents a novel probabilistic approach to hierarchical, exemplar-based shape matching. No feature correspondence is needed among exemplars, just a suitable pairwise similarity measure. The approach uses a template tree to efficiently represent and match the variety of shape exemplars. The tree is generated offline by a bottom-up clustering approach using stochastic optimization. Online matching involves a simultaneous coarse-to-fine approach over the template tree and over the transformation parameters. The main contribution of this paper is a Bayesian model to estimate the a posteriori probability of the object class, after a certain match at a node of the tree. This model takes into account object scale and saliency and allows for a principled setting of the matching thresholds such that unpromising paths in the tree traversal process are eliminated early on. The proposed approach was tested in a variety of application domains. Here, results are presented on one of the more challenging domains: real-time pedestrian detection from a moving vehicle. A significant speed-up is obtained when comparing the proposed probabilistic matching approach with a manually tuned nonprobabilistic variant, both utilizing the same template tree structure. PMID:17568144
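
A minimal version of the tree traversal, comparing the query against a node prototype, pruning the whole subtree when the similarity misses a threshold, and otherwise descending, might look like this. The real system's Bayesian, scale- and saliency-dependent thresholds are replaced by a single fixed one, and "shapes" are just 2-D points.

```python
import numpy as np

def match_tree(node, query, thresh):
    """Coarse-to-fine matching: compare the query to the node prototype
    and prune the entire subtree if the similarity misses the threshold;
    otherwise recurse and keep the best-scoring leaf."""
    sim = -float(np.linalg.norm(query - node["proto"]))  # higher = closer
    if sim < thresh:
        return None                                      # prune early
    if not node["children"]:
        return node["label"], sim
    best = None
    for child in node["children"]:
        hit = match_tree(child, query, thresh)
        if hit is not None and (best is None or hit[1] > best[1]):
            best = hit
    return best

# A tiny template tree: one cluster of nearby exemplars, one distant one.
leaf_a = {"proto": np.array([0.0, 0.0]), "children": [], "label": "A"}
leaf_b = {"proto": np.array([0.0, 1.0]), "children": [], "label": "B"}
leaf_c = {"proto": np.array([5.0, 5.0]), "children": [], "label": "C"}
near = {"proto": np.array([0.0, 0.5]), "children": [leaf_a, leaf_b], "label": None}
far = {"proto": np.array([5.0, 5.0]), "children": [leaf_c], "label": None}
root = {"proto": np.array([1.0, 1.0]), "children": [near, far], "label": None}

hit = match_tree(root, np.array([0.1, 0.9]), thresh=-2.0)
```

For this query, the `far` branch is rejected at its prototype, so `leaf_c` is never compared, which is the early-elimination behavior the paper's probabilistic thresholds are designed to control.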

The core concept within the field of brain mapping is the use of a standardized, or "stereotaxic", 3D coordinate frame for data analysis and reporting of findings from neuroimaging experiments. This simple construct allows brain researchers to combine data from many subjects such that group-averaged signals, be they structural or functional, can be detected above the background noise that would swamp subtle signals from any single subject. Where the signal is robust enough to be detected in individuals, it allows for the exploration of inter-individual variance in the location of that signal. From a larger perspective, it provides a powerful medium for comparison and/or combination of brain mapping findings from different imaging modalities and laboratories around the world. Finally, it provides a framework for the creation of large-scale neuroimaging databases or "atlases" that capture the population mean and variance in anatomical or physiological metrics as a function of age or disease. However, while the above benefits are not in question at first order, there are a number of conceptual and practical challenges that introduce second-order incompatibilities among experimental data. Stereotaxic mapping requires two basic components: (i) the specification of the 3D stereotaxic coordinate space, and (ii) a mapping function that transforms a 3D brain image from "native" space, i.e. the coordinate frame of the scanner at data acquisition, to that stereotaxic space. The first component is usually expressed by the choice of a representative 3D MR image that serves as target "template" or atlas. The native image is re-sampled from native to stereotaxic space under the mapping function that may have few or many degrees of freedom, depending upon the experimental design. The optimal choice of atlas template and mapping function depend upon considerations of age, gender, hemispheric asymmetry, anatomical correspondence, spatial normalization methodology and disease

Very high-level design methods emphasize automatic transfer of requirements to formal design specifications, and/or may concentrate on automatic transformation of formal design specifications that include some semantic information about the system into machine-executable form. Very high-level design methods range from general, domain-independent methods to approaches implementable for specific applications or domains. Different approaches to higher-level software design are being developed by applying AI techniques, abstract programming methods, domain heuristics, software engineering tools, library-based programming, and other methods. Although a given approach does not always fall exactly into any specific class, this paper provides a classification for very high-level design methods, including examples for each class. These methods are analyzed and compared based on their basic approaches, strengths, and feasibility for future expansion toward automatic development of software systems.

The primary purpose of this document is to describe the overall process control strategy for monitoring and controlling the functions associated with the Phase 1B high-level waste feed delivery. This document provides the basis for process monitoring and control functions and requirements needed throughout the double-shell tank system during Phase 1 high-level waste feed delivery. This document is intended to be used by (1) the developers of the future Process Control Plan and (2) the developers of the monitoring and control system.

A solution involving launching high-level nuclear waste into space is suggested. Disposal in space includes solidifying the wastes, embedding them in an explosion-proof vehicle, launching it into Earth orbit, and then transferring it into a solar orbit. The benefits of such a system include not only the safe disposal of high-level waste but also the establishment of an infrastructure for large-scale space exploration and development. Particular attention is given to the wide range of technical choices along with the societal, economic, and political factors needed for success.

Failure to perform proper disinfection and sterilization of medical devices may lead to introduction of pathogens, resulting in infection. New techniques have been developed for achieving high-level disinfection and adequate environmental cleanliness. This article examines new technologies for sterilization and high-level disinfection of critical and semicritical items, respectively, and because semicritical items carry the greatest risk of infection, the authors discuss reprocessing semicritical items such as endoscopes and automated endoscope reprocessors, endocavitary probes, prostate biopsy probes, tonometers, laryngoscopes, and infrared coagulation devices. In addition, current issues and practices associated with environmental cleaning are reviewed. PMID:21315994

This paper discusses the engineering systems for the structural design of the Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS). At the DWPF, high-level radioactive liquids will be mixed with glass particles and heated in a melter. This molten glass will then be poured into stainless steel canisters, where it will harden. This process will transform the high-level waste into a more stable, manageable substance. This paper discusses the structural design requirements for this one-of-a-kind facility, with special emphasis on the design criteria pertaining to earthquake, wind and tornado, and flooding.

ARTEMIS is an online accelerator modeling server developed at CEBAF. One of the design goals of ARTEMIS was to provide an integrated modeling environment for high-level accelerator diagnostic and control applications such as automated beam steering, Linac Energy Management (LEM), and the fast feedback system. This report illustrates the use of ARTEMIS in these applications as well as the application interface using the EPICS cdev device support API. Concentration is placed on the design and implementation aspects of high-level applications which utilize the ARTEMIS server for information on beam dynamics. Performance benchmarks for various model operations provided by ARTEMIS are also discussed.

Cermets are being developed as an alternate method for the fixation of defense and commercial high-level radioactive waste in a terminal disposal form. Following initial feasibility assessments of this waste form, consisting of ceramic particles dispersed in an iron-nickel base alloy, significantly improved processing methods were developed. The characterization of cermets has continued through property determinations on samples prepared by various methods from a variety of simulated and actual high-level wastes. This report describes the status of development of the cermet waste form as it has evolved since 1977. 6 tables, 18 figures.

A method of growing carbon nanotubes uses a synthesized mesoporous silica template with approximately cylindrical pores formed therein. The surfaces of the pores are coated with a carbon nanotube precursor, and the template with the surfaces of the pores so-coated is then heated until the carbon nanotube precursor in each pore is converted to a carbon nanotube.

The problem of finding meaningful subcircuits in a logic layout appears in many contexts in computer-aided design. Existing techniques rely upon finding exact matchings of subcircuit structure within the layout. These syntactic techniques fail to identify functionally equivalent subcircuits that are differently implemented, optimized, or otherwise obfuscated. The authors present a mechanism for identifying functionally equivalent subcircuits that can overcome many of these limitations. Such semantic matching is particularly useful in the field of design recovery.

The Horridge template model is an empirical motion detection model inspired by insect vision. This model has been successfully implemented on several micro-sensor VLSI chips using greyscale pixels. The template model is based on movement of detected edges rather than whole objects, which facilitates simple tracking techniques. Simple tracking algorithms developed by Nguyen have been successful in tracking coherent movement of objects in a simple environment. Due to the inherent edge detection nature of the template model, two closely spaced objects moving at the same speed relative to the template model sensor will appear to have a common edge and hence be interpreted as one object. Hence when the two objects separate, the tracking algorithm will be upset by the detection of two separate edges, resulting in a loss of tracking. This paper introduces a low-cost vision prototype, based on a color CMOS camera. Although this approach sacrifices auto gain control at each pixel, results are valid for controlled lighting conditions. We demonstrate working results, for indoor conditions, by extending the template model using the color CMOS sensor to form color templates. This enables the detection of color boundaries or edges of closely moving objects by exploiting the difference in color contrast between the objects. This paper also discusses the effectiveness of this technique in facilitating the independent tracking of multiple objects.

This paper describes the data acquisition and high-level trigger system of the ATLAS experiment at the Large Hadron Collider at CERN, as deployed during Run 1. Data flow as well as control, configuration and monitoring aspects are addressed. An overview of the functionality of the system and of its performance is presented and design choices are discussed.

There are currently about 1055 million curies of high-level waste with a thermal output of about 2950 kilowatts (kW) at four sites in the United States: West Valley Demonstration Project (WVDP), Savannah River Site (SRS), Hanford Site (HANF), and Idaho National Engineering Laboratory (INEL). These quantities are expected to increase to about 1200 million curies and 3570 kW by the end of year 2020. Under the Nuclear Waste Policy Act, this high-level waste must ultimately be disposed of in a geologic repository. Accordingly, canisters of high-level waste immobilized in borosilicate glass or glass-ceramic mixtures are to be produced at the four sites and stored there until a repository becomes available. Data on the estimated production schedules and on the physical, chemical, and radiological characteristics of the canisters of immobilized high-level waste have been collected in OCRWM's Waste Characteristics Data Base, including recent updates and revisions. Comparisons of some of these data for the four sites are presented in this report. 14 refs., 3 tabs.

Flammable gases can be generated in DOE high-level waste tanks, including radiolytic hydrogen and, during cesium precipitation from salt solutions, benzene. Under normal operating conditions the potential for deflagration or detonation from these gases is precluded by purging and...

XAL is a Java programming framework for building high-level control applications related to accelerator physics. The structure, details of implementation, and interaction between components, auxiliary XAL packages, and the latest modifications are discussed. A general overview of XAL applications created for the SNS project is presented.

The objective of this document is to provide information on available issued documents that will assist interested parties in finding available data on high-level waste and transuranic waste feed compositions, properties, behavior in candidate processing operations, and the behavior of candidate product glasses made from those wastes. This initial compilation is only a partial list of available references.

Standard, common electric typewriters are not completely suited to the needs of a high-level quadriplegic typing with a mouthstick. Experiences show that for complete control of a typewriter a mouthstick user needs the combined features of one-button correction, electric forward and reverse indexing, and easy character viewing. To modify a…

In its policies related to high-level manpower, the Tanzanian Government attaches great importance to the university, viewing it as a key institution in its policies for national development. Describes the difficulties the administration of President Nyerere has had in using the university as a political tool and analyzes various instances of…

Motivated by the problem of selecting representative portfolios for backtesting counterparty credit risks, we propose a matching quantiles estimation (MQE) method for matching a target distribution by that of a linear combination of a set of random variables. An iterative procedure based on the ordinary least-squares estimation (OLS) is proposed to compute MQE. MQE can be easily modified by adding a LASSO penalty term if a sparse representation is desired, or by restricting the matching within certain range of quantiles to match a part of the target distribution. The convergence of the algorithm and the asymptotic properties of the estimation, both with or without LASSO, are established. A measure and an associated statistical test are proposed to assess the goodness-of-match. The finite sample properties are illustrated by simulation. An application in selecting a counterparty representative portfolio with a real dataset is reported. The proposed MQE also finds applications in portfolio tracking, which demonstrates the usefulness of combining MQE with LASSO. PMID:26692592
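The iterative OLS procedure described above can be sketched in a few lines. This is an illustrative reading of the abstract, not the published algorithm: the re-sorting step (pair the current linear combination's order statistics with the target's) and the stopping rule are assumptions, and the LASSO and restricted-quantile variants are omitted.

```python
import numpy as np

def matching_quantiles(y, X, n_iter=50, tol=1e-8):
    """Sketch of matching-quantiles estimation (MQE) via iterated OLS.

    Seeks weights b such that the empirical quantiles of X @ b match
    those of the target sample y. Each step re-orders the rows of X so
    that X @ b is sorted, then fits the sorted target values by
    ordinary least squares; by the rearrangement inequality this makes
    the quantile-matching objective non-increasing.
    """
    y_sorted = np.sort(y)
    b = np.linalg.lstsq(X, y, rcond=None)[0]  # initial guess: plain OLS
    for _ in range(n_iter):
        order = np.argsort(X @ b)             # sort current combination
        b_new = np.linalg.lstsq(X[order], y_sorted, rcond=None)[0]
        if np.linalg.norm(b_new - b) < tol:   # assumed convergence test
            b = b_new
            break
        b = b_new
    return b
```

A sparse variant would replace the inner least-squares fit with a LASSO solve, as the abstract notes.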

To expedite the intelligence collection process, analysts reuse previously collected data. This poses the risk of analysis failure, because these data are biased in ways that the analyst may not know. Thus, these data may be incomplete, inconsistent or incorrect, have structural gaps and limitations, or simply be too old to accurately represent the current state of the world. Incorporating human-generated intelligence within the high-level fusion process enables the integration of hard (physical sensors) and soft information (human observations) to extend the ability of algorithms to associate and merge disparate pieces of information for a more holistic situational awareness picture. However, in order for high-level fusion systems to manage the uncertainty in soft information, a process needs to be developed for characterizing the sources of error and bias specific to human-generated intelligence and assessing the quality of these data. This paper outlines an approach, Towards Integration of Data for unBiased Intelligence and Trust (TID-BIT), that implements a novel Hierarchical Bayesian Model for high-level situation modeling that allows the analyst to accurately reuse existing data collected for different intelligence requirements. TID-BIT constructs situational, semantic knowledge graphs that link the information extracted from unstructured sources to intelligence requirements and performs pattern matching over these attributed-network graphs for integrating information. By quantifying the reliability and credibility of human sources, TID-BIT enables the ability to estimate and account for uncertainty and bias that impact the high-level fusion process, resulting in improved situational awareness.

Over the past 10 years, the Hanford Site has been transitioning from nuclear materials production to Site cleanup operations. High-level waste characterization at the Hanford Site provides data to support present waste processing operations, tank safety programs, and future waste disposal programs. Quality elements in the high-level waste characterization program will be presented by following a sample through the data quality objective, sampling, laboratory analysis and data review process. Transition from production to cleanup has resulted in changes in quality systems and program; the changes, as well as other issues in these quality programs, will be described. Laboratory assessment through quality control and performance evaluation programs will be described, and data assessments in the laboratory and final reporting in the tank characterization reports will be discussed.

Seven candidate waste forms being developed under the direction of the Department of Energy's National High-Level Waste (HLW) Technology Program, were evaluated as potential media for the immobilization and geologic disposal of high-level nuclear wastes. The evaluation combined preliminary waste form evaluations conducted at DOE defense waste-sites and independent laboratories, peer review assessments, a product performance evaluation, and a processability analysis. Based on the combined results of these four inputs, two of the seven forms, borosilicate glass and a titanate based ceramic, SYNROC, were selected as the reference and alternative forms for continued development and evaluation in the National HLW Program. Both the glass and ceramic forms are viable candidates for use at each of the DOE defense waste-sites; they are also potential candidates for immobilization of commercial reprocessing wastes. This report describes the waste form screening process, and discusses each of the four major inputs considered in the selection of the two forms.

Optimal Waste Loading (OWL) models have been developed as multipurpose tools for high-level waste studies for the Tank Waste Remediation Program at Hanford. Using nonlinear programming techniques, these models maximize the waste loading of the vitrified waste and optimize the glass formers composition such that the glass produced has the appropriate properties within the melter, and the resultant vitrified waste form meets the requirements for disposal. The OWL model can be used for a single waste stream or for blended streams. The models can determine optimal continuous blends or optimal discrete blends of a number of different wastes. The OWL models have been used to identify the most restrictive constraints, to evaluate prospective waste pretreatment methods, to formulate and evaluate blending strategies, and to determine the impacts of variability in the wastes. The OWL models will be used to aid in the design of frits and to maximize the waste loading in the glass for High-Level Waste (HLW) vitrification.

Within this paper we present a simplified analytical model to provide insight into the key performance measures of a generic disposal system for high-level waste within a geological disposal facility. The model assumes a low solubility waste matrix within a corrosion resistant disposal container surrounded by a low permeability buffer. Radionuclides migrate from the disposal area through a porous geosphere to the biosphere and give a radiological dose to a receptor. The system of equations describing the migration is transformed into Laplace space and an approximation used to determine peak values for the radionuclide mass transfer rate entering the biosphere. Results from the model are compared with those from more detailed numerical models for key radionuclides in the UK high-level waste inventory. Such an insight model can provide a valuable second line of argument to assist in confirming the results of more detailed models and build confidence in the safety case for a geological disposal facility.

High-level radioactive wastes are being vitrified at the Savannah River Site for long term disposal. Many of the wastes contain sulfate at concentrations that can be difficult to retain in borosilicate glass. This study involves efforts to optimize the composition of a glass frit for combination with the waste to improve sulfate retention while meeting other process and product performance constraints. The fabrication and characterization of several series of simulated waste glasses are described. The experiments are detailed chronologically, to provide insight into part of the engineering studies used in developing frit compositions for an operating high-level waste vitrification facility. The results lead to the recommendation of a specific frit composition and a concentration limit for sulfate in the glass for the next batch of sludge to be processed at Savannah River.

The TRANSCOM (transportation tracking and communication) system is the U.S. Department of Energy's (DOE's) real-time system for tracking shipments of spent fuel, high-level wastes, and other high-visibility shipments of radioactive material. The TRANSCOM system has been operational since 1988. The system was used during FY1993 to track almost 100 shipments within the U.S. DOE complex, and it is accessed weekly by 10 to 20 users.

Many high-level vision systems use rule-based approaches to solving problems such as autonomous navigation and image understanding. The rules are usually elaborated by experts. However, this procedure may be rather tedious. In this paper, we propose a method to generate such rules automatically from training data. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.

Plans for the nation's first high-level nuclear waste repository have called for permanently closing and sealing the repository soon after it is filled. However, the hydrologic environment of the proposed site at Yucca Mountain, Nevada, should allow the repository to be kept open and the waste retrievable indefinitely. This would allow direct monitoring of the repository and maintain the options for future generations to improve upon the disposal methods or use the uranium in the spent fuel as an energy resource.

Mixing processes in large, complex enclosures are modeled with one-dimensional differential equations, with transport in free and wall jets treated using standard integral techniques. With this goal in mind, we have constructed a simple, computationally efficient numerical tool, the Berkeley Mechanistic Mixing Model, which can be used to predict the transient evolution of fuel and oxygen concentrations in DOE high-level waste tanks following loss of ventilation, and we validate the model against a series of experiments.

Alloys under consideration as candidates for the high-level nuclear waste containers at Yucca Mountain were exposed to a range of corrosion conditions and their performance measured. The alloys tested were Incoloy 825, 70/30 Copper-Nickel, Monel 400, Hastelloy C-22, and low carbon steel. The test conditions varied were: temperature, concentration, agitation, and crevice simulation. Only in the case of carbon steel was significant attack noted. This attack appeared to be transport limited.

The prior knowledge of the gravitational waveform from compact binary systems makes matched filtering an attractive detection strategy. This detection method involves the filtering of the detector output with a set of theoretical waveforms or templates. One of the most important factors in this strategy is knowing how many templates are needed in order to reduce the loss of possible signals. In this study, we calculate the number of templates and computational power needed for a one-step search for gravitational waves from inspiralling binary systems. We build on previous works by first expanding the post-Newtonian waveforms to 2.5-PN order and second, for the first time, calculating the number of templates needed when using P-approximant waveforms. The analysis is carried out for the four main first-generation interferometers, LIGO, GEO600, VIRGO and TAMA. As well as template number, we also calculate the computational cost of generating banks of templates for filtering GW data. We carry out the calculations for two initial conditions. In the first case we assume a minimum individual mass of 1 Msolar and in the second, we assume a minimum individual mass of 5 Msolar. We find that, in general, we need more P-approximant templates to carry out a search than if we use standard PN templates. This increase varies according to the order of PN-approximation, but can be as high as a factor of 3 and is explained by the smaller span of the P-approximant templates as we go to higher masses. The promising outcome is that for 2-PN templates, the increase is small and is outweighed by the known robustness of the 2-PN P-approximant templates.
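The filtering step at the heart of this strategy can be illustrated with a toy matched filter. This is a white-noise sketch only: real gravitational-wave searches whiten the data by the detector noise power spectral density and maximize over template phase, both of which are omitted here.

```python
import numpy as np

def matched_filter_snr(data, template):
    """Normalized matched-filter output of `template` against `data`.

    Computes the circular cross-correlation by FFT and normalizes by
    the template norm, so a noise-free signal identical to the template
    produces a peak equal to ||template|| at the signal's offset.
    (PSD whitening, as used in actual GW pipelines, is omitted.)
    """
    n = len(data)
    t = np.zeros(n)
    t[: len(template)] = template          # zero-pad template to data length
    corr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(t)), n)
    return corr / np.linalg.norm(template)
```

A template bank applies this filter once per template; the abstract's template counts measure how finely the mass parameters must be sampled so that no real signal loses more than a small fraction of this peak.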

e-Stars Template Builder is a computer program that implements a concept of enabling users to rapidly gain access to information on projects of NASA's Jet Propulsion Laboratory. The information about a given project is not stored in a database, but rather, in a network that follows the project as it develops. e-Stars Template Builder resides on a server computer, using Practical Extraction and Reporting Language (PERL) scripts to create what are called "e-STARS node templates," which are software constructs that allow for project-specific configurations. The software resides on the server and does not require specific software on the user machine other than an Internet browser. e-Stars Template Builder is compatible with Windows, Macintosh, and UNIX operating systems. A user invokes e-Stars Template Builder from a browser window. Operations that can be performed by the user include the creation of child processes and the addition of links and descriptions of documentation to existing pages or nodes. By means of this addition of "child processes" of nodes, a network that reflects the development of a project is generated.

The High-Level Radioactive Waste Transportation Handbook serves as a reference to which state officials and members of the general public may turn for information on radioactive waste transportation and on the federal government's system for transporting this waste under the Civilian Radioactive Waste Management Program. The Handbook condenses and updates information contained in the Midwestern High-Level Radioactive Waste Transportation Primer. It is intended primarily to assist legislators who, in the future, may be called upon to enact legislation pertaining to the transportation of radioactive waste through their jurisdictions. The Handbook is divided into two sections. The first section places the federal government's program for transporting radioactive waste in context. It provides background information on nuclear waste production in the United States and traces the emergence of federal policy for disposing of radioactive waste. The second section covers the history of radioactive waste transportation; summarizes major pieces of legislation pertaining to the transportation of radioactive waste; and provides an overview of the radioactive waste transportation program developed by the US Department of Energy (DOE). To supplement this information, a summary of pertinent federal and state legislation and a glossary of terms are included as appendices, as is a list of publications produced by the Midwestern Office of The Council of State Governments (CSG-MW) as part of the Midwestern High-Level Radioactive Waste Transportation Project.

Two related research efforts were addressed: (1) high-level connectionist cognitive modeling; and (2) local neural circuit modeling. The goals of the first effort were to develop connectionist models of high-level cognitive processes such as problem solving or natural language understanding, and to understand the computational requirements of such models. The goals of the second effort were to develop biologically realistic models of local neural circuits, and to understand the computational behavior of such models. In keeping with the nature of NASA's Innovative Research Program, all the work conducted under the grant was highly innovative. For instance, the following ideas, all summarized, are contributions to the study of connectionist/neural networks: (1) the temporal-winner-take-all, relative-position encoding, and pattern-similarity association techniques; (2) the importation of logical combinators into connectionism; (3) the use of analogy-based reasoning as a bridge across the gap between the traditional symbolic paradigm and the connectionist paradigm; and (4) the application of connectionism to the domain of belief representation/reasoning. The work on local neural circuit modeling also departs significantly from the work of related researchers. In particular, its concentration on low-level neural phenomena that could support high-level cognitive processing is unusual within the area of biological local circuit modeling, and also serves to expand the horizons of the artificial neural net field.

Storage of power reactor spent fuel is necessary at present because of the lack of reprocessing operations, particularly in the U.S. Considering the solidification and storage scenario outlined above, there is more than reasonable assurance that acceptable, stable, low-heat-generation-rate solidified waste can be produced and safely disposed of. The public perception that no waste disposal solutions exist is being exploited by detractors of nuclear power application. The inability to point to even one overall system demonstration lends credibility to the negative assertions. By delaying the gathering of on-line information to qualify repository sites, and to implement a demonstration, the actions of the nuclear power detractors are self-serving in that they can continue to point out that there is no demonstration of satisfactory high-level waste disposal. By maintaining the liquid and solidified high-level waste in secure above-ground storage until acceptable decay heat generation rates are achieved, by producing a compatible, high integrity, solid waste form, by providing a second or even third barrier as a compound container, and by inserting the enclosed waste form in a qualified repository with spacing to assure moderately low temperature disposal conditions, there appears to be no technical reason for not progressing further with the disposal of high-level wastes and the needed implementation of the complete nuclear power fuel cycle.

With the increasing demand for the development of more nuclear power comes the responsibility to address the technical challenges of immobilizing high-level nuclear wastes in stable solid forms for interim storage or disposition in geologic repositories. The immobilization of high-level nuclear wastes has been an active area of research and development for over 50 years. Borosilicate glasses and complex ceramic composites have been developed to meet many technical challenges and current needs, although regulatory issues, which vary widely from country to country, have yet to be resolved. Cooperative international programs to develop advanced proliferation-resistant nuclear technologies to close the nuclear fuel cycle and increase the efficiency of nuclear energy production might create new separation waste streams that could demand new concepts and materials for nuclear waste immobilization. This article reviews the current state-of-the-art understanding regarding the materials science of glasses and ceramics for the immobilization of high-level nuclear waste and excess nuclear materials and discusses approaches to address new waste streams.

In order to understand how self-reproducing molecules could have originated on the primitive Earth or extraterrestrial bodies, it would be useful to find laboratory models of simple molecules which are able to carry out processes of catalysis and templating. Furthermore, it may be anticipated that systems in which several components are acting cooperatively to catalyze each other's synthesis will have different behavior with respect to natural selection than those of purely replicating systems. As the major focus of this work, laboratory models are devised to study the influence of short peptide catalysts on template reactions which produce oligonucleotides or additional peptides. Such catalysts could have been the earliest protoenzymes of selective advantage produced by replicating oligonucleotides. Since this is a complex problem, simpler systems are also studied which embody only one aspect at a time, such as peptide formation with and without a template, peptide catalysis of nontemplated peptide synthesis, and model reactions for replication of the type pioneered by Orgel.

Video images of laser beams are analyzed to determine the position of the laser beams for alignment purpose in the National Ignition Facility (NIF). Algorithms process beam images to facilitate automated laser alignment. One such beam image, known as the corner cube reflected pinhole image, exhibits wide beam quality variations that are processed by a matched-filter-based algorithm. The challenge is to design a representative template that captures these variations while at the same time assuring accurate position determination. This paper describes the development of a new analytical template to accurately estimate the center of a beam with good image quality. The templates are constructed to exploit several key recurring features observed in the beam images. When the beam image quality is low, the algorithm chooses a template that contains fewer features. The algorithm was implemented using a Xilinx Virtex II Pro FPGA implementation that provides a speedup of about 6.4 times over a baseline 3GHz Pentium 4 processor.
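The center-estimation step can be sketched as a 2-D correlation against an analytical template. This is a simplified stand-in for the NIF algorithm: the FFT-based correlation, the zero-mean template, and the peak-to-center conversion are illustrative assumptions, and the feature-based template construction and FPGA implementation are not shown.

```python
import numpy as np

def find_beam_center(image, template):
    """Estimate a beam center by 2-D FFT cross-correlation with a template.

    The (mean-subtracted) template is zero-padded to the image size and
    correlated circularly; the peak of the correlation surface gives the
    template offset, from which the beam center is derived by adding
    half the template size.
    """
    th, tw = template.shape
    padded = np.zeros_like(image, dtype=float)
    padded[:th, :tw] = template - template.mean()   # zero-mean matched filter
    corr = np.fft.irfft2(np.fft.rfft2(image) * np.conj(np.fft.rfft2(padded)),
                         image.shape)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy + th // 2, dx + tw // 2               # offset -> center estimate
```

Falling back to a template with fewer features for low-quality images, as the abstract describes, would amount to swapping in a simpler `template` array here.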

Objective Develop an improved method for auditing hospital cost and quality tailored to a specific hospital’s patient population. Data Sources/Setting Medicare claims in general, gynecologic and urologic surgery, and orthopedics from Illinois, New York, and Texas between 2004 and 2006. Study Design A template of 300 representative patients from a single index hospital was constructed and used to match 300 patients at 43 hospitals that had a minimum of 500 patients over a 3-year study period. Data Collection/Extraction Methods From each of 43 hospitals we chose 300 patients most resembling the template using multivariate matching. Principal Findings We found close matches on procedures and patient characteristics, far more balanced than would be expected in a randomized trial. There were little to no differences between the index hospital’s template and the 43 hospitals on most patient characteristics yet large and significant differences in mortality, failure-to-rescue, and cost. Conclusion Matching can produce fair, directly standardized audits. From the perspective of the index hospital, “hospital-specific” template matching provides the fairness of direct standardization with the specific institutional relevance of indirect standardization. Using this approach, hospitals will be better able to examine their performance, and better determine why they are achieving the results they observe. PMID:25201167

The present invention relates to a thin film structure based on an epitaxial (111)-oriented rare earth-Group IVB oxide on the cubic (001) MgO terminated surface and the ion-beam-assisted deposition ("IBAD") techniques that are amenable to being overcoated by semiconductors with hexagonal crystal structures. The IBAD magnesium oxide ("MgO") technology, in conjunction with certain template materials, is used to fabricate the desired thin film array. Similarly, IBAD MgO with appropriate template layers can be used for semiconductors with cubic-type crystal structures.

In this paper, we propose an image processing scheme for moving object detection from a mobile robot with a single camera. It especially aims at intruder detection for a security robot on either smooth or uneven ground surfaces. The proposed scheme uses template matching with basis image reconstruction for the alignment between two consecutive images in the video sequence. The most representative template patches in one image are first automatically selected based on the gradient energies in the patches. The chosen templates then form a basis image matrix. A windowed subimage is constructed by the linear combination of the basis images, and the instances of the templates in the subsequent image are matched by evaluating their reconstruction error from the basis image matrix. For two well-aligned images, a simple and fast temporal difference can thus be applied to identify moving objects from the background. The proposed template matching can tolerate +/-10° in rotation and +/-10% in scaling. By adding templates with larger rotational angles to the basis image matrices, the proposed method can match images from severe camera vibrations. The proposed scheme achieves a fast processing rate of 32 frames per second for images of size 160×120 pixels.
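The core operation above — scoring a candidate window by how well the basis-image matrix reconstructs it — can be sketched with ordinary least squares. The single-patch basis, exhaustive scan, and synthetic image below are simplifying assumptions for illustration; the paper selects multiple high-gradient patches and handles rotation.

```python
import numpy as np

def reconstruction_error(window, basis):
    """Least-squares reconstruction error of a vectorised window
    against the column space of the basis-image matrix."""
    coeffs, *_ = np.linalg.lstsq(basis, window, rcond=None)
    return np.linalg.norm(basis @ coeffs - window)

def best_match(image, basis, patch_shape):
    """Scan candidate windows; the one best explained by the basis wins."""
    ph, pw = patch_shape
    best, best_err = None, np.inf
    for y in range(image.shape[0] - ph + 1):
        for x in range(image.shape[1] - pw + 1):
            w = image[y:y + ph, x:x + pw].ravel().astype(float)
            err = reconstruction_error(w, basis)
            if err < best_err:
                best, best_err = (y, x), err
    return best

rng = np.random.default_rng(1)
img = rng.normal(size=(20, 20))
patch = img[5:13, 7:15]
basis = patch.reshape(-1, 1)            # one basis image as a column vector
print(best_match(img, basis, (8, 8)))   # (5, 7): the patch's true location
```

With several patches stacked as columns, the same residual test matches all templates jointly, which is what allows a single alignment estimate between consecutive frames.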

The Westinghouse Savannah River Technology Center was requested by its sister site, West Valley Nuclear Service (WVNS), to develop a remote inspection system to gather wall thickness readings of their High-Level Waste Tanks. WVNS management chose to take a proactive approach to gain current information on two tanks that had been in service since the early '70s. The tanks contain high-level waste, are buried underground, and have only two access ports to an annular space between the tank and the secondary concrete vault. A specialized remote system was proposed to provide both visual surveillance and ultrasonic thickness measurements of the tank walls. A magnetic-wheeled crawler was the basis for the remote delivery system, integrated with an off-the-shelf Ultrasonic Data Acquisition System. A development program was initiated for the Savannah River Technology Center (SRTC) to design, fabricate, and test a remote system based on the crawler. The completed system involved three crawlers to perform the needed tasks: an Ultrasonic Crawler, a Camera Crawler, and a Surface Prep Crawler. The crawlers were computer controlled so that their operation could be done remotely and their position on the wall could be tracked. The Ultrasonic Crawler controls were interfaced with ABB Amdata's I-PC Ultrasonic Data Acquisition System so that thickness mapping of the wall could be obtained. A second system was requested by Westinghouse Savannah River Company (WSRC) to perform just ultrasonic mapping on their similar Waste Storage Tanks; however, the system needed to be interfaced with the P-scan Ultrasonic Data Acquisition System. Both remote inspection systems were completed 9/94. Qualification tests were conducted by WVNS prior to implementation on the actual tank, and tank deployment was achieved 10/94. The second inspection system was deployed at WSRC 11/94 with success, and the system is now in continuous service inspecting the remaining high-level waste tanks at WSRC.

The purpose of this plan is to document the integrated technology program plan for the Savannah River Site (SRS) High-Level Waste (HLW) Management System. The mission of the SRS HLW System is to receive and store SRS high-level wastes in a safe and environmentally sound manner, and to convert these wastes into forms suitable for final disposal. These final disposal forms are borosilicate glass to be sent to the Federal Repository, Saltstone grout to be disposed of on site, and treated waste water to be released to the environment via a permitted outfall. Thus, the technology development activities described herein are those activities required to enable successful accomplishment of this mission. The technology program is based on specific needs of the SRS HLW System and organized following the systems engineering level 3 functions. Technology needs for each level 3 function are listed as reference, enhancements, and alternatives. Finally, FY-95 funding, deliverables, and schedules are summarized in Chapter IV, with details on the specific tasks that are funded in FY-95 provided in Appendix A. The information in this report represents the vision of activities as defined at the beginning of the fiscal year. Depending on emergent issues, funding changes, and other factors, programs and milestones may be adjusted during the fiscal year. The FY-95 SRS HLW technology program strongly emphasizes startup support for the Defense Waste Processing Facility and In-Tank Precipitation. Closure of technical issues associated with these operations has been given highest priority. Consequently, efforts on longer term enhancements and alternatives are receiving minimal funding. However, High-Level Waste Management is committed to participation in the national Radioactive Waste Tank Remediation Technology Focus Area. 4 refs., 5 figs., 9 tabs.

The Defense High-Level Waste Disposal Container System supports the confinement and isolation of waste within the Engineered Barrier System of the Monitored Geologic Repository (MGR). Disposal containers are loaded and sealed in the surface waste handling facilities, transferred to the underground through the accesses using a rail mounted transporter, and emplaced in emplacement drifts. The defense high-level waste (HLW) disposal container provides long-term confinement of the commercial HLW and defense HLW (including immobilized plutonium waste forms [IPWF]) placed within disposable canisters, and withstands the loading, transfer, emplacement, and retrieval loads and environments. US Department of Energy (DOE)-owned spent nuclear fuel (SNF) in disposable canisters may also be placed in a defense HLW disposal container along with commercial HLW waste forms, which is known as co-disposal. The Defense High-Level Waste Disposal Container System provides containment of waste for a designated period of time, and limits radionuclide release. The disposal container/waste package maintains the waste in a designated configuration, withstands maximum handling and rockfall loads, limits the individual canister temperatures after emplacement, resists corrosion in the expected handling and repository environments, and provides containment of waste in the event of an accident. Defense HLW disposal containers for HLW disposal will hold up to five HLW canisters. Defense HLW disposal containers for co-disposal will hold up to five HLW canisters arranged in a ring and one DOE SNF canister inserted in the center and/or one or more DOE SNF canisters displacing a HLW canister in the ring. Defense HLW disposal containers also will hold two Multi-Canister Overpacks (MCOs) and two HLW canisters in one disposal container. The disposal container will include outer and inner cylinders, outer and inner cylinder lids, and may include a canister guide. An exterior label will provide a means by

The U.S. Department of Energy is putting a modern version of alchemy to work to produce an answer to a decades-old problem. It is taking place at the Savannah River Site (SRS) in Aiken, South Carolina and at the West Valley Demonstration Project (WVDP) near Buffalo, New York. At both locations, contractor Westinghouse Electric Corporation is applying technology that is turning liquid high-level radioactive waste (HLW) into a stabilized, durable glass for safer and easier management. The process is called vitrification. SRS and WVDP are now operating the nation's first full-scale HLW vitrification plants.

A large amount of radioactive waste has been stored safely at the Savannah River and Hanford sites over the past 46 years. The aim of this report is to review the experimental corrosion studies at Savannah River and Hanford with the intention of identifying the types and rates of corrosion encountered and indicating how these data contribute to tank failure predictions. The compositions of the high-level wastes, the mild steels used in the construction of the waste tanks, and the degradation modes, particularly stress corrosion cracking and pitting, are discussed. Current concerns at the Hanford Site are highlighted.

The purpose of this analysis is to document the Quality Assurance (QA) classification of the Monitored Geologic Repository (MGR) defense high-level waste disposal container system structures, systems and components (SSCs) performed by the MGR Safety Assurance Department. This analysis also provides the basis for revision of YMP/90-55Q, Q-List (YMP 1998). The Q-List identifies those MGR SSCs subject to the requirements of DOE/RW-0333P, "Quality Assurance Requirements and Description" (QARD) (DOE 1998).

The construction and calibration of a simple ionization-chamber apparatus for measurement of high-level tritium gas is described. The apparatus uses an easily constructed but rugged chamber containing the unknown gas and an inexpensive digital multimeter for measuring the ion current. After calibration, the equipment is suitable for measuring 0.01 to 100% tritium gas in hydrogen-helium mixes with an accuracy of a few percent. At both the high and low limits of measurement, deviations from the predicted theoretical current are observed. These are briefly discussed.

The Department of Energy, in accord with recommendations from the Du Pont Company, has started construction of a Defense Waste Processing Facility (DWPF) at the Savannah River Plant. The facility should be completed by the end of 1988, and full-scale operation should begin in 1990. This facility will immobilize in borosilicate glass the large quantity of high-level radioactive waste now stored at the plant plus the waste to be generated from continued chemical reprocessing operations. The existing wastes at the Savannah River Plant will be completely converted by about 2010. 21 figures.

High-level neutron coincidence counter operational (field) calibration and usage is well known. This manual makes explicit basic (shop) check-out, calibration, and testing of new units and is a guide for repair of failed in-service units. Operational criteria for the major electronic functions are detailed, as are adjustments and calibration procedures, and recurrent mechanical/electromechanical problems are addressed. Some system tests are included for quality assurance. Data on nonstandard large-scale integrated (circuit) components and a schematic set are also included.

Variable renewable generation is increasing in penetration in modern power systems, leading to higher variability in the supply and price of electricity as well as lower average spot prices. This raises new challenges, particularly in ensuring sufficient capacity and flexibility from conventional technologies. Because the fixed costs and lifetimes of electricity generation investments are significant, designing markets and regulations that ensure the efficient integration of renewable generation is a significant challenge. This paper reviews the state of play of market designs for high levels of variable generation in the United States and Europe and considers new developments in both regions.

OAK 270 - The DOE Matching Grant Program provided $50,000.00 to the Dept of N.E. at TAMU, matching a gift of $50,000.00 from TXU Electric. The $100,000.00 total was spent on scholarships, departmental labs, and computing network.

The generic imaging matching system (GIMS) provides an optimal systematic solution to any problem of color image processing in printing and publishing that can be classified as, or modeled as, the generic image matching problem defined here. Typical GIMS systems/processes include color matching from different output devices, color conversion, color correction, device calibration, colorimetric scanner, colorimetric printer, colorimetric color reproduction, and image interpolation from scattered data. GIMS makes color matching easy for the user and maximizes operational flexibility, allowing the user to obtain the degree of match wanted while providing the capability to achieve the best balance with respect to the human perception of color, color fidelity, and preservation of image information and color contrast. Instead of controlling coefficients in a transformation formula, GIMS controls the mapping directly in a standard device-independent color space, so that color can be matched, conceptually, to the highest possible accuracy. An optimization algorithm called modified vector shading was developed to minimize the matching error and to perform a 'near-neighborhood' gamut compression. An automatic error correction algorithm with a multidirection searching procedure using correlated re-initialization was developed to avoid local minimum failures. Once the mapping for color matching is generated, it can be utilized by a multidimensional linear interpolator with a small look-up table (LUT) implemented by either software, a hardware interpolator, or a digital signal processor.
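The final step above — applying a stored color mapping through a multidimensional linear interpolator over a small LUT — can be sketched as trilinear interpolation in a 3D lattice. The identity LUT and 5-node grid below are illustrative assumptions; GIMS's actual LUTs encode the optimized device mapping.

```python
import numpy as np

def trilinear_lut(lut, rgb):
    """Look up an output colour by trilinear interpolation in an
    N x N x N x 3 LUT indexed by normalised RGB in [0, 1]."""
    n = lut.shape[0] - 1
    p = np.clip(np.asarray(rgb, float), 0, 1) * n
    i = np.minimum(p.astype(int), n - 1)   # lower lattice corner
    f = p - i                              # fractional position in the cell
    out = np.zeros(lut.shape[-1])
    for corner in range(8):                # blend the 8 surrounding nodes
        bits = [(corner >> k) & 1 for k in range(3)]
        w = np.prod([f[k] if bits[k] else 1 - f[k] for k in range(3)])
        out += w * lut[i[0] + bits[0], i[1] + bits[1], i[2] + bits[2]]
    return out

# Identity LUT on a 5-node grid: interpolation should reproduce the input.
grid = np.linspace(0, 1, 5)
lut = np.stack(np.meshgrid(grid, grid, grid, indexing='ij'), axis=-1)
print(trilinear_lut(lut, (0.3, 0.7, 0.5)))  # ≈ [0.3, 0.7, 0.5]
```

Because the interpolation is linear per cell, a coarse LUT stays cheap enough for the hardware or DSP implementations the abstract mentions, at the cost of smoothing sharp gamut boundaries.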

Following an analogous distinction in statistical hypothesis testing, we investigate variants of machine learning where the training set comes in matched pairs. We demonstrate that even conventional classifiers can exhibit improved performance when the input data has a matched-pair structure. Online algorithms, in particular, converge more quickly when the data is presented in pairs. In some scenarios (such as the weak signal detection problem), matched pairs can be generated from independent samples, with the effect of not only doubling the nominal size of the training set but also providing the structure that leads to better learning. A family of 'dipole' algorithms is introduced that explicitly takes advantage of matched-pair structure in the input data and leads to further performance gains. Finally, we illustrate the application of matched-pair learning to chemical plume detection in hyperspectral imagery.
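The weak-signal intuition above can be demonstrated numerically: differencing the two members of a matched pair cancels the shared clutter, so the signal direction is recovered far more cleanly than from unpaired samples. The dimensions, noise levels, and simple mean-difference estimators below are illustrative assumptions, not the paper's dipole algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 500
signal = rng.normal(size=d)
signal /= np.linalg.norm(signal)

background = 3.0 * rng.normal(size=(n, d))     # strong clutter shared by each pair
x_neg = background                             # background-only member
x_pos = background + 0.5 * signal + 0.1 * rng.normal(size=(n, d))  # weak signal added

# "Dipole"-style estimate: pair differences cancel the shared clutter,
# leaving a nearly noise-free estimate of the signal direction.
w_dipole = (x_pos - x_neg).mean(axis=0)

# Unpaired estimate: difference of class means over independent backgrounds
# keeps the clutter noise in the estimate.
indep_neg = 3.0 * rng.normal(size=(n, d))
w_unpaired = x_pos.mean(axis=0) - indep_neg.mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Paired differencing aligns almost perfectly with the true signal;
# the unpaired estimate is dominated by clutter.
print(cosine(w_dipole, signal), cosine(w_unpaired, signal))
```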

The Supply-Chain Optimization Template (SCOT) is an instructional guide for identifying, evaluating, and optimizing (including re-engineering) aerospace-oriented supply chains. The SCOT was derived from the Supply Chain Council's Supply-Chain Operations Reference (SCC SCOR) Model, which is more generic and more oriented toward achieving a competitive advantage in business.

In position detection using matched filtering one is faced with the challenge of determining the best position in the presence of distortions such as defocus and diffraction noise. This work evaluates the performance of simulated defocused images as the template against the real defocused beam. It was found that an amplitude-modulated phase-only filter is better equipped to deal with real defocused images that suffer from diffraction noise effects resulting in a textured spot intensity pattern. It is shown that there is a tradeoff of performance depending on the type and size of the defocused image. A novel automated system was developed that can automatically select the right template type and size. Results of this automation for real defocused images are presented.
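The filtering idea above can be sketched in its plain phase-only form: discard the template's magnitude spectrum and correlate with its phase alone, which sharpens the correlation peak against textured backgrounds. The amplitude-modulated variant in the abstract additionally weights this phase term; the random patch and noise scene below are illustrative assumptions.

```python
import numpy as np

def phase_only_correlate(image, template):
    """Locate a template with a phase-only matched filter: keep only the
    phase of the template spectrum, then find the correlation peak."""
    F = np.fft.fft2(image)
    T = np.fft.fft2(template, s=image.shape)        # zero-pad to image size
    H = np.conj(T) / np.maximum(np.abs(T), 1e-12)   # phase-only filter
    corr = np.real(np.fft.ifft2(F * H))
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(2)
patch = rng.normal(size=(9, 9))
scene = 0.05 * rng.normal(size=(64, 64))
scene[12:21, 5:14] += patch                  # embed the patch at (12, 5)
print(phase_only_correlate(scene, patch))    # (12, 5)
```

Whitening the template spectrum this way trades peak height for peak sharpness, which is why it copes better with the textured, diffraction-noisy spots described in the abstract.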

This publication is intended to provide its readers with an introduction to the issues surrounding the subject of transportation of spent nuclear fuel and high-level radioactive waste, especially as those issues impact the southern region of the United States. It was originally issued by the Southern States Energy Board (SSEB) in July 1987 as the Spent Nuclear Fuel and High-Level Radioactive Waste Transportation Primer, a document patterned on work performed by the Western Interstate Energy Board and designed as a "comprehensive overview of the issues." This work differs from that earlier effort in that it is designed for the educated layman with little or no background in nuclear waste issues. In addition, this document is not a comprehensive examination of nuclear waste issues but should instead serve as a general introduction to the subject. Owing to changes in the nuclear waste management system, program activities by the US Department of Energy and other federal agencies, and developing technologies, much of this information is dated quickly. While this report uses the most recent data available, readers should keep in mind that some of the material is subject to rapid change. SSEB plans periodic updates in the future to account for changes in the program. Replacement pages will be supplied to all parties in receipt of this publication provided they remain on the SSEB mailing list.

This report examines borosilicate glass as a means of immobilizing high-level radioactive wastes. Borosilicate glass will encapsulate most of the defense and some of the commercial HLW in the US. The resulting waste forms must meet the requirements of the WA-SRD and the WAPS, which include a short-term PCT durability test. The waste form producer must report the composition(s) of the borosilicate waste glass(es) produced but can choose the composition(s) to meet site-specific requirements. Although the waste form composition is the primary determinant of durability, the redox state of the glass; the existence, content, and composition of crystals; and the presence of glass-in-glass phase separation can affect durability. The waste glass should be formulated to avoid phase separation regions. The ultimate result of this effort will be a waste form that is much more stable and potentially less mobile than the liquid high-level radioactive waste currently is.

Executive function (EF) deficits have yet to be demonstrated convincingly in children with disruptive behaviour disorders (DBD), as only a few studies have reported these. The presence of EF weaknesses in children with DBD has often been contested on account of the high comorbidity between DBD and attention-deficit/hyperactivity disorder (ADHD) and of methodological shortcomings regarding EF measures. Against this background, the link between EF and disruptive behaviours in kindergarteners was investigated using a carefully selected battery of EF measures. Three groups of kindergarteners were compared: (1) a group combining high levels of disruptive behaviours and ADHD symptoms (COMB); (2) a group presenting high levels of disruptive/aggressive behaviours and low levels of ADHD symptoms (AGG); and (3) a normative group (NOR). Children in the COMB and AGG groups presented weaker inhibition capacities compared with normative peers. Also, only the COMB group showed weaker working memory capacities compared with the NOR group. Results support the idea that preschool children with DBD have weaker inhibition capacities and that this weakness could be common to both ADHD and DBD. PMID:26198079

Microbial enzymes have been used in a large number of fields, such as chemical, agricultural and biopharmaceutical industries. The enzyme production rate and yield are the main factors to consider when choosing the appropriate expression system for the production of recombinant proteins. Recombinant enzymes have been expressed in bacteria (e.g., Escherichia coli, Bacillus and lactic acid bacteria), filamentous fungi (e.g., Aspergillus) and yeasts (e.g., Pichia pastoris). The favorable and very advantageous characteristics of these species have resulted in an increasing number of biotechnological applications. Bacterial hosts (e.g., E. coli) can be used to quickly and easily overexpress recombinant enzymes; however, bacterial systems cannot express very large proteins and proteins that require post-translational modifications. The main bacterial expression hosts, with the exception of lactic acid bacteria and filamentous fungi, can produce several toxins which are not compatible with the expression of recombinant enzymes in food and drugs. However, due to the multiplicity of the physiological impacts arising from high-level expression of genes encoding the enzymes and expression hosts, the goal of overproduction can hardly be achieved, and therefore, the yield of recombinant enzymes is limited. In this review, the recent strategies used for the high-level expression of microbial enzymes in the hosts mentioned above are summarized and the prospects are also discussed. We hope this review will contribute to the development of the enzyme-related research field. PMID:23686280

Space disposal of selected components of military high-level waste (HLW) is considered. This disposal option offers the promise of eliminating the long-lived radionuclides in military HLW from the earth. A space mission which meets the dual requirements of long-term orbital stability and a maximum of one space shuttle launch per week over a period of 20-40 years, is a heliocentric orbit about halfway between the orbits of earth and Venus. Space disposal of high-level radioactive waste is characterized by long-term predictability and short-term uncertainties which must be reduced to acceptably low levels. For example, failure of either the Orbit Transfer Vehicle after leaving low earth orbit, or the storable propellant stage failure at perihelion would leave the nuclear waste package in an unplanned and potentially unstable orbit. Since potential earth reencounter and subsequent burn-up in the earth's atmosphere is unacceptable, a deep space rendezvous, docking, and retrieval capability must be developed.

This invention is a robot control system based on a high-level language implementing a spatial operator algebra. There are two high-level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process, and dynamic reconfiguration is also possible. Two major advantages of the languages and system are the programming and control of multiple arms and the use of inward/outward spatial recursions in which every computational step can be related to a transformation from one point on the mechanical robot to another.

The objective of this study was to experimentally measure the properties and performance of a series of glasses with compositions that could represent high-level waste Sludge Batch 5 (SB5) as vitrified at the Savannah River Site Defense Waste Processing Facility. These data were used to guide frit optimization efforts as the SB5 composition was finalized. Glass compositions for this study were developed by combining a series of SB5 composition projections with a group of candidate frits. The study glasses were fabricated using depleted uranium, and their chemical compositions, crystalline contents, and chemical durabilities were characterized. Trevorite was the only crystalline phase identified in a few of the study glasses after slow cooling, and it is not of concern as spinels have been shown to have little impact on the durability of high-level waste glasses. Chemical durability was quantified using the Product Consistency Test (PCT). All of the glasses had very acceptable durability performance. The results of this study indicate that a frit composition can be identified that will provide a processable and durable glass when combined with SB5.

Waste streams planned for generation by the Global Nuclear Energy Partnership (GNEP) and existing radioactive High-Level Waste (HLW) streams containing organic compounds, such as the Tank 48H waste stream at the Savannah River Site, have completed simulant and radioactive testing, respectively, by the Savannah River National Laboratory (SRNL). GNEP waste streams will include up to 53 wt% organic compounds and nitrates up to 56 wt%. Decomposition of high nitrate streams requires reducing conditions, e.g. provided by organic additives such as sugar or coal, to reduce NOX in the off-gas to N2 to meet Clean Air Act (CAA) standards during processing. Thus, organics will be present during the waste form stabilization process regardless of the GNEP processes utilized, and organics exist in some of the high-level radioactive waste tanks at the Savannah River Site and Hanford Tank Farms, e.g. organics in the feed or organics used for nitrate destruction. Waste streams containing high organic concentrations cannot be stabilized with the existing HLW Best Developed Available Technology (BDAT), which is HLW vitrification (HLVIT), unless the organics are removed by pretreatment. The alternative waste stabilization pretreatment process of Fluidized Bed Steam Reforming (FBSR) operates at moderate temperatures (650-750 C) compared to vitrification (1150-1300 C). The FBSR process has been demonstrated on GNEP simulated waste and radioactive waste containing high organics from Tank 48H to convert organics to CAA-compliant gases, create no secondary liquid waste streams, and create a stable mineral waste form.

Describes two general spreadsheet templates to carry out all types of one-equation chemical equilibrium calculations encountered by students in undergraduate chemistry courses. Algorithms, templates, macros, and representative examples are presented to illustrate the approach. (PR)
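The spreadsheet approach above amounts to numerically solving a single equilibrium equation for one unknown. A minimal script equivalent, using a bracketing root-finder on the weak-acid relation Ka = x²/(C − x); the acetic-acid-like Ka and C values are illustrative assumptions, not taken from the article.

```python
import math
from scipy.optimize import brentq

def h_plus(Ka, C):
    """[H+] for a weak acid HA of analytical concentration C and
    equilibrium constant Ka, from the one-equation equilibrium
    Ka = x**2 / (C - x), solved by bracketing between 0 and C."""
    f = lambda x: x * x / (C - x) - Ka
    return brentq(f, 1e-15, C * (1 - 1e-12))

x = h_plus(Ka=1.8e-5, C=0.10)       # acetic-acid-like numbers
print(f"[H+] = {x:.4e} M, pH = {-math.log10(x):.2f}")   # pH ≈ 2.88
```

A spreadsheet template does the same thing with Goal Seek or an iterative cell formula; the root-finder simply makes the bracketing explicit.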

This thesis combines two ubiquitous trends in the VLSI design world--the move towards designing at higher levels of design abstraction, and the increasing importance of power consumption as a design metric. Power estimation and optimization tools are becoming an increasingly important part of design flows, driven by a variety of requirements such as prolonging battery life in portable computing and communication devices, thermal considerations and system cooling and packaging costs, reliability issues (e.g. electromigration, ground bounce, and I-R drops in the power network), and environmental concerns. This thesis presents a suite of techniques to automatically perform power analysis and optimization for designs at the architecture or register-transfer, and behavior or algorithm levels of the design hierarchy. High-level synthesis refers to the process of synthesizing, from an abstract behavioral description, a register-transfer implementation that satisfies the desired constraints. High-level synthesis tools typically perform one or more of the following tasks: transformations, module selection, clock selection, scheduling, and resource allocation and assignment (also called resource sharing or hardware sharing). High-level synthesis techniques for minimizing the area, maximizing the performance, and enhancing the testability of the synthesized designs have been investigated. This thesis presents high-level synthesis techniques that minimize power consumption in the synthesized data paths. This thesis investigates the effects of resource sharing on the power consumption in the data path, provides techniques to efficiently estimate power consumption during resource sharing, and resource sharing algorithms to minimize power consumption. The RTL circuit that is obtained from the high-level synthesis process can be further optimized for power by applying power-reducing RTL transformations. This thesis presents macro-modeling and estimation techniques for switching
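One effect the thesis investigates — that resource sharing changes switching activity and hence dynamic power — can be illustrated with a toy model where dynamic power is proportional to the average Hamming distance between consecutive input words at a functional unit. The streams, bit-widths, and proportionality assumption below are illustrative, not the thesis's estimation models.

```python
import random

def avg_toggles(words):
    """Average number of bit toggles between consecutive input words;
    a toy proxy for switched capacitance at a shared functional unit."""
    return sum(bin(a ^ b).count('1') for a, b in zip(words, words[1:])) / (len(words) - 1)

random.seed(0)
slow = [random.randrange(16) for _ in range(1000)]    # low-entropy operand stream
noisy = [random.randrange(256) for _ in range(1000)]  # unrelated full-width stream

# Sharing one unit between both streams interleaves their operands, so the
# unit sees uncorrelated consecutive inputs and toggles more on average
# than the weighted mix of the two dedicated units would.
shared = [w for pair in zip(slow, noisy) for w in pair]

print(avg_toggles(slow), avg_toggles(noisy), avg_toggles(shared))
```

This is the intuition behind power-aware resource-sharing algorithms: assignments that keep correlated operand streams on the same unit reduce switching activity relative to naive area-driven sharing.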

Intelligent Math Tutor (IMT) is a software system that serves as a template for creating software for teaching mathematics. IMT can be easily connected to artificial-intelligence software and other analysis software through input and output of files. IMT provides an easy-to-use interface for generating courses that include tests containing both multiple-choice and fill-in-the-blank questions, and enables tracking of test scores. IMT makes it easy to generate software for Web-based courses or to manufacture compact disks containing executable course software. IMT can also function as a Web-based application program, with features that run quickly on the Web, while retaining the intelligence of a high-level language application program with many graphics. IMT can be used to write application programs in text, graphics, and/or sound, so that the programs can be tailored to the needs of most handicapped persons. The course software generated by IMT follows a "back to basics" approach of teaching mathematics by inducing the student to apply creative mathematical techniques in the process of learning. Students thereby discover mathematical fundamentals and come to understand mathematics more deeply than they could through simple memorization.

To date there is no generally accepted method to test the validity of algorithms used to compute likelihood ratios (LR) evaluating forensic DNA profiles from low-template and/or degraded samples. An upper bound on the LR is provided by the inverse of the match probability, which is the usual measure of weight of evidence for standard DNA profiles not subject to the stochastic effects that are the hallmark of low-template profiles. However, even for low-template profiles the LR in favour of a true prosecution hypothesis should approach this bound as the number of profiling replicates increases, provided that the queried contributor is the major contributor. Moreover, for sufficiently many replicates the standard LR for mixtures is often surpassed by the low-template LR. It follows that multiple LTDNA replicates can provide stronger evidence for a contributor to a mixture than a standard analysis of a good-quality profile. Here, we examine the performance of the likeLTD software for up to eight replicate profiling runs. We consider simulated and laboratory-generated replicates as well as resampling replicates from a real crime case. We show that LRs generated by likeLTD usually do exceed the mixture LR given sufficient replicates, are bounded above by the inverse match probability and do approach this bound closely when this is expected. We also show good performance of likeLTD even when a large majority of alleles are designated as uncertain, and suggest that there can be advantages to using different profiling sensitivities for different replicates. Overall, our results support both the validity of the underlying mathematical model and its correct implementation in the likeLTD software. PMID:25082140
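The validity criterion described above can be illustrated with a minimal numeric sketch. This is not the likeLTD model itself; the match probability value is hypothetical:

```python
def lr_upper_bound(match_probability: float) -> float:
    """Upper bound on the likelihood ratio (LR): the inverse match probability."""
    return 1.0 / match_probability

def lr_respects_bound(lr: float, match_probability: float) -> bool:
    """Validity check from the abstract: a reported LR for a true contributor
    should not exceed the inverse of the match probability."""
    return lr <= lr_upper_bound(match_probability)

# A hypothetical profile with match probability 1e-9 caps the LR near 1e9.
bound = lr_upper_bound(1e-9)           # ~1e9
ok = lr_respects_bound(5e8, 1e-9)      # below the bound: passes
too_big = lr_respects_bound(2e9, 1e-9) # exceeds the bound: fails
```

As replicates accumulate, the LR for the major contributor should approach (but never exceed) this bound, which is exactly the behavior the study verifies for likeLTD.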

The integration of high-quality multifunctional oxides on semiconductor devices requires single-crystal-like templates directly on silicon that match with thin film heterostructures. We report the fabrication of quasi-single-crystal (001) SrTiO3 templates on (001) Si by annealing MBE-grown epitaxial SrTiO3 films at 900 °C in oxygen. The FWHM of the (002) SrTiO3 rocking curve is less than 0.006°, much narrower than that of bulk SrTiO3 single crystals. An atomically smooth TiO2-terminated surface is obtained by buffered-HF etching, which opens the possibility of creating novel oxide heterointerfaces on a Si platform. Epitaxial SrRuO3 films grown on the etched template exhibit a superior crystalline quality to those grown on an untreated template, as well as an atomically smooth surface.

A novel methodology has been developed which makes possible a very fast running computational tool, capable of performing 30 to 50 years of simulation of the entire Savannah River Site (SRS) high-level waste complex in less than 2 minutes on a workstation. The methodology has been implemented in the Production Planning Model (ProdMod) simulation code, which uses Aspen Technology's dynamic simulation software development package SPEEDUP. ProdMod is a pseudo-dynamic simulation code based solely on algebraic equations, using no differential equations. The dynamic nature of the plant process is captured using linear constructs in which the time dependence is implicit. Another innovative approach implemented in ProdMod development is the mapping of event-space onto time-space and vice versa, which accelerates the computation without sacrificing the necessary details in the event-space. ProdMod uses this approach in coupling the time-space continuous simulation with the event-space batch simulation, avoiding the discontinuities inherent in dynamic simulation of batch processing. In addition, a general-purpose optimization scheme has been devised based on the pseudo-dynamic constructs and the event- and time-space algorithms of ProdMod. The optimization scheme couples a FORTRAN-based stand-alone optimization driver with the SPEEDUP-based ProdMod simulator to perform dynamic optimization. The scheme is capable of generating single or multiple optimal input conditions for different types of objective functions over single or multiple years of operations, depending on the nature of the objective function and operating constraints. The resultant optimal inputs are then interfaced with ProdMod to simulate the dynamic behavior of the waste processing operations. At the conclusion of an optimized advancement step, the simulation parameters are passed to the optimization driver to generate the next set of optimized parameters. An optimization algorithm using linear programming

Information about compressional and shear wave velocity, or their ratio, offers the possibility of interpreting data concerning lithology and pore content, as in AVO analysis. Our data set (dolomite and limestone samples from Austria) is presented in a rock physics template, which is used as a tool for seismic interpretation of lithology and pore fluid characterization. In the rock physics template, the ratio of compressional (vp) to shear wave (vs) velocity is plotted versus the acoustic impedance. Calculated model lines are also presented for describing and interpreting the measured data; the model by Kuster and Toksöz and the Hashin-Shtrikman bounds are used. Compressional and shear wave velocities of dry and brine-saturated samples are determined in the laboratory. The rock physics template displays a clear separation between the dry and saturated measured data. Additionally, the saturated data show a "shear weakening effect" that has been reported in the literature before by other researchers. The Kuster and Toksöz model is able to describe the measured data fairly well. However, different aspect ratios are needed to match the different vp and vs data for the different rock types, and this model type could not describe our data in the rock physics template. The calculated Hashin-Shtrikman bounds show good results for the saturated samples: the upper bound, and 75% and 50% of the upper bound, can describe the measured data and can be used for an interpretation. For the dry measured data the correlations did not work sufficiently well.
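The template's axes can be sketched minimally as follows; each sample plots at x = acoustic impedance and y = vp/vs (the sample values here are hypothetical, not from the Austrian data set):

```python
def rpt_point(vp: float, vs: float, density: float) -> tuple:
    """Coordinates of a sample in the rock physics template:
    (acoustic impedance = density * vp, velocity ratio = vp / vs)."""
    return density * vp, vp / vs

# Hypothetical dry dolomite sample: vp = 6000 m/s, vs = 3400 m/s, rho = 2700 kg/m^3
impedance, ratio = rpt_point(6000.0, 3400.0, 2700.0)
```

Fluid substitution shifts samples in this plane (saturation raises vp/vs while changing impedance), which is why dry and brine-saturated points separate in the template.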

Gravitational waves from coalescing compact binaries are one of the most promising sources for detectors such as LIGO, Virgo, and GEO600. If the components of the binary possess significant angular momentum (spin), as is likely to be the case if one component is a black hole, spin-induced precession of a binary's orbital plane causes modulation of the gravitational-wave amplitude and phase. If the templates used in a matched-filter search do not accurately model these effects then the sensitivity, and hence the detection rate, will be reduced. We investigate the ability of several search pipelines to detect gravitational waves from compact binaries with spin. We use the post-Newtonian approximation to model the inspiral phase of the signal and construct two new template banks using the phenomenological waveforms of Buonanno, Chen, and Vallisneri [A. Buonanno, Y. Chen, and M. Vallisneri, Phys. Rev. D 67, 104025 (2003)]. We compare the performance of these template banks to that of banks constructed using the stationary phase approximation to the nonspinning post-Newtonian inspiral waveform currently used by LIGO and Virgo in the search for compact binary coalescence. We find that, at the same false alarm rate, a search pipeline using phenomenological templates is no more effective than a pipeline which uses nonspinning templates. We recommend the continued use of the nonspinning stationary phase template bank until the false alarm rate associated with templates which include spin effects can be substantially reduced.

Most species of social insects have singly mated queens, although there are notable exceptions. Competing hypotheses have been proposed to explain the evolution of high levels of multiple mating, but this issue is far from resolved. Here we use microsatellites to investigate mating frequency in the army ant Eciton burchellii and show that queens mate with an exceptionally large number of males, eclipsing all but one other social insect species for which data are available. In addition we present evidence that suggests that mating is serial, continuing throughout the lifetime of the queen. This is the first demonstration of serial mating among the social Hymenoptera. We propose that high paternity within colonies is most likely to have evolved to increase genetic diversity and to counter high pathogen and parasite loads.

We report on the status of the ALICE project (Archival Legacy Investigation of Circumstellar Environments), which consists of a consistent reanalysis of the entire HST-NICMOS coronagraphic archive. Over the last two years, we have developed a sophisticated pipeline able to handle the data of the 400 stars of the archive. This pipeline builds on the Karhunen-Loeve Image Projection (KLIP) algorithm, and was completed in the fall of 2014. We discuss the first processing and analysis results of the overall reduction campaign. As we will deliver high-level science products to the STScI MAST archive, we are defining a new standard format for high-contrast science products, which will be compatible with every new high-contrast imaging instrument (GPI, SPHERE, P1640, CHARIS, etc.) and used by the JWST coronagraphs. We present here the specifications of this standard.

Setpoints for nuclear safety-related instrumentation are required for actions determined by the design authorization basis. Minimum requirements need to be established for assuring that setpoints are established and held within specified limits. This document establishes the controlling methodology for changing setpoints of all classifications. The instrumentation under consideration involves the transfer, storage, and volume reduction of radioactive liquid waste in the F- and H-Area High-Level Radioactive Waste Tank Farms. The setpoint document will encompass the PROCESS AREA listed in the Safety Analysis Report (SAR) (DPSTSA-200-10 Sup 18), which includes the diversion box HDB-8 facility. In addition to the PROCESS AREAS listed in the SAR, Building 299-H and the Effluent Transfer Facility (ETF) are also included in the scope.

This research evaluates the ability of OLI© equilibrium-based software to forecast Savannah River Site High-Level Waste system impacts from oxalic acid dissolution of Tank 1-15 sludge heels. Without further laboratory and field testing, only the use of oxalic acid can be considered plausible to support sludge heel dissolution on multiple tanks. Using OLI© and available test results, a dissolution model is constructed and validated. Material and energy balances, coupled with the model, identify potential safety concerns. Overpressurization and overheating are shown to be unlikely. Corrosion-induced hydrogen could, however, overwhelm the tank ventilation. While pH adjustment can restore the minimal hydrogen generation, the resultant precipitates will notably increase the sludge volume. OLI© is used to develop a flowsheet such that additional sludge vitrification canisters and other negative system impacts are minimized. Sensitivity analyses are used to assess the processability impacts from variations in the sludge and the quantities of acids.

The CMS High-Level Trigger (HLT) is a compute farm of more than 10,000 cores. During data taking this resource is heavily used and is an integral part of the experiment's triggering system. However, outside of data-taking periods this resource is largely unused. We describe why CMS wants to use the HLT as a cloud resource (outside of data-taking periods) and how this has been achieved. In doing this we have turned a single-use cluster into an agile resource for CMS production computing. While we are able to use the HLT as a production cloud resource, there is still considerable further work that CMS needs to carry out before this resource can be used with the desired agility. This report, therefore, represents a snapshot of this activity at the time of CHEP 2013.

The socioeconomic investigations of possible impacts of the proposed repository for high-level nuclear waste at Yucca Mountain, Nevada, have been unprecedented in several respects. They bear on the public decision that sooner or later will be made as to where and how to dispose permanently of the waste presently at military weapons installations and that continues to accumulate at nuclear power stations. No final decision has yet been made. There is no clear precedent from other countries. The organization of state and federal studies is unique. The state studies involve more disciplines than any previous efforts. They have been carried out in parallel to federal studies and have pioneered in defining some problems and appropriate research methods. A recent annotated bibliography provides interested scientists with a compact guide to the 178 published reports, as well as to relevant journal articles and related documents. PMID:7971963

Rule-based languages such as Kappa excel in their support for handling the combinatorial complexities prevalent in many biological systems, including signalling pathways. But Kappa provides little structure for organising rules, and large models can therefore be hard to read and maintain. This paper introduces a high-level, modular extension of Kappa called LBS-κ. We demonstrate the constructs of the language through examples and three case studies: a chemotaxis switch ring, a MAPK cascade, and an insulin signalling pathway. We then provide a formal definition of LBS-κ through an abstract syntax and a translation to plain Kappa. The translation is implemented in a compiler tool which is available as a web application. We finally demonstrate how to increase the expressivity of LBS-κ through embedded scripts in a general-purpose programming language, a technique which we view as generally applicable to other domain specific languages. PMID:26043208

The Defense Waste Processing Facility (DWPF) at the Savannah River Site vitrifies High-Level Waste (HLW) for repository interment. The process consists of three major steps: waste pretreatment, vitrification, and canister decontamination/sealing. The HLW consists of insoluble metal hydroxides (primarily iron, aluminum, magnesium, manganese, and uranium) and soluble sodium salts (carbonate, hydroxide, nitrite, nitrate, and sulfate). The HLW is processed in large batches through DWPF; DWPF has recently completed processing Sludge Batch 3 (SB3) and is currently processing Sludge Batch 4 (SB4). The composition of metal species in SB4 is shown in Table 1 as a function of the ratio of a metal to iron. Simulants omit the radioactive species and renormalize the remaining species. Supernate composition is shown in Table 2.

In studying the Recycler high-level RF, nonlinearities in magnitude and phase were found at 89 kHz, the lowest frequency required by the system. The visible evidence of this was that beam injected into a barrier bucket had a definite slope at the top. Using a network analyzer, the S-parameter S21 was measured for the overall system, and from mathematical modeling a transfer function with a second-order numerator and denominator was found. The inverse of this transfer function gives the linearization transfer function. The linearization transfer function was realized in hardware by summing a high-pass, band-pass, and low-pass filter together. The resulting magnitude and phase plots, along with the actual beam response, will be shown.

Remote operability and maintainability of vitrification equipment were assessed under shielded-cell conditions. The equipment tested will be applied to immobilize high-level and transuranic liquid waste slurries that resulted from plutonium production for defense weapons. Equipment tested included: a turntable for handling waste canisters under the melter; a removable discharge cone in the melter overflow section; a thermocouple jumper that extends into a shielded cell; remote instrument and electrical connectors; remote, mechanical, and heat transfer aspects of the melter glass overflow section; a reamer to clean out plugged nozzles in the melter top; a closed circuit camera to view the melter interior; and a device to retrieve samples of the glass product. A test was also conducted to evaluate liquid metals for use in a liquid metal sealing system.

This paper discusses impairments of high-level, complex language production in Parkinson's disease (PD), defined as sentence and discourse production, and situates these impairments within the framework of current psycholinguistic theories of language production. The paper comprises three major sections, an overview of the effects of PD on the brain and cognition, a review of the literature on language production in PD, and a discussion of the stages of the language production process that are impaired in PD. Overall, the literature converges on a few common characteristics of language production in PD: reduced information content, impaired grammaticality, disrupted fluency, and reduced syntactic complexity. Many studies also document the strong impact of differences in cognitive ability on language production. Based on the data, PD affects all stages of language production including conceptualization and functional and positional processing. Furthermore, impairments at all stages appear to be exacerbated by impairments in cognitive abilities. PMID:21860777

A review of the data collected during ultrasonic inspection of the Type I high-level waste tanks has been completed. The data were analyzed for relevance to the possibility of vapor space corrosion and liquid/air interface corrosion. The review of the Type I tank UT inspection data has confirmed that vapor space general corrosion is not an unusually aggressive phenomenon and correlates well with predicted corrosion rates for steel exposed to bulk solution. The corrosion rates are seen to decrease with time, as expected. The review of the temperature data did not reveal any obvious correlations between high temperatures and the occurrence of leaks. The complex nature of the temperature-humidity interaction, particularly with respect to vapor corrosion, requires further understanding before any correlation can be inferred. The review of the waste level data also did not reveal any obvious correlations.

The Tank Waste Remediation System (TWRS) Storage and Disposal Project has established the Immobilized High-Level Waste (IHLW) Storage Sub-Project to provide the capability to store Phase I and II HLW products generated by private vendors. A design/construction project, Project W-464, was established under the Sub-Project to provide the Phase I capability. Project W-464 will retrofit the Hanford Site Canister Storage Building (CSB) to accommodate the Phase I IHLW products. Project W-464 conceptual design is currently being performed to interim-store 3.0 m-long HLW stainless steel canisters with a 0.61 m diameter; DOE is considering using a 4.5 m canister of the same diameter to reduce permanent disposal costs. This study was performed to assess the impact of replacing the 3.0 m canister with the 4.5 m canister. The summary cost and schedule impacts are described.

For more than half a century, the Council of State Governments has served as a common ground for the states of the nation. The Council is a nonprofit, state-supported and -directed service organization that provides research and resources, identifies trends, supplies answers and creates a network for legislative, executive and judicial branch representatives. This List of Available Resources was prepared with the support of the US Department of Energy, Cooperative Agreement No. DE-FC02-89CH10402. However, any opinions, findings, conclusions, or recommendations expressed herein are those of the author(s) and do not necessarily reflect the views of DOE. The purpose of the agreement, and reports issued pursuant to it, is to identify and analyze regional issues pertaining to the transportation of high-level radioactive waste and to inform Midwestern state officials with respect to technical issues and regulatory concerns related to waste transportation.

ALPHN calculates the (alpha,n) neutron production rate of a canister of vitrified high-level waste. The user supplies the chemical composition of the glass or glass-ceramic and the curies of the alpha-emitting actinides present. The output of the program gives the (alpha,n) neutron production of each actinide in neutrons per second and the total for the canister. The (alpha,n) neutron production rates are source terms only; that is, they are production rates within the glass and do not take into account the shielding effect of the glass. For a given glass composition, the user can calculate up to eight cases simultaneously; these cases are based on the same glass composition but contain different quantities of actinides per canister.
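The bookkeeping described above can be sketched minimally as follows. The actinide names and rates are hypothetical placeholders; real (alpha,n) yields depend on the supplied glass composition, which ALPHN accounts for and this sketch does not:

```python
def canister_alpha_n_total(per_actinide_rates: dict) -> float:
    """Total (alpha,n) neutron source term for one canister, in neutrons/s.
    These are production rates within the glass; no shielding is applied."""
    return sum(per_actinide_rates.values())

# Hypothetical per-actinide production rates (neutrons/s)
rates = {"Pu-238": 1.2e5, "Am-241": 3.4e4, "Cm-244": 8.0e3}
total = canister_alpha_n_total(rates)   # 1.62e5 n/s
```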

Situation modeling and threat prediction require higher levels of data fusion in order to provide actionable information. Beyond the sensor data and sources the analyst has access to, the use of out-sourced and re-sourced data is becoming common. Through the years, some common frameworks have emerged for dealing with information fusion, perhaps the most ubiquitous being the JDL Data Fusion Group and their initial 4-level data fusion model. Since these initial developments, numerous models of information fusion have emerged, hoping to better capture the human-centric process of data analysis within a machine-centric framework. 21st Century Systems, Inc. has developed Fusion with Uncertainty Reasoning using Nested Assessment Characterizer Elements (FURNACE) to address challenges of high-level information fusion and handle bias, ambiguity, and uncertainty (BAU) for situation modeling, threat modeling, and threat prediction. It combines JDL fusion levels with nested fusion loops and state-of-the-art data reasoning. Initial research has shown that FURNACE is able to reduce BAU and improve the fusion process by allowing high-level information fusion (HLIF) to affect lower levels without double counting of information or other biasing issues. The initial FURNACE project was focused on the underlying algorithms to produce a fusion system able to handle BAU and repurposed data in a cohesive manner. FURNACE supports analysts' efforts to develop situation models, threat models, and threat predictions to increase situational awareness of the battlespace. FURNACE will not only revolutionize the military intelligence realm, but also benefit the larger homeland defense, law enforcement, and business intelligence markets.

Hepatitis B virus (HBV) transgenic mice whose hepatocytes replicate the virus at levels comparable to that in the infected livers of patients with chronic hepatitis have been produced, without any evidence of cytopathology. High-level viral gene expression was obtained in the liver and kidney tissues in three independent lineages. These animals were produced with a terminally redundant viral DNA construct (HBV 1.3) that starts just upstream of HBV enhancer I, extends completely around the circular viral genome, and ends just downstream of the unique polyadenylation site in HBV. In these animals, the viral mRNA is more abundant in centrilobular hepatocytes than elsewhere in the hepatic lobule. High-level viral DNA replication occurs inside viral nucleocapsid particles that preferentially form in the cytoplasm of these centrilobular hepatocytes, suggesting that an expression threshold must be reached for nucleocapsid assembly and viral replication to occur. Despite the restricted distribution of the viral replication machinery in centrilobular cytoplasmic nucleocapsids, nucleocapsid particles are detectable in the vast majority of hepatocyte nuclei throughout the hepatic lobule. The intranuclear nucleocapsid particles are empty, however, suggesting that viral nucleocapsid particle assembly occurs independently in the nucleus and the cytoplasm of the hepatocyte and implying that cytoplasmic nucleocapsid particles do not transport the viral genome across the nuclear membrane into the nucleus during the viral life cycle. This model creates the opportunity to examine the influence of viral and host factors on HBV pathogenesis and replication and to assess the antiviral potential of pharmacological agents and physiological processes, including the immune response. PMID:7666518

Eleven major Department of Energy (DOE) site contractors were chartered by the Assistant Secretary to use a systems engineering approach to develop and evaluate technically defensible cost savings opportunities across the complex. Known as the complex-wide Environmental Management Integration (EMI), this process evaluated all the major DOE waste streams, including high-level waste (HLW). Across the DOE complex, this waste stream has the highest life cycle cost and is scheduled to take until at least 2035 before all HLW is processed for disposal. Technical contract experts from the four DOE sites that manage high-level waste participated in the integration analysis: Hanford, Savannah River Site (SRS), Idaho National Engineering and Environmental Laboratory (INEEL), and West Valley Demonstration Project (WVDP). In addition, subject matter experts from the Yucca Mountain Project and the Tanks Focus Area participated in the analysis. Also, departmental representatives from the US Department of Energy Headquarters (DOE-HQ) monitored the analysis and results. Workouts were held throughout the year to develop recommendations to achieve a complex-wide integrated program. From this effort, the HLW Environmental Management (EM) Team identified a set of programmatic and technical opportunities that could result in potential cost savings and avoidance in excess of $18 billion and an accelerated completion of the HLW mission by seven years. The cost savings, schedule improvements, and volume reduction are attributed to a multifaceted HLW treatment and disposal strategy which involves waste pretreatment, standardized waste matrices, risk-based retrieval, early development and deployment of a shipping system for glass canisters, and reasonable, low cost tank closure.

Molecular chemistry contains many difficult optimization problems that have begun to attract the attention of optimizers in the Operations Research community. Problems including protein folding, molecular conformation, molecular similarity, and molecular matching have been addressed. Minimum energy conformations for simple molecular structures such as water clusters, Lennard-Jones microclusters, and short polypeptides have dominated the literature to date. However, a variety of interesting problems exist and we focus here on a molecular structure matching (MSM) problem.

This paper introduces a statistical method to decide whether two blocks in a pair of images match reliably. The method ensures that the selected block matches are unlikely to have occurred "just by chance." The new approach is based on the definition of a simple but faithful statistical background model for image blocks learned from the image itself. A theorem guarantees that under this model, not more than a fixed number of wrong matches occurs (on average) for the whole image. This fixed number (the number of false alarms) is the only method parameter. Furthermore, the number of false alarms associated with each match measures its reliability. This a contrario block-matching method, however, cannot rule out false matches due to the presence of periodic objects in the images. But it is successfully complemented by a parameterless self-similarity threshold. Experimental evidence shows that the proposed method also detects occlusions and incoherent motions due to vehicles and pedestrians in nonsimultaneous stereo. PMID:22442122
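The decision rule can be sketched as follows. This is a simplified reading of the a contrario framework; the chance probability here stands in for the block-background model the paper learns from the image itself:

```python
def nfa(num_tested_pairs: int, p_chance: float) -> float:
    """Number of false alarms: expected count of chance matches over all
    tested block pairs under the background model."""
    return num_tested_pairs * p_chance

def match_is_meaningful(num_tested_pairs: int, p_chance: float,
                        epsilon: float = 1.0) -> bool:
    """Accept a match only if its NFA is at most epsilon, the method's single
    parameter, so on average at most epsilon wrong matches occur per image."""
    return nfa(num_tested_pairs, p_chance) <= epsilon

# With 1e6 tested pairs, a similarity with chance probability 1e-8 gives
# NFA ~ 0.01 and is kept; chance probability 1e-4 gives NFA ~ 100 and is rejected.
kept = match_is_meaningful(10**6, 1e-8)      # True
rejected = match_is_meaningful(10**6, 1e-4)  # False
```

Because the NFA of each accepted match is itself a reliability score, lower values mean matches even less likely to have occurred "just by chance."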

For rotating machines, localized faults of key components generally appear as periodic transient impulses in vibration signals. In practice, background noise corrupts these transient impulses and makes specific faults harder to identify. This paper combines the concepts of time-frequency manifold (TFM) and image template matching, and proposes a novel TFM correlation matching method to enhance identification of periodic faults. The method conducts correlation matching of a vibration signal in the time-frequency domain, using a TFM of short duration as the template. First, the time-frequency distribution (TFD) of a vibration signal is obtained by the Smoothed Pseudo-Wigner-Ville distribution (SPWVD) method. Then the TFM template is learned and correlation-matched against the TFD of the analyzed signal. Finally, the ridge is extracted from the correlation matching image and the ridge coefficients are analyzed for periodic fault identification. The proposed method takes advantage of the TFM's noise suppression and template matching's object enhancement, and can enhance the fault impulses of interest on a unified scale. The novel method is verified to be superior to the traditional enveloping method, providing a smoother and clearer fault impulse component in applications to gearbox fault detection and bearing defect identification.
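
The correlation-matching step can be sketched independently of the SPWVD computation. The sketch below assumes the TFD is already available as a plain matrix (rows = frequency bins, columns = time samples); the zero-mean normalized correlation is one plausible choice, not necessarily the paper's exact formulation:

```python
def tf_correlation(tfd, template):
    """Slide a short time-frequency template along the time axis of a
    TFD (list of rows, one per frequency bin) and return the zero-mean
    normalized correlation at each time lag. Periodic fault impulses
    appear as periodic peaks in the returned trace."""
    def stats(block):
        vals = [v for row in block for v in row]
        n = len(vals)
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        return mean, var ** 0.5

    nf, nt = len(tfd), len(tfd[0])
    tw = len(template[0])
    tm, ts = stats(template)
    scores = []
    for i in range(nt - tw + 1):
        window = [row[i:i + tw] for row in tfd]
        wm, ws = stats(window)
        if ts == 0 or ws == 0:
            scores.append(0.0)  # flat window: no correlation defined
            continue
        num = sum((tfd[f][i + j] - wm) * (template[f][j] - tm)
                  for f in range(nf) for j in range(tw))
        scores.append(num / (nf * tw * ts * ws))
    return scores
```

The spacing between successive correlation peaks then gives the fault period to be compared against the characteristic frequencies of the gearbox or bearing.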

A novel scheme for securing biometric templates of variable size and order is proposed. The proposed scheme is based on a new similarity measure approach, namely the set intersection, which strongly resembles the methodology used in most of the current state-of-the-art biometrics matching systems. The applicability of the new scheme is compared with that of the existing principal schemes, and it is shown that the new scheme has definite advantages over the existing approaches. The proposed scheme is analyzed both in terms of security and performance.
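
A set-intersection similarity over unordered, variable-size templates might look like the following sketch; normalizing by the smaller set is one plausible convention, not necessarily the paper's:

```python
def set_similarity(a, b):
    """Set-intersection similarity for variable-size, unordered
    biometric templates: the fraction of the smaller template's
    elements that also appear in the other template
    (1.0 = full containment, 0.0 = disjoint)."""
    a, b = set(a), set(b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))
```

Because the measure depends only on membership, it tolerates templates of different size and ordering, which is what makes it resemble current minutiae-style matching systems.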

We have developed a process for fabricating nanoscale wires using DNA templates. The templates were subsequently decorated with gold nanoparticles to make metallic wires. We have successfully deposited linear, straight sections of random (λ-phage) and regular-repeat sequences of DNA, of various lengths, on oxidized silicon substrates. We have also successfully deposited thiolated DNA on gold electrodes, allowing the DNA to electrically bridge gaps between electrode pairs. Electrode gaps ranged from 50 nm to 300 nm, fabricated using electron beam lithography. We decorated the DNA with gold nanoparticles with diameters in the range of 1-13 nm, and have used the nanoparticles as nucleation sites for the growth of continuous gold wires. We have performed AFM characterization of all surfaces and structures. In addition, we have performed current-voltage measurements on the undecorated DNA, the nanoparticle-decorated DNA, and the gold nanowires.

Template-based searches for gravitational waves are often limited by the computational cost associated with searching large parameter spaces. The study of efficient template banks, in the sense of using the smallest number of templates, is therefore of great practical interest. The traditional approach to template-bank construction requires every point in parameter space to be covered by at least one template, which rapidly becomes inefficient at higher dimensions. Here we study an alternative approach, where any point in parameter space is covered only with a given probability η < 1. We find that by giving up complete coverage in this way, large reductions in the number of templates are possible, especially at higher dimensions. The prime examples studied here are random template banks in which templates are placed randomly with uniform probability over the parameter space. In addition to its obvious simplicity, this method turns out to be surprisingly efficient. We analyze the statistical properties of such random template banks, and compare their efficiency to traditional lattice coverings. We further study relaxed lattice coverings (using Z_n and A_n* lattices), which similarly cover any signal location only with probability η. The relaxed A_n* lattice is found to yield the most efficient template banks at low dimensions (n ≲ 10), while random template banks increasingly outperform any other method at higher dimensions.
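
The basic coverage argument behind random banks can be made concrete: if one template covers a fraction f of the parameter-space volume, then N independently and uniformly placed templates cover any given point with probability 1 - (1-f)^N. A sketch of the implied bank size (ignoring the metric and boundary effects the paper treats carefully):

```python
import math

def random_bank_size(eta, f):
    """Number N of uniformly placed random templates needed so that an
    arbitrary parameter-space point is covered with probability eta,
    when a single template covers a fraction f of the space:
    solve 1 - (1 - f)**N >= eta for the smallest integer N."""
    return math.ceil(math.log(1.0 - eta) / math.log(1.0 - f))
```

Because N grows only logarithmically in 1/(1-eta), relaxing complete coverage to, say, 90% yields large savings, and the advantage over strict lattice coverings widens with dimension.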

In this paper, a novel type of colloidal template with broken symmetry was generated using commercial inductively coupled plasma reactive ion etching (ICP-RIE). With proper but simple treatment, the traditional symmetric non-close-packed colloidal template evolves into an elliptical profile with high uniformity. This unique feature can add flexibility to colloidal lithography and/or other lithography techniques that use colloidal particles as building blocks to fabricate nano-/micro-structures with broken symmetry. Beyond that, the novel colloidal template possesses on-site tunability, i.e. it can be transformed from a symmetric into an asymmetric template. Sandwich-type particles with eccentric features were fabricated utilizing this tunable template. This distinguishing feature makes it possible to fabricate structures with unique asymmetric features using one set of colloidal templates, providing flexibility and broad tunability for nano-/micro-structure fabrication with colloidal templates.

Metallic nanodisks and a method of making them. The metallic nanodisks are wheel-shaped structures that provide large surface areas for catalytic applications. The metallic nanodisks are grown within bicelles (disk-like micelles) that template the growth of the metal in the form of approximately circular dendritic sheets. The zero-valent metal forming the nanodisks is formed by reduction of a metal ion using a suitable electron donor species.

The Defense High-Level Waste Disposal Container System supports the confinement and isolation of waste within the Engineered Barrier System of the Monitored Geologic Repository (MGR). Disposal containers are loaded and sealed in the surface waste handling facilities, transferred to the underground through the accesses using a rail mounted transporter, and emplaced in emplacement drifts. The defense high-level waste (HLW) disposal container provides long-term confinement of the commercial HLW and defense HLW (including immobilized plutonium waste forms (IPWF)) placed within disposable canisters, and withstands the loading, transfer, emplacement, and retrieval loads and environments. U.S. Department of Energy (DOE)-owned spent nuclear fuel (SNF) in disposable canisters may also be placed in a defense HLW disposal container along with commercial HLW waste forms, which is known as 'co-disposal'. The Defense High-Level Waste Disposal Container System provides containment of waste for a designated period of time, and limits radionuclide release. The disposal container/waste package maintains the waste in a designated configuration, withstands maximum handling and rockfall loads, limits the individual canister temperatures after emplacement, resists corrosion in the expected handling and repository environments, and provides containment of waste in the event of an accident. Defense HLW disposal containers for HLW disposal will hold up to five HLW canisters. Defense HLW disposal containers for co-disposal will hold up to five HLW canisters arranged in a ring and one DOE SNF canister in the ring. Defense HLW disposal containers also will hold two Multi-Canister Overpacks (MCOs) and two HLW canisters in one disposal container. The disposal container will include outer and inner cylinders, outer and inner cylinder lids, and may include a canister guide. An exterior label will provide a means by which to identify the disposal container and its contents. Different materials

Coping with nonlinear distortions in fingerprint matching is a challenging task. This paper proposes a novel algorithm, normalized fuzzy similarity measure (NFSM), to deal with the nonlinear distortions. The proposed algorithm has two main steps. First, the template and input fingerprints were aligned. In this process, the local topological structure matching was introduced to improve the robustness of global alignment. Second, the method NFSM was introduced to compute the similarity between the template and input fingerprints. The proposed algorithm was evaluated on fingerprints databases of FVC2004. Experimental results confirm that NFSM is a reliable and effective algorithm for fingerprint matching with nonlinear distortions. The algorithm gives considerably higher matching scores compared to conventional matching algorithms for the deformed fingerprints.

Reconstruction of a non-united scaphoid with a humpback deformity involves resection of the non-union followed by bone grafting and fixation of the fragments. Intraoperative control of the reconstruction is difficult owing to the complex three-dimensional shape of the scaphoid and the other carpal bones overlying the scaphoid on lateral radiographs. We developed a titanium template that fits exactly to the surfaces of the proximal and distal scaphoid poles to define their position relative to each other after resection of the non-union. The templates were designed on three-dimensional computed tomography reconstructions and manufactured using selective laser melting technology. Ten conserved human wrists were used to simulate the reconstruction. The achieved precision measured as the deviation of the surface of the reconstructed scaphoid from its virtual counterpart was good in five cases (maximal difference 1.5 mm), moderate in one case (maximal difference 3 mm) and inadequate in four cases (difference more than 3 mm). The main problems were attributed to the template design and can be avoided by improved pre-operative planning, as shown in a clinical case.

The Little Template Library is an expression-templates-based C++ library for array processing, image processing, FITS and ASCII I/O, and linear algebra. It is released under the GNU General Public License (GPL). Although the library is developed with application to astronomical image and data processing in mind, it is by no means restricted to these fields of application. In fact, it qualifies as a fully general array processing package. The focus is on a high level of abstraction in the handling of expressions involving arrays or parts thereof and linear-algebra-related operations, without the negative impact on performance this usually entails. The price to pay is dependence on a compiler implementing enough of the current ANSI C++ specification, as well as significantly higher demands on resources at compile time. The LTL provides dynamic arrays of up to 5 dimensions, sub-arrays and slicing, support for fixed size vectors and matrices including basic linear algebra operations, expression-templates-based evaluation, and I/O facilities for columnar ASCII and FITS format files. In addition it supplies utility classes for statistics, linear and non-linear least squares fitting, and command line and configuration file parsing. YODA (Drory 2002) and all elements of the WeCAPP reduction pipeline (Riffeser et al. 2001, Gössl & Riffeser 2002, 2003) were implemented using the LTL.

We propose a novel linearly augmented tree method for efficient scale and rotation invariant object matching. The proposed method enforces pairwise matching consistency defined on trees, and high-order constraints on all the sites of a template. The pairwise constraints admit arbitrary metrics while the high-order constraints use L1 norms and therefore can be linearized. Such a linearly augmented tree formulation introduces hyperedges and loops into the basic tree structure. But, unlike a general loopy graph, its special structure allows us to relax and decompose the optimization into a sequence of tree matching problems that are efficiently solvable by dynamic programming. The proposed method also works on continuous scale and rotation parameters; we can match at scales up to any large value with the same efficiency. Our experiments on ground truth data and a variety of real images and videos show that the proposed method is efficient, accurate and reliable.
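
The tree-structured core of such a matcher is ordinary dynamic programming on a rooted tree: each template site is assigned an image location so that unary appearance costs plus pairwise costs along tree edges are minimized. This is a toy sketch without the paper's high-order L1 constraints or continuous scale/rotation handling; the data layouts are invented for illustration:

```python
def tree_match(tree, unary, pairwise):
    """Min-cost labeling of template sites (nodes of a tree rooted at
    node 0) with candidate image locations (labels). Each node's cost
    table is its unary cost plus, for every child, the cheapest child
    label given the parent's label (pairwise consistency on edges).
    tree: {node: [children]}; unary: {node: [cost per label]};
    pairwise: {(parent, child): matrix[parent_label][child_label]}."""
    def solve(node):
        cost = list(unary[node])
        for child in tree.get(node, []):
            child_cost = solve(child)
            for pl in range(len(cost)):
                cost[pl] += min(child_cost[cl] + pairwise[(node, child)][pl][cl]
                                for cl in range(len(child_cost)))
        return cost
    return min(solve(0))
```

Each node is visited once and each edge contributes O(labels^2) work, which is the efficiency that makes the decomposition into tree subproblems attractive.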

This document presents the results of the Spent Nuclear Fuel Project (SNFP) Information Management Planning Project (IMPP), a short-term project that identified information management (IM) issues and opportunities within the SNFP and outlined a high-level plan to address them. This high-level plan for the SNFP IM focuses on specific examples from within the SNFP. The plan's recommendations can be characterized in several ways. Some recommendations address specific challenges that the SNFP faces. Others form the basis for making smooth transitions in several important IM areas. Still others identify areas where further study and planning are indicated. The team's knowledge of developments in the IM industry and at the Hanford Site were crucial in deciding where to recommend that the SNFP act and where they should wait for Site plans to be made. Because of the fast pace of the SNFP and demands on SNFP staff, input and interaction were primarily between the IMPP team and members of the SNFP Information Management Steering Committee (IMSC). Key input to the IMPP came from a workshop where IMSC members and their delegates developed a set of draft IM principles. These principles, described in Section 2, became the foundation for the recommendations found in the transition plan outlined in Section 5. Availability of SNFP staff was limited, so project documents were used as a basis for much of the work. The team, realizing that the status of the project and the environment are continually changing, tried to keep abreast of major developments since those documents were generated. To the extent possible, the information contained in this document is current as of the end of fiscal year (FY) 1995. Programs and organizations on the Hanford Site as a whole are trying to maximize their return on IM investments. They are coordinating IM activities and trying to leverage existing capabilities. However, the SNFP cannot just rely on Sitewide activities to meet its IM requirements

The Department of Energy has selected immobilization for disposal in a repository as one approach for disposing of excess plutonium (1). Materials for immobilizing weapons-grade plutonium for repository disposal must meet the ''spent fuel standard'' by providing a radiation field similar to spent fuel (2). Such a radiation field can be provided by incorporating fission products from high-level waste into the waste form. Experiments were performed to evaluate the feasibility of incorporating high-level waste (HLW) stored at the Idaho Chemical Processing Plant (ICPP) into plutonium dispositioning materials to meet the spent fuel standard. A variety of materials and preparation techniques were evaluated based on prior experience developing waste forms for immobilizing HLW. These included crystalline ceramic compositions prepared by conventional sintering and hot isostatic pressing (HIP), and glass formulations prepared by conventional melting. Because plutonium solubility in silicate melts is limited, glass formulations were intentionally devitrified to partition plutonium into crystalline host phases, thereby allowing increased overall plutonium loading. Samarium, added as a representative rare earth neutron absorber, also tended to partition into the plutonium host phases. Because the crystalline plutonium host phases are chemically more inert, the plutonium is more effectively isolated from the environment, and its attractiveness for proliferation is reduced. In the initial phase of evaluating each material and preparation method, cerium was used as a surrogate for plutonium. For promising materials, additional preparation experiments were performed using plutonium to verify the behavior of cerium as a surrogate. These experiments demonstrated that cerium performed well as a surrogate for plutonium. For the most part, cerium and plutonium partitioned onto the same crystalline phases, and no anomalous changes in oxidation state were observed. The only observed

In 1997, the first two United States Department of Energy (US DOE) high-level waste tanks (Tanks 17-F and 20-F: Type IV, single shell tanks) were taken out of service (permanently closed) at the Savannah River Site (SRS). In 2012, the DOE plans to remove from service two additional SRS Type IV high-level waste tanks, Tanks 18-F and 19-F. These tanks were constructed in the late 1950's, received low-heat waste, and do not contain cooling coils. Operational closure of Tanks 18-F and 19-F is intended to be consistent with the applicable requirements of the Resource Conservation and Recovery Act (RCRA) and the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and will be performed in accordance with South Carolina Department of Health and Environmental Control (SCDHEC) requirements. The closure will physically stabilize two 4.92E+03 cubic meter (1.3E+06 gallon) carbon steel tanks and isolate and stabilize any residual contaminants left in the tanks. The closure will also fill, physically stabilize, and isolate ancillary equipment abandoned in the tanks. A Performance Assessment (PA) has been developed to assess the long-term fate and transport of residual contamination in the environment resulting from the operational closure of the F-Area Tank Farm (FTF) waste tanks. Next-generation flowable, zero-bleed cementitious grouts were designed, tested, and specified for closing Tanks 18-F and 19-F and for filling the abandoned equipment. Fill requirements were developed for both the tank and equipment grouts. All grout formulations were required to be alkaline, with a pH of 12.4, and to have a chemical reduction potential (Eh) of -200 to -400 mV to stabilize selected potential contaminants of concern. This was achieved by including Portland cement and Grade 100 slag in the mixes, respectively. Ingredients and proportions of cementitious reagents were selected and adjusted to support the mass placement strategy developed by closure

Matched filtering is used to search for gravitational waves emitted by inspiralling compact binaries in data from the ground-based interferometers. One of the key aspects of the detection process is the design of a template bank that covers the astrophysically pertinent parameter space. In an earlier paper, we described a template bank that is based on a square lattice. Although robust, we showed that the square placement is overefficient, with the implication that it is computationally more demanding than required. In this paper, we present a template bank based on a hexagonal lattice, whose size is reduced by 40% with respect to the proposed square placement. We describe the practical aspects of the hexagonal template bank implementation, its size, and computational cost. We have also performed exhaustive simulations to characterize its efficiency and safety. We show that the bank is adequate to search for a wide variety of binary systems (primordial black holes, neutron stars, and stellar-mass black holes) and in data from both current detectors (initial LIGO, Virgo and GEO600) as well as future detectors (advanced LIGO and EGO). Remarkably, although our template bank placement uses a metric arising from a particular template family, namely, stationary phase approximation, we show that it can be used successfully with other template families (e.g., Padé resummation and effective one-body approximation). This quality of being effective for different template families makes the proposed bank suitable for a search that would use several of them in parallel (e.g., in a binary black hole search). The hexagonal template bank described in this paper is currently used to search for nonspinning inspiralling compact binaries in data from the Laser Interferometer Gravitational-Wave Observatory (LIGO).
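
The density advantage of hexagonal over square placement can be computed directly for an ideal 2D covering. Note that this ideal comparison yields roughly a 23% saving; the 40% figure above is measured against the earlier square placement, which the authors showed to be overefficient. A sketch with hypothetical function names:

```python
import math

def templates_per_area(lattice, mismatch_radius):
    """Template density (templates per unit parameter-space area) for a
    2D bank in which every point lies within mismatch_radius of some
    template (i.e. the lattice's covering radius equals that radius)."""
    R = mismatch_radius
    if lattice == "square":
        # spacing R*sqrt(2): the cell corner is exactly R away
        return 1.0 / (2.0 * R ** 2)
    if lattice == "hexagonal":
        # spacing R*sqrt(3): hexagonal cell area is 3*sqrt(3)/2 * R^2
        return 2.0 / (3.0 * math.sqrt(3.0) * R ** 2)
    raise ValueError("unknown lattice: %s" % lattice)
```

The ratio of the two densities is 4/(3*sqrt(3)) ≈ 0.77, i.e. the hexagonal covering needs about 23% fewer templates per unit area at equal maximal mismatch.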

As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometric systems against attacks such as identity theft and cross-matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding and biometrics. The key component of the algorithm is to convert biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on the statistical character of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance that is similar to the original 3D face templates. A high security level is achieved with reasonable false acceptance and false rejection rates of the system, based on an efficient statistical analysis. The algorithm estimates the statistical character of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, our tests show only a small difference in robustness and discriminative power between classification results obtained under the assumption of uniformly distributed templates and those obtained under the assumption of Gaussian-distributed templates.
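
The binarization-plus-error-tolerance idea can be sketched in a few lines. Thresholding each component at its population median is one way to obtain roughly uniform bits, and the Hamming threshold below stands in for the error-correcting code's correction capacity in a fuzzy-commitment-style scheme; both helpers are hypothetical simplifications of the paper's method:

```python
def binarize(template, thresholds):
    """Quantize a real-valued biometric template to a binary vector,
    one threshold per component (e.g. the population median, so each
    bit is close to uniformly distributed over the enrolled users)."""
    return [1 if v > t else 0 for v, t in zip(template, thresholds)]

def hamming_accept(bits_a, bits_b, max_errors):
    """Accept when the two binary templates differ in at most
    max_errors bits; in a fuzzy-commitment scheme this bound is the
    number of bit errors the error-correcting code can absorb."""
    return sum(a != b for a, b in zip(bits_a, bits_b)) <= max_errors
```

The trade-off the abstract describes then becomes explicit: a larger `max_errors` lowers false rejections for noisy probes but raises false acceptances, and non-uniform bits leak information about the underlying template.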

SAUSAGE (Still Another Utility for SAR Analysis that's General and Extensible) is a computer program for modeling (see figure) the performance of synthetic-aperture radar (SAR) or interferometric synthetic-aperture radar (InSAR or IFSAR) systems. The user is assumed to be familiar with the basic principles of SAR imaging and interferometry. Given design parameters (e.g., altitude, power, and bandwidth) that characterize a radar system, the software predicts various performance metrics (e.g., signal-to-noise ratio and resolution). SAUSAGE is intended to be a general software tool for quick, high-level evaluation of radar designs; it is not meant to capture all the subtleties, nuances, and particulars of specific systems. SAUSAGE was written to facilitate the exploration of engineering tradeoffs within the multidimensional space of design parameters. Typically, this space is examined through an iterative process of adjusting the values of the design parameters and examining the effects of the adjustments on the overall performance of the system at each iteration. The software is designed to be modular and extensible to enable consideration of a variety of operating modes and antenna beam patterns, including, for example, strip-map and spotlight SAR acquisitions, polarimetry, burst modes, and squinted geometries.

The engineering design of disposal of the high-level waste (HLW) packages in a geologic repository requires a thermal analysis to provide the temperature history of the packages. Calculated temperatures are used to demonstrate compliance with criteria for waste acceptance into the geologic disposal gallery system and as input to assess the transient thermal characteristics of the vitrified HLW package. The objective of the work was to evaluate the thermal performance of the supercontainer containing the vitrified HLW in a non-backfilled and unventilated underground disposal gallery. In order to achieve the objective, transient computational models for a geologic vitrified HLW package were developed by using a computational fluid dynamics method, and calculations for the HLW disposal gallery of the current Belgian geological repository reference design were performed. An initial two-dimensional model was used to conduct some parametric sensitivity studies to better understand the geologic system's thermal response. The effects of decay heat, number of co-disposed supercontainers, domain size, humidity, thermal conductivity and thermal emissivity were studied. Later, a more accurate three-dimensional model was developed by considering the conduction-convection cooling mechanism coupled with radiation, and the effect of the number of supercontainers (3, 4 and 8) was studied in more detail, as well as a bounding case with zero heat flux at both ends. The modeling methodology and results of the sensitivity studies will be presented.

The purpose of this calculation is to provide a dose consequence analysis of high-level waste (HLW) consisting of plutonium immobilized in vitrified HLW to be handled at the proposed Monitored Geologic Repository at Yucca Mountain for a beyond design basis event (BDBE) under expected conditions using best estimate values for each calculation parameter. In addition to the dose calculation, a plutonium respirable particle size for dose calculation use is derived. The current concept for this waste form is plutonium disks enclosed in cans immobilized in canisters of vitrified HLW (i.e., glass). The plutonium inventory at risk used for this calculation is selected from Plutonium Immobilization Project Input for Yucca Mountain Total Systems Performance Assessment (Shaw 1999). The BDBE examined in this calculation is a nonmechanistic initiating event and the sequence of events that follow to cause a radiological release. This analysis will provide the radiological releases and dose consequences for a postulated BDBE. Results may be considered in other analyses to determine or modify the safety classification and quality assurance level of repository structures, systems, and components. This calculation uses best available technical information because the BDBE frequency is very low (i.e., less than 1.0E-6 events/year) and is not required for License Application for the Monitored Geologic Repository. The results of this calculation will not be used as part of a licensing or design basis.

An evaluation of the optimal filtration conditions was performed based on test data obtained from filtration of a High-Level Waste sludge sample from the Hanford tank farms. This evaluation was performed using the anticipated configuration for the Waste Treatment Plant at the Hanford site. Testing was performed to identify the optimal pressure drop and cross flow velocity for filtration at both high and low solids loading. However, this analysis indicates that the actual filtration rate achieved is relatively insensitive to these conditions under anticipated operating conditions. The maximum filter flux was obtained by adjusting the system control valve pressure from 400 to 650 kPa while the filter feed concentration increased from 5 to 20 wt%. However, operating the system with a constant control valve pressure drop of 500 kPa resulted in a less than 1% reduction in the average filter flux. Also note that allowing the control valve pressure to swing as much as ±20% resulted in less than a 5% decrease in filter flux.

For solar photovoltaic (PV) and wind resources, the capacity factor is an important parameter describing the quality of the resource. As the share of variable renewable resources (such as PV and wind) on the electric system increases, so does curtailment (and the fraction of time in which it cannot be avoided). At high levels of renewable generation, curtailment effectively changes the practical measure of resource quality from the capacity factor to the incremental capacity factor. The latter accounts only for generation during hours of no curtailment and is directly connected with the marginal capital cost of renewable generators for a given level of renewable generation during the year. Western U.S. wind generation is analyzed hourly for a system with 75% of annual generation from wind, and it is found that the value for the system of resources with equal capacity factors can vary by a factor of 2, which highlights the importance of using the incremental capacity factor instead. Finally, the effect is expected to be more pronounced in smaller geographic areas (or when transmission limitations are imposed) and less pronounced at lower levels of renewable energy in the system with less curtailment.
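
The distinction between the two capacity factors is straightforward to compute from hourly data. A sketch with hypothetical inputs (normalized hourly output plus a per-hour curtailment flag):

```python
def capacity_factors(potential, curtailed):
    """Capacity factor vs incremental capacity factor from hourly data.
    potential: hourly output per MW of capacity (fraction of nameplate);
    curtailed: True in hours when the system curtails. The incremental
    CF counts only uncurtailed hours, since a marginal generator adds
    useful energy only when output is not being thrown away."""
    n = len(potential)
    cf = sum(potential) / n
    icf = sum(p for p, c in zip(potential, curtailed) if not c) / n
    return cf, icf
```

Two resources with identical capacity factors can thus have very different incremental capacity factors when one produces mostly during curtailed hours, which is the factor-of-two effect the analysis reports.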

The ability to effectively mix, sample, certify, and deliver consistent batches of High-Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. DOE's River Protection Project (RPP) mission modeling and WTP facility modeling assume that individual 3785 cubic meter (1 million gallon) HLW feed tanks are homogeneously mixed, representatively sampled, and consistently delivered to the WTP. It has been demonstrated that homogeneous mixing of HLW sludge in Hanford DSTs is not likely achievable with the baseline design, thereby making representative sampling and consistent feed delivery more difficult. Inconsistent feed to the WTP could cause additional batch-to-batch operational adjustments that reduce operating efficiency and have the potential to increase the overall mission length. The Hanford mixing and sampling demonstration program will identify DST mixing performance capability, will evaluate representative sampling techniques, and will estimate feed batch consistency. An evaluation of demonstration program results will identify potential mission improvement considerations that will help ensure successful mission completion. This paper will discuss the history, progress, and future activities that will define and mitigate the mission risk.

The crew pairing problem is an airline optimization problem where a set of least costly pairings (consecutive flights to be flown by a single crew) that covers every flight in a given flight network is sought. A pairing is defined by using a very complex set of feasibility rules imposed by international and national regulatory agencies, and also by the airline itself. The cost of a pairing is also defined by using complicated rules. When an optimization engine generates a sequence of flights from a given flight network, it has to check all these feasibility rules to ensure whether the sequence forms a valid pairing. Likewise, the engine needs to calculate the cost of the pairing by using certain rules. However, the rules used for checking feasibility and calculating costs are usually not static. Furthermore, the airline companies carry out what-if analyses by testing several alternative scenarios in each planning period. Therefore, embedding the implementation of feasibility checking and cost calculation rules into the source code of the optimization engine is not a practical approach. In this work, a high-level language called ARUS is introduced for describing the feasibility and cost calculation rules. A compiler for ARUS is also implemented in this work to generate a dynamic link library to be used by crew pairing optimization engines.
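
Keeping feasibility rules out of the engine source can be sketched as rules-as-data: the optimizer only ever calls a generic checker, while the rules themselves live outside it and can be swapped per scenario. The rule names and flight fields below are invented for illustration and are not ARUS syntax:

```python
def make_rule(name, predicate):
    """A feasibility rule is a named predicate over a pairing (here a
    list of flight dicts). Rules are data, not engine source code, so
    regulatory or what-if changes never require recompiling the engine."""
    return {"name": name, "check": predicate}

def pairing_feasible(pairing, rules):
    """A candidate flight sequence is a valid pairing only if every
    active rule accepts it."""
    return all(rule["check"](pairing) for rule in rules)
```

A scenario is then just a different rule list, e.g.:

```python
rules = [
    make_rule("max_legs", lambda p: len(p) <= 4),
    make_rule("max_block_hours", lambda p: sum(f["block_h"] for f in p) <= 8),
]
```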

This report discusses the Accelerator Transmutation of Waste (ATW) concept, which aims at the destruction of key long-lived radionuclides in high-level nuclear waste (HLW), both fission products and actinides. This focus makes it different from most other transmutation concepts, which concentrate primarily on actinide burning. The ATW system uses an accelerator-driven, sub-critical assembly to create an intense thermal neutron environment for radionuclide transmutation. This feature allows rapid transmutation under low-inventory system conditions, which, in turn, has a direct impact on the size of the chemical separations and materials handling components of the system. Inventories in ATW are factors of eight to thirty times smaller than in reactor systems of equivalent thermal power. Chemical separations systems are relatively small in scale and can be optimized to achieve high decontamination factors and minimized waste streams. The low-inventory feature also directly impacts the amounts of material remaining in the system at its end of life. In addition to its low-inventory operation, the accelerator-driven neutron source features of ATW are key to providing a sufficient level of neutrons to allow transmutation of long-lived fission products.

GRAVITY is the four-beam, near-infrared, AO-assisted, fringe tracking, astrometric and imaging instrument for the Very Large Telescope Interferometer (VLTI). It requires the development of one of the most complex instrument software systems ever built for an ESO instrument. Apart from its many interfaces and interdependencies, one of the most challenging aspects is the overall performance and stability of this complex system. The three infrared detectors and the fast reflective memory network (RMN) recorder contribute a total data rate of up to 20 MiB/s, accumulating to a maximum of 250 GiB of data per night. The detectors, the two instrument Local Control Units (LCUs), and the five LCUs running applications under the TAC (Tools for Advanced Control) architecture are interconnected with fast Ethernet, RMN fibers, and dedicated fiber connections, as well as signals for time synchronization. Here we give a simplified overview of all subsystems of GRAVITY and their interfaces and discuss two examples of high-level applications during observations: the acquisition procedure and the gathering and merging of data into the final FITS file.

The purpose of the research was to assess the anthropometric status of European high-level junior basketball players and to determine anthropometric differences between players playing in different game positions (guards, forwards, centers). The sample consisted of 132 young basketball players, participants of the European Junior Basketball Championship, Zadar, 2000. Participants were measured with 31 measures (anthropometric variables), on the basis of which two body composition measures (BMI and relative body fat) and somatotype were calculated. The basic statistical parameters were computed. Analysis of variance and canonical discriminant analysis were employed to determine the differences between positions in play. Results indicate that prominent longitudinal and transversal skeletal dimensions as well as circumference measures characterize players in the position of center, but these players do not have significantly larger skinfold measures relative to forwards. Centers are also predominantly ectomorphic compared with other players. Guards achieved significantly lower values in all measurement spaces and are predominantly mesomorphic. Further investigations are necessary in order to assess potential changes in the status of these parameters when the participants reach senior age, as well as to determine relations between anthropometric status and skill-related variables. PMID:12674837

Fluorescence labeling of bacterial pathogens has a broad range of interesting applications, including the observation of living bacteria within host cells. We constructed a novel vector based on the E. coli/streptococcal shuttle plasmid pAT28 that can propagate in numerous bacterial species from different genera. The plasmid harbors a promoterless copy of the green fluorescent protein variant gene egfp under the control of the CAMP-factor gene (cfb) promoter of Streptococcus agalactiae and was designated pBSU101. Upon transfer of the plasmid into streptococci, the bacteria show a distinct and easily detectable fluorescence using a standard fluorescence microscope, and quantification by FACS analysis demonstrated values 10–50 times higher than the respective controls. To assess the suitability of the construct for high-efficiency fluorescence labeling in different gram-positive pathogens, numerous species were transformed. We successfully labeled Streptococcus pyogenes, Streptococcus agalactiae, Streptococcus dysgalactiae subsp. equisimilis, Enterococcus faecalis, Enterococcus faecium, Streptococcus mutans, Streptococcus anginosus and Staphylococcus aureus strains utilizing the EGFP reporter plasmid pBSU101. In all of these species, the presence of the cfb promoter construct resulted in high-level EGFP expression that could be further increased by growing the streptococcal and enterococcal cultures under high oxygen conditions through continuous aeration. PMID:21731607

Chlorine radicals can function as a strong atmospheric oxidant, particularly in polar regions, where levels of hydroxyl radicals are low. In the atmosphere, chlorine radicals expedite the degradation of methane and tropospheric ozone, and the oxidation of mercury to more toxic forms. Here we present direct measurements of molecular chlorine levels in the Arctic marine boundary layer in Barrow, Alaska, collected in the spring of 2009 over a six-week period using chemical ionization mass spectrometry. We report high levels of molecular chlorine, of up to 400 pptv. Concentrations peaked in the early morning and late afternoon, and fell to near-zero levels at night. Average daytime molecular chlorine levels were correlated with ozone concentrations, suggesting that sunlight and ozone are required for molecular chlorine formation. Using a time-dependent box model, we estimate that the chlorine radicals produced from the photolysis of molecular chlorine oxidized more methane than hydroxyl radicals, on average, and enhanced the abundance of short-lived peroxy radicals. Elevated hydroperoxyl radical levels, in turn, promoted the formation of hypobromous acid, which catalyses mercury oxidation and the breakdown of tropospheric ozone. We therefore suggest that molecular chlorine exerts a significant effect on the atmospheric chemistry of the Arctic.

The ALICE High-Level Trigger (HLT) is an online reconstruction, triggering, and data compression system used in the ALICE experiment at CERN. Unique among the LHC experiments, it extensively uses modern coprocessor technologies such as general purpose graphics processing units (GPGPU) and field programmable gate arrays (FPGA) in the data flow. Real-time data compression is performed using a cluster finder algorithm implemented on FPGA boards. These data, instead of raw clusters, are used in the subsequent processing and storage, resulting in a compression factor of around 4. Track finding is performed using a cellular automaton and a Kalman filter algorithm on GPGPU hardware, where both CUDA and OpenCL technologies can be used interchangeably. The ALICE upgrade requires further development of online concepts to include detector calibration and stronger data compression. The current HLT farm will be used as a test bed for online calibration and for both synchronous and asynchronous processing frameworks already during Run 2, before the upgrade. For opportunistic use as a Grid computing site during periods of inactivity of the experiment, a virtualisation-based setup is deployed.

The Defense High-Level Waste Leaching Mechanisms Program brought six major US laboratories together for three years of cooperative research. The participants reached a consensus that solubility of the leached glass species, particularly solubility in the altered surface layer, is the dominant factor controlling the leaching behavior of defense waste glass in a system in which the flow of leachant is constrained, as it will be in a deep geologic repository. Also, once the surface of waste glass is contacted by ground water, the kinetics of establishing solubility control are relatively rapid. The concentrations of leached species reach saturation, or steady-state concentrations, within a few months to a year at 70 to 90°C. Thus, reaction kinetics, which were the main subject of earlier leaching mechanisms studies, are now shown to assume much less importance. The dominance of solubility means that the leach rate is, in fact, directly proportional to ground water flow rate. Doubling the flow rate doubles the effective leach rate. This relationship is expected to hold in most, if not all, repository situations.

The SYNROC method for immobilization of high-level nuclear reactor wastes is currently being applied to US defense wastes in tank storage at Savannah River, South Carolina. The minerals zirconolite, perovskite, and hollandite are used in SYNROC D formulations to immobilize fission products and actinides that comprise up to 10% of defense waste sludges and coexisting solutions. Additional phases in SYNROC D are nepheline, the host phase for sodium; and spinel, the host for excess aluminum and iron. Up to 70 wt % of calcined sludge can be incorporated with 30 wt % of SYNROC additives to produce a waste form consisting of 10% nepheline, 30% spinel, and approximately 20% each of the radioactive waste-bearing phases. Urea coprecipitation and spray drying/calcining methods have been used in the laboratory to produce homogeneous, reactive ceramic powders. Hot pressing and sintering at temperatures from 1000 to 1100°C result in waste form products with greater than 97% of theoretical density. Hot isostatic pressing has recently been implemented as a processing alternative. Characterization of waste-form mineralogy has been done by means of XRD, SEM, and electron microprobe. Leaching of SYNROC D samples is currently being carried out. Assessment of radiation damage effects and physical properties of SYNROC D will commence in FY 81.

Large areas of the deep seabed warrant assessment as potential disposal sites for high-level radioactive waste because: (1) they are far from seismically and tectonically active lithospheric plate boundaries; (2) they are far from active or young volcanoes; (3) they contain thick layers of very uniform fine-grained clays; (4) they are devoid of natural resources likely to be exploited in the foreseeable future; (5) the geologic and oceanographic processes governing the deposition of sediments in such areas are well understood, and are remarkably insensitive to past oceanographic and climatic changes; and (6) sedimentary records of tens of millions of years of slow, uninterrupted deposition of fine-grained clay support predictions of the future stability of such sites. Data accumulated to date on the permeability, ion-retardation properties, and mechanical strength of pelagic clay sediments indicate that they can act as a primary barrier to the escape of buried nuclides. Work in progress should determine within the current decade whether subseabed disposal is environmentally acceptable and technically feasible, as well as address the legal, political and social issues raised by this new concept.

For solar photovoltaic (PV) and wind resources, the capacity factor is an important parameter describing the quality of the resource. As the share of variable renewable resources (such as PV and wind) on the electric system increases, so does curtailment (and the fraction of time during which it cannot be avoided). At high levels of renewable generation, curtailment effectively changes the practical measure of resource quality from the capacity factor to the incremental capacity factor. The latter accounts only for generation during hours of no curtailment and is directly connected with the marginal capital cost of renewable generators for a given level of renewable generation during the year. Western U.S. wind generation is analyzed hourly for a system with 75% of annual generation from wind, and it is found that the system value of resources with equal capacity factors can vary by a factor of 2, which highlights the importance of using the incremental capacity factor instead. Finally, the effect is expected to be more pronounced in smaller geographic areas (or when transmission limitations are imposed) and less pronounced at lower levels of renewable energy in the system with less curtailment.
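The distinction between the two metrics can be made concrete with a small sketch. The data, capacities, and curtailment flags below are hypothetical, and the normalization shown (non-curtailed generation over all hours) is one plausible reading of the definition above; the paper's exact formulation may differ in detail.

```python
# Contrast the plain capacity factor (CF) with an incremental capacity
# factor (ICF) that credits only generation during non-curtailed hours.

def capacity_factor(gen_mw, capacity_mw):
    """Mean output divided by nameplate capacity, over all hours."""
    return sum(gen_mw) / (capacity_mw * len(gen_mw))

def incremental_capacity_factor(gen_mw, curtailed, capacity_mw):
    """Count generation only in hours with no curtailment, still
    normalized over all hours, so curtailed hours add no value."""
    usable = sum(g for g, c in zip(gen_mw, curtailed) if not c)
    return usable / (capacity_mw * len(gen_mw))

# Two hypothetical 10 MW sites with identical energy output over four
# representative hours; the system curtails in hours 0 and 1.
curtailed = [True, True, False, False]
gen_a = [6, 6, 6, 6]    # steady site
gen_b = [10, 10, 2, 2]  # produces mostly during curtailed hours

cf_a = capacity_factor(gen_a, 10)    # 0.6
cf_b = capacity_factor(gen_b, 10)    # 0.6 -- equal CFs
icf_a = incremental_capacity_factor(gen_a, curtailed, 10)  # 0.3
icf_b = incremental_capacity_factor(gen_b, curtailed, 10)  # 0.1
```

Both sites have the same capacity factor, yet their incremental capacity factors differ threefold, mirroring the finding that system value can diverge widely between resources with equal capacity factors.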

Attenuation of high-level acoustic impulses (noise reduction) by various types of earmuffs was measured using a laboratory source of type A impulses and an artificial test fixture compatible with the ISO 4869-3 standard. The measurements were made for impulses of peak sound-pressure levels (SPLs) from 150 to 170 dB. The rise time and A duration of the impulses depended on their SPL and were within a range of 12-400 μs (rise time) and 0.4-1.1 ms (A duration). The results showed that earmuff peak level attenuation increases by about 10 dB when the impulse's rise time and A duration are reduced. The results also demonstrated that the signals under the earmuff cup have a longer rise time and A duration than the original impulses recorded outside the earmuff. Results of the measurements were used to check the validity of various hearing damage risk criteria that specify the maximum permissible exposure to impulse noise. The present data lead to the conclusion that procedures in which hearing damage risk is assessed only from signal attenuation, without taking into consideration changes in the signal waveform under the earmuff, tend to underestimate the risk of hearing damage. PMID:17902846

The LHCb experiment at the LHC accelerator at CERN collects collisions of particle bunches at 40 MHz. After a first level of hardware trigger with an output rate of 1 MHz, the physically interesting collisions are selected by running dedicated trigger algorithms in the High-Level Trigger (HLT) computing farm. This farm consists of up to roughly 25000 CPU cores in roughly 1750 physical nodes, each equipped with up to 4 TB of local storage space. This work describes the LHCb online system with an emphasis on the developments implemented during the current long shutdown (LS1). We elaborate on the architecture changes that treble the available CPU power of the HLT farm and on the technicalities of determining and verifying the precise calibration and alignment constants that are fed to the HLT event selection procedure. We describe how the constants are fed into a two-stage HLT event selection facility that makes extensive use of the local disk buffering capabilities on the worker nodes. With the installed disk buffers, the CPU resources can be used during periods of up to ten days without beams. In the past, such periods accounted for more than 70% of the total time.

The hemipelvectomy, most commonly performed for pelvic tumor resection, is one of the most technically demanding and invasive surgical procedures performed today. Adequate soft tissue coverage and wound complications after hemipelvectomy are important considerations. Rehabilitation after hemipelvectomy is optimally managed by a multidisciplinary integrated team. Understanding the functional outcomes for this population assists the rehabilitation team to counsel patients, plan goals, and determine discharge needs. The most important rehabilitation goal is the optimal restoration of the patient's functional independence. Factors such as age, sex, etiology, level of amputation, and general health play important roles in determining prosthetic use. The three main criteria for successful prosthetic rehabilitation of patients with high-level amputation are comfort, function, and cosmesis. Recent advances in hip and knee joints have contributed to increased function. Prosthetic use after hemipelvectomy improves balance and decreases the need for a gait aid. Using a prosthesis helps maintain muscle strength and tone, cardiovascular health, and functional mobility. With new advances in prosthetic components, patients are choosing to use their prostheses for primary mobility. PMID:24508940

The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ) and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate on the order of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the 2010/2011 collider run is reported. The current architecture of the CMS HLT and its integration with the CMS reconstruction framework and the CMS DAQ are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.

In this paper, we propose a new nonlinear matching measure for automatic analysis of on-off type DNA microarray images in which the hybridized spots are detected by the template-matching method. The target spots of HPV DNA chips are designed for genotyping the human papilloma virus (HPV). The proposed measure is obtained by binary thresholding over the whole template region and taking the number of white pixels inside the spotted area. This measure is evaluated in terms of the accuracy of the estimated marker location and shows better performance than the normalized covariance.
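The measure described above, thresholding the template window and counting white pixels inside the spotted area, can be sketched as follows. Function and parameter names (`match_measure`, the circular spot model, the threshold of 128) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a threshold-and-count matching measure: binarize the
# whole template window, then count white pixels inside a circular spot.

def match_measure(window, cx, cy, radius, threshold=128):
    """Return the number of above-threshold (white) pixels whose
    coordinates fall within the circular spotted area."""
    white_in_spot = 0
    for y, row in enumerate(window):
        for x, pixel in enumerate(row):
            is_white = pixel >= threshold                      # binarization
            in_spot = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
            if is_white and in_spot:
                white_in_spot += 1
    return white_in_spot

# 5x5 grayscale window with a bright 3x3 blob centered on the spot at (2, 2).
window = [[0] * 5 for _ in range(5)]
for y in range(1, 4):
    for x in range(1, 4):
        window[y][x] = 200

score = match_measure(window, cx=2, cy=2, radius=1)
```

Because the count depends only on which pixels clear the threshold, the measure is nonlinear in the pixel intensities, unlike correlation-style measures such as the normalized covariance it is compared against.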

The current baseline assumption is that packaging "as is" and direct disposal of high-level waste (HLW) calcine in a Monitored Geologic Repository will be allowed. The fall-back position is to develop a stabilized waste form for the HLW calcine that will meet the repository waste acceptance criteria currently in place, in case regulatory initiatives are unsuccessful. A decision between direct disposal and a stabilization alternative is anticipated by June 2006. The purposes of this Engineering Design File (EDF) are to provide a pre-conceptual design of three low-temperature processes under development for stabilization of high-level waste calcine (i.e., the grout, hydroceramic grout, and iron phosphate ceramic processes) and to support a down-selection among the three candidates. The key assumptions for the pre-conceptual design assessment are that (a) a waste treatment plant would operate over eight years for 200 days a year, (b) a design processing rate of 3.67 m3/day or 4670 kg/day of HLW calcine would be needed, (c) the performance of the waste form would remove the HLW calcine from the hazardous waste category, and (d) the waste form loadings would range from about 21-25 wt% calcine. The conclusions of this EDF study are that: (a) To date, the grout formulation appears to be the best candidate stabilizer among the three being tested for HLW calcine and appears to be the easiest to mix, pour, and cure. (b) Only minor differences would exist between the process steps of the grout and hydroceramic grout stabilization processes. If temperature control of the mixer at about 80°C is required, it would add a major level of complexity to the iron phosphate stabilization process. (c) It is too early in the development program to determine which stabilizer will produce the minimum amount of stabilized waste form for the entire HLW inventory, but the volume is assumed to be within the range of 12,250 to 14,470 m3. (d) The stacked vessel height of the hot process vessels

High-level disinfection (HLD) of the gastrointestinal (GI) endoscope is not simply a slogan, but rather a form of experimental monitoring-based medicine. By definition, the GI endoscope is a semicritical medical device. Hence, such medical devices require major quality assurance for disinfection. And because many of these items are temperature sensitive, low-temperature chemical methods, such as liquid chemical germicides, must be used rather than steam sterilization. In summarizing guidelines for infection prevention and control for GI endoscopy, three important steps must be highlighted: manual washing, HLD with an automated endoscope reprocessor, and drying. Strict adherence to current guidelines is required because, compared to any other medical device, the GI endoscope is associated with more outbreaks linked to inadequate cleaning or disinfecting during HLD. Both experimental evaluation of surveillance bacterial cultures and in-use clinical results have shown that monitoring of the stringent processes to prevent and control infection is an essential component of the broader strategy to ensure the delivery of safe endoscopy services, because endoscope reprocessing is a multistep procedure involving numerous factors that can interfere with its efficacy. Based on our years of experience in surveillance culture monitoring of endoscope reprocessing, we aim in this study to carefully describe what details require attention in GI endoscope disinfection and to share our experience so that patients can be provided with high-quality and safe medical practices. Quality management encompasses all aspects of pre- and post-procedural care, including the efficiency of the endoscopy unit and reprocessing area, as well as the endoscopic procedure itself. PMID:25699232

The purpose of the study was to assess a large representative sample of cancer patients on distress levels, common psychosocial problems, and awareness and use of psychosocial support services. A total of 3095 patients were assessed over a 4-week period with the Brief Symptom Inventory-18 (BSI-18), a common problems checklist, and on awareness and use of psychosocial resources. Full data were available on 2776 patients. On average, patients were 60 years old, Caucasian (78.3%), and middle class. Approximately half were attending for follow-up care. Types of cancer varied, with the largest groups being breast (23.5%), prostate (16.9%), colorectal (7.5%), and lung (5.8%) cancer patients. Overall, 37.8% of all patients met criteria for general distress in the clinical range. A higher proportion of men met case criteria for somatisation, and more women for depression. There were no gender differences in anxiety or overall distress severity. Minority patients were more likely to be distressed, as were those with lower income, cancers other than prostate, and those currently on active treatment. Lung, pancreatic, head and neck, Hodgkin's disease, and brain cancer patients were the most distressed. Almost half of all patients who met distress criteria had not sought professional psychosocial support nor did they intend to in the future. In conclusion, distress is very common in cancer patients across diagnoses and across the disease trajectory. Many patients who report high levels of distress are not taking advantage of available supportive resources. Barriers to such use, and factors predicting distress and use of psychosocial care, require further exploration. PMID:15162149

High-level waste (HLW) glass compositions, processing schemes, limits on waste content, and corrosion/dissolution release models are dependent on an accurate knowledge of melting temperatures and thermochemical values. Unfortunately, existing models for predicting these temperatures are empirically based, depending on extrapolations of experimental information. In addition, present models of leaching behavior of glass waste forms use simplistic assumptions or experimentally measured values obtained under non-realistic conditions. There is thus a critical need for both more accurate and more widely applicable models for HLW glass behavior, which this project addressed. Significant progress was made in this project on modeling HLW glass. Borosilicate glass was accurately represented along with the additional important components that contain iron, lithium, potassium, magnesium, and calcium. The formation of crystalline inclusions in the glass, an issue in Hanford HLW formulations, was modeled and shown to be predictive. Thus the results of this work have already demonstrated practical benefits with the ability to map compositional regions where crystalline material forms, and therefore avoid that detrimental effect. With regard to fundamental understanding, added insights into the behavior of the components of glass have been obtained, including the potential formation of molecular clusters. The EMSP project had very significant effects beyond the confines of Environmental Management. The models developed for glass have been used to solve a very costly problem in the corrosion of refractories for glass production. The effort resulted in another laboratory, Sandia National Laboratories-Livermore, becoming conversant in the techniques and applying them through a DOE Office of Industrial Technologies project joint with PPG Industries. The glass industry as a whole is now cognizant of these capabilities, and there is a Glass Manufacturer's Research Institute proposal

This report is a review of waste form options for the immobilization of high-level liquid wastes from the nuclear fuel cycle. This review covers the status of international research and development on waste forms as of May 1979. Although the emphasis in this report is on waste form properties, process parameters are discussed where they may affect final waste form properties. A summary table is provided listing properties of various nuclear waste form options. It is concluded that proposed waste forms have properties falling within a relatively narrow range. In regard to crystalline versus glass waste forms, the conclusion is that either glass or crystalline materials can be shown to have some advantage when a single property is considered; however, at this date no single waste form offers optimum properties over the entire range of characteristics investigated. A long-term effort has been applied to the development of glass and calcine waste forms. Several additional waste forms have enough promise to warrant continued research and development to bring their state of development up to that of glass and calcine. Synthetic minerals, the multibarrier approach with coated particles in a metal matrix, and high pressure-high temperature ceramics offer potential advantages and need further study. Although this report discusses waste form properties, the total waste management system should be considered in the final selection of a waste form option. Canister design, canister materials, overpacks, engineered barriers, and repository characteristics, as well as the waste form, affect the overall performance of a waste management system. These parameters were not considered in this comparison.

Chlorine radicals are a strong atmospheric oxidant, particularly in polar regions where levels of hydroxyl radicals can be quite low. In the atmosphere, chlorine radicals expedite the degradation of methane and tropospheric ozone and the oxidation of mercury to more toxic forms. Here, we present direct measurements of molecular chlorine levels in the Arctic marine boundary layer in Barrow, Alaska, collected in the spring of 2009 over a six-week period using chemical ionization mass spectrometry. We detected high levels of molecular chlorine of up to 400 pptv. Concentrations peaked in the early morning and late afternoon and fell to near-zero levels at night. Average daytime molecular chlorine levels were correlated with ozone concentrations, suggesting that sunlight and ozone are required for molecular chlorine formation. Using a time-dependent box model, we estimated that the chlorine radicals produced from the photolysis of molecular chlorine on average oxidized more methane than hydroxyl radicals and enhanced the abundance of short-lived peroxy radicals. Elevated hydroperoxyl radical levels, in turn, promoted the formation of hypobromous acid, which catalyzed mercury oxidation and the breakdown of tropospheric ozone. Therefore, we propose that molecular chlorine exerts a significant effect on the atmospheric chemistry in the Arctic. While the formation mechanisms of molecular chlorine are not yet understood, the main potential sources of chlorine include snowpack, sea salt, and sea ice. There is recent evidence of molecular halogen (Br2 and Cl2) formation in the Arctic snowpack. The coverage and composition of the snow may control halogen chemistry in the Arctic. Changes of sea ice and snow cover in the changing climate may affect air-snow-ice interaction and have a significant impact on the levels of radicals, ozone, mercury and methane in the Arctic troposphere.

In accordance with the Nuclear Waste Policy Amendments Act of 1987, Yucca Mountain was designated as the site to be investigated as a potential repository for the disposal of high-level radioactive waste. The Yucca Mountain site is an undeveloped area located on the southwestern edge of the Nevada Test Site (NTS), about 100 miles northwest of Las Vegas. The site currently lacks rail service or an existing right-of-way. If the Yucca Mountain site is found suitable for the repository, rail service is desirable to the Office of Civilian Radioactive Waste Management (OCRWM) Program because of the potential of rail transportation to reduce costs and to reduce the number of shipments relative to highway transportation. A Preliminary Rail Access Study evaluated 13 potential rail spur options. Alternative routes within the major options were also developed. Each of these options was then evaluated for potential land use conflicts and access to regional rail carriers. Three potential routes having few land use conflicts and having access to regional carriers were recommended for further investigation. Figure 1-1 shows these three routes. The Jean route is estimated to be about 120 miles long, the Carlin route about 365 miles long, and the Caliente route about 365 miles long. The remaining ten routes continue to be monitored, and should any of the present conflicts change, a re-evaluation of that route will be made. Complete details of the evaluation of the 13 routes can be found in the previous study. The DOE has not identified any preferred route and recognizes that the transportation issues need a full and open treatment under the National Environmental Policy Act. The issue of transportation will be included in public hearings to support development of the Environmental Impact Statement (EIS) proceedings for either the Monitored Retrievable Storage Facility or the Yucca Mountain Project or both.

Preliminary evaluation of deep borehole disposal of high-level radioactive waste and spent nuclear fuel indicates the potential for excellent long-term safety performance at costs competitive with mined repositories. Significant fluid flow through basement rock is prevented, in part, by low permeabilities, poorly connected transport pathways, and overburden self-sealing. Deep fluids also resist vertical movement because they are density stratified. Thermal hydrologic calculations estimate the thermal pulse from emplaced waste to be small (less than 20 °C at 10 meters from the borehole, for less than a few hundred years), and to result in maximum total vertical fluid movement of ~100 m. Reducing conditions will sharply limit solubilities of most dose-critical radionuclides at depth, and high ionic strengths of deep fluids will prevent colloidal transport. For the bounding analysis of this report, waste is envisioned to be emplaced as fuel assemblies stacked inside drill casing that is lowered, using off-the-shelf oilfield and geothermal drilling techniques, into the lower 1-2 km portion of a vertical borehole ~45 cm in diameter and 3-5 km deep, followed by borehole sealing. Deep borehole disposal of radioactive waste in the United States would require modifications to the Nuclear Waste Policy Act and to applicable regulatory standards for long-term performance set by the US Environmental Protection Agency (40 CFR part 191) and US Nuclear Regulatory Commission (10 CFR part 60). The performance analysis described here is based on the assumption that long-term standards for deep borehole disposal would be identical in key regards to those prescribed for existing repositories (40 CFR part 197 and 10 CFR part 63).

Mounting environmental concerns associated with the use of petroleum-based chemical manufacturing practices have generated significant interest in the development of biological alternatives for the production of propionate. However, biological platforms for propionate production have been limited to strict anaerobes, such as Propionibacteria and select Clostridia. In this work, we demonstrated high-level heterologous production of propionate under microaerobic conditions in engineered Escherichia coli. Activation of the native Sleeping beauty mutase (Sbm) operon not only transformed E. coli to be propionogenic (i.e., propionate-producing) but also introduced an intracellular "flux competition" between the traditional C2-fermentative pathway and the novel C3-fermentative pathway. Dissimilation of glycerol, the major carbon source, was identified as critically affecting this "flux competition" and, therefore, propionate synthesis. As a result, the propionogenic E. coli was further engineered by inactivation or overexpression of various genes involved in the glycerol dissimilation pathways, and their individual genetic effects on propionate production were investigated. Generally, knocking out genes involved in glycerol dissimilation (except glpA) can minimize levels of solventogenesis and shift more dissimilated carbon flux toward the C3-fermentative pathway. For optimal propionate production with high C3:C2-fermentative product ratios, glycerol dissimilation should be channeled through the respiratory pathway and, upon suppressed solventogenesis with minimal production of highly reduced alcohols, the alternative NADH-consuming route associated with propionate synthesis can be critical for more flexible redox balancing. With the implementation of various biochemical and genetic strategies, high propionate titers of more than 11 g/L with high yields up to 0.4 g-propionate/g-glycerol (accounting for ~50 % of dissimilated glycerol) were achieved, demonstrating the

This revision of the High-Level Waste (HLW) System Plan aligns SRS HLW program planning with the DOE Savannah River (DOE-SR) Ten Year Plan (QC-96-0005, Draft 8/6), which was issued in July 1996. The objective of the Ten Year Plan is to complete cleanup at most nuclear sites within the next ten years. The two key principles of the Ten Year Plan are to accelerate the reduction of the most urgent risks to human health and the environment and to reduce mortgage costs. Accordingly, this System Plan describes the HLW program that will remove HLW from all 24 old-style tanks, and close 20 of those tanks, by 2006, with vitrification of all HLW by 2018. To achieve these goals, the DWPF canister production rate is projected to climb to 300 canisters per year starting in FY06 and remain at that rate through the end of the program in FY18. (Compare that to past System Plans, in which DWPF production peaked at 200 canisters per year and the program did not complete until 2026.) An additional $247M (FY98 dollars) must be made available as requested over the ten-year planning period, including a one-time $10M to enhance Late Wash attainment. If appropriate resources are made available, facility attainment issues are resolved, and regulatory support is sufficient, then completion of the HLW program in 2018 would achieve a $3.3 billion cost savings to DOE, versus the cost of completing the program in 2026. Facility status information is current as of October 31, 1996.

High-level disinfection (HLD) of the gastrointestinal (GI) endoscope is not simply a slogan, but rather a form of experimental, monitoring-based medicine. By definition, the GI endoscope is a semicritical medical device. Hence, such medical devices require major quality assurance for disinfection. And because many of these items are temperature sensitive, low-temperature chemical methods, such as liquid chemical germicides, must be used rather than steam sterilization. In summarizing guidelines for infection prevention and control for GI endoscopy, three important steps must be highlighted: manual washing, HLD with an automated endoscope reprocessor, and drying. Strict adherence to current guidelines is required because, compared to any other medical device, the GI endoscope is associated with more outbreaks linked to inadequate cleaning or disinfection during HLD. Both experimental evaluation of surveillance bacterial cultures and in-use clinical results have shown that monitoring of the stringent processes to prevent and control infection is an essential component of the broader strategy to ensure the delivery of safe endoscopy services, because endoscope reprocessing is a multistep procedure involving numerous factors that can interfere with its efficacy. Based on our years of experience in surveillance culture monitoring of endoscope reprocessing, we aim in this study to describe carefully which details require attention in GI endoscopy disinfection and to share our experience so that patients can be provided with high-quality and safe medical practices. Quality management encompasses all aspects of pre- and post-procedural care, including the efficiency of the endoscopy unit and reprocessing area, as well as the endoscopic procedure itself. PMID:25699232

New radiative lifetime measurements for ~50 high-lying levels of Fe I are reported. Laboratory astrophysics faces a challenge to provide basic spectroscopic data, especially reliable atomic transition probabilities, in the IR region for abundance studies. The availability of HgCdTe (HAWAII) detector arrays has opened IR spectral regions for extensive new spectroscopic studies. The SDSS III APOGEE project in the H-band is an important example, which will penetrate the dust obscuring the Galactic bulge. APOGEE will survey elemental abundances of 100,000 red giant stars in the bulge, bar, disk, and halo of the Milky Way. Many stellar spectra in the H-band are, as expected, dominated by transitions of Fe I. Most of these IR transitions connect high levels of Fe I. Our program has started an effort to meet this challenge with new radiative lifetime measurements on high-lying levels of Fe I using time-resolved laser-induced fluorescence (TRLIF). The TRLIF method is typically accurate to 5% and is efficient. Our goal is to combine these accurate, absolute radiative lifetimes with emission branching fractions [1] to determine log(gf) values of the highest quality for Fe I lines in the UV, visible, and IR. This method was used very successfully by O'Brian et al. [2] on lower levels of Fe I. This method is still the best available for all but very simple spectra, for which ab initio theory is more accurate. Supported by NSF grant AST-0907732. [1] Branching fractions are being measured by M. Ruffoni and J. C. Pickering at Imperial College London. [2] O'Brian, T. R., Wickliffe, M. E., Lawler, J. E., Whaling, W., & Brault, J. W. 1991, J. Opt. Soc. Am. B 8, 1185

Maximum motion displacement (Dmax) is the largest dot displacement in a random-dot kinematogram (RDK) at which direction of motion can be correctly discriminated [Braddick, O. (1974). A short-range process in apparent motion. Vision Research, 14, 519-527]. For first-order RDKs, Dmax gets larger as dot size increases and/or dot density decreases. It has been suggested that this increase in Dmax reflects greater involvement of high-level feature-matching motion mechanisms and less dependence on low-level motion detectors [Sato, T. (1998). Dmax: Relations to low- and high-level motion processes. In T. Watanabe (Ed.), High-level motion processing, computational, neurobiological, and psychophysical perspectives (pp. 115-151). Boston: MIT Press]. Recent psychophysical findings [Ho, C. S., & Giaschi, D. E. (2006). Deficient maximum motion displacement in amblyopia. Vision Research, 46, 4595-4603; Ho, C. S., & Giaschi, D. E. (2007). Stereopsis-dependent deficits in maximum motion displacement. Vision Research, 47, 2778-2785] suggest that this "switch" from low-level to high-level motion processing is also observed in children with anisometropic and strabismic amblyopia as RDK dot size is increased and/or dot density is decreased. However, both high- and low-level Dmax were reduced relative to controls. In this study, we used functional MRI to determine the motion-sensitive areas that may account for the reduced Dmax in amblyopia. In the control group, when activation for high-level RDKs was compared to that for low-level RDKs, low-level RDKs elicited stronger responses in low-level (posterior occipital) areas and high-level RDKs elicited a greater response in high-level (extra-striate occipital-parietal) areas. Participants with anisometropic amblyopia showed the same pattern of cortical activation, although the extent of activation differences was less than in controls. For those with strabismic amblyopia, there was almost no difference in the cortical activity for low-level and

Searches for gravitational waves (GWs) from binary black holes using interferometric GW detectors require the construction of template banks for performing matched filtering while analyzing the data. Placement of templates over the parameter space of binaries, as well as coincidence tests of GW triggers from multiple detectors, makes use of the definition of a metric over the space of gravitational waveforms. Although recent searches have employed waveform templates coherently describing the inspiral, merger and ringdown (IMR) of the coalescence, the metric used in the template banks and coincidence tests was derived from post-Newtonian inspiral waveforms. In this paper, we compute (semianalytically) the template-space metric of the IMR waveform family IMRPhenomB over the parameter space of masses and the effective spin parameter. We also propose a coordinate system, a modified version of post-Newtonian chirp time coordinates, in which the metric is slowly varying over the parameter space. The match function semianalytically computed using the metric has excellent agreement with the "exact" match function computed numerically. We show that the metric is able to provide a reasonable approximation to the match function of other IMR waveform families, such as the effective-one-body model calibrated to numerical relativity (EOBNRv2). The availability of this metric can contribute to improving the sensitivity of searches for GWs from binary black holes in the advanced detector era.
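The quadratic (metric) approximation to the match that underlies this kind of template placement can be sketched as follows; the metric values below are invented toy numbers for illustration, not the IMRPhenomB metric itself:

```python
import numpy as np

# Metric approximation to the match between nearby templates:
#   match(lambda, lambda + dlam)  ~=  1 - g_ij dlam^i dlam^j
# for small parameter offsets dlam. The 2x2 metric g is a toy example,
# not the actual IMRPhenomB template-space metric.
def metric_match(g, dlam):
    """Quadratic (metric) approximation to the match for small dlam."""
    dlam = np.asarray(dlam, dtype=float)
    return 1.0 - dlam @ np.asarray(g, dtype=float) @ dlam

# toy metric in slowly varying (chirp-time-like) coordinates
g = np.array([[4.0, 1.0],
              [1.0, 2.0]])
m = metric_match(g, [0.1, 0.05])
print(f"approximate match: {m:.3f}")
```

In a template bank, spacings are chosen so that this approximate match between neighbouring templates never drops below a chosen minimal match (e.g. 0.97).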

The cost of ownership of scanners for the manufacturing of front-end layers is becoming increasingly high. The ability to quickly switch the production of a layer to another scanner, in case one is down, is important. This paper presents a method to match the scanner grids optimally so that, in effect, front-end scanners become interchangeable. A breakdown of the various components of overlay is given, and we discuss methods to optimize the matching strategy in the fab. A concern here is how to separate the scanner- and process-induced effects. We look at the relative contributions of intrafield and interfield errors caused by the scanner and the process. Experimental results of a method to control the scanner grid are presented and discussed. We compare the overlay results before and after optimizing the scanner grids and show that the matching penalty is reduced by 20%. We conclude with some thoughts on the need to correct the remaining matching errors.
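One common way to separate grid (interfield) and process contributions, sketched here with synthetic numbers purely for illustration (the model terms, magnitudes, and parameter names are assumptions, not taken from the paper), is to fit a linear six-parameter grid model to measured overlay vectors and inspect the residuals:

```python
import numpy as np

# Hypothetical illustration: decompose measured overlay vectors (dx, dy)
# into a linear interfield (grid) model -- translation, magnification,
# rotation -- and treat the residuals as intrafield/process contributions.
rng = np.random.default_rng(0)
n = 50
x = rng.uniform(-100, 100, n)   # field-centre positions on the wafer [mm]
y = rng.uniform(-100, 100, n)

# synthetic overlay measurements [um]: translation + grid terms + noise
dx = 5e-3 + 1e-7 * x - 2e-7 * y + rng.normal(0, 1e-3, n)
dy = -3e-3 + 1e-7 * y + 2e-7 * x + rng.normal(0, 1e-3, n)

# design matrix for the 6-parameter model:
#   dx = Tx + Mx*x - Ry*y,   dy = Ty + My*y + Rx*x
A = np.zeros((2 * n, 6))
A[:n, 0] = 1;  A[:n, 2] = x;  A[:n, 4] = -y
A[n:, 1] = 1;  A[n:, 3] = y;  A[n:, 5] = x
b = np.concatenate([dx, dy])

params, *_ = np.linalg.lstsq(A, b, rcond=None)
residual = b - A @ params     # "intrafield + process" remainder
print("fitted [Tx, Ty, Mx, My, Ry, Rx]:", np.round(params, 6))
print("3-sigma residual [um]:", round(3 * residual.std(), 5))
```

The fitted grid terms are what a scanner-grid correction can remove; the residual spread is the matching penalty that remains.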

Interimage matching is the process of determining the geometric transformation required to conform one image spatially to another. In principle, the parameters of that transformation are varied until some measure of the difference between the two images is minimized or some measure of sameness (e.g., cross-correlation) is maximized. The number of such parameters to vary is fairly large (six for merely an affine transformation), and it is customary either to attempt an a priori transformation reducing the complexity of the residual transformation or to subdivide the image into match zones (control points or patches) small enough that a simple transformation (e.g., pure translation) is applicable, yet large enough to facilitate matching. In the latter case, a complex mapping function is fit to the results (e.g., translation offsets) in all the patches. The methods reviewed have all chosen one or both of the above options, ranging from a priori along-line correction for line-dependent effects (the high-frequency correction) to a full sensor-to-geobase transformation with subsequent subdivision into a grid of match points.
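The patch-matching option above (pure translation, with cross-correlation as the sameness measure) can be sketched as a brute-force search; the array sizes and the exhaustive loop are illustrative choices, not from the methods reviewed:

```python
import numpy as np

# Minimal sketch of patch matching by pure translation: slide a small
# patch over a search window and keep the offset that maximizes
# normalized cross-correlation.
def best_offset(patch, window):
    ph, pw = patch.shape
    p = (patch - patch.mean()) / patch.std()   # z-scored patch
    best, best_score = (0, 0), -np.inf
    for dy in range(window.shape[0] - ph + 1):
        for dx in range(window.shape[1] - pw + 1):
            w = window[dy:dy + ph, dx:dx + pw]
            wz = (w - w.mean()) / (w.std() + 1e-12)
            score = (p * wz).mean()            # normalized cross-correlation
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

rng = np.random.default_rng(1)
image = rng.normal(size=(40, 40))
patch = image[12:20, 17:25]                    # patch cut out at (12, 17)
print(best_offset(patch, image))               # -> (12, 17)
```

A practical implementation would use FFT-based correlation rather than this O(N^2) loop, which is exactly the shift-and-add versus Fourier trade-off.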

The Medical Education Commission (MEC) has published Graduate Medical Education (GME) data since 1997, including data from the National Residency Matching Program (NRMP) and the Supplemental Offer and Acceptance Program (SOAP), and totals all GME positions in Louisiana for annual publication. The NRMP provides the quotas and filled positions by institution. Following the NRMP, SOAP attempts to place unmatched candidates in slots that are unfilled. The NRMP Fellowship match also comes close to filling quotas and has a significant SOAP. Thus, an accurate count of total filled positions is best obtained in July of the same match year. All GME programs in Louisiana are represented for 2014, and the trend from 2005 to 2014 shows that the only dip was post-Katrina in 2005-2006. The March match after SOAP 2014 is at a peak for both senior medical students and post-graduate year one (PGY-1) residents. A significant and similar number stay in Louisiana GME institutions after graduation. Also noteworthy is that a lower percentage are staying in state, due to increased enrollment in all Louisiana medical schools. PMID:27159458

Metal nanowires (NWs) have attracted much attention because of their high electron conductivity, optical transmittance, and tunable magnetic properties. Metal NWs have been synthesized using soft templates, such as surface-stabilizing molecules and polymers, and hard templates, such as anodic aluminum oxide, mesoporous oxide, and carbon nanotubes. NWs prepared from hard templates are composites of metals and the oxide/carbon matrix. Thus, selecting appropriate elements can simplify the production of composite devices. The resulting NWs are immobilized and spatially arranged, as dictated by the ordered porous structure of the template. This prevents the NWs from aggregating, which is common for NWs prepared with soft templates in solution. Herein, the hard-template synthesis of metal NWs is reviewed, and the resulting structures, properties, and potential applications are discussed. PMID:25453031

Implant placement has become a routine modality of dental care. Improvements in surgical reconstructive methods, as well as increased prosthetic demands, require highly accurate diagnosis, planning, and placement. Recently, computer-aided design and manufacturing have made it possible to use data from computerised tomography not only to plan implant rehabilitation but also to transfer this information to the surgery. A review of one such technique, called stereolithography, is presented in this article. It permits graphic and complex 3D implant placement and fabrication of stereolithographic surgical templates. It also offers many significant benefits over traditional procedures. PMID:24179955

The enzymatic assay for deoxyribonucleoside triphosphates has been improved by using synthetic oligonucleotides of a carefully defined sequence as template primers for DNA polymerase. High backgrounds, which limit the sensitivity of the assay when calf thymus DNA or alternating copolymers are used as template primers, were eliminated with these oligonucleotide template primers. Sensitivity was further increased by designing the template primer to incorporate multiple labeled deoxyribonucleotides per limiting unlabeled deoxyribonucleotide. Each of several DNA polymerases exhibited unique reaction characteristics with the oligonucleotide template primers, which were attributed to the differing exonuclease activities associated with these various enzymes. Assay optimization therefore included matching the polymerase with the template primer to obtain the lowest background reaction and highest sensitivity. This modified assay is particularly well suited for keeping cell sample size to a minimum in experimental protocols which generate large numbers of data points or require careful timing of sampling. With this technique, we measured the levels of all four deoxyribonucleoside triphosphates in extracts from as few as 2 × 10^4 cultured cells.

The U.S. Department of Energy (DOE) has embarked upon a course to acquire Hanford Site tank waste treatment and immobilization services using privatized facilities (RL 1996a). This plan contains a two-phased approach. Phase I is a proof-of-principle/commercial demonstration-scale effort and Phase II is a full-scale production effort. In accordance with the planned approach, interim storage and disposal of various products from privatized facilities are to be DOE furnished. The high-level waste (HLW) interim storage options, or alternative architectures, were identified and evaluated to provide the framework from which to select the most viable method of Phase I HLW interim storage (Calmus 1996). This evaluation, hereafter referred to as the Alternative Architecture Evaluation, was performed against established performance and risk criteria (technical merit, cost, schedule, etc.). Based on evaluation results, preliminary architectures and path-forward recommendations were provided for consideration in the architecture decision-making process. The decision-making process used for selection of a Phase I solidified HLW interim storage architecture was conducted in accordance with an approved Decision Plan (see the attachment). This decision process was based on TSEP-07, Decision Management Procedure (WHC 1995). The established decision process entailed a Decision Board, consisting of Westinghouse Hanford Company (WHC) management staff, and included appointment of a WHC Decision Maker. The Alternative Architecture Evaluation results and preliminary recommendations were presented to the Decision Board members for their consideration in the decision-making process. The Alternative Architecture Evaluation was prepared and issued before issuance of WHC-IP-1231, Alternatives Generation and Analysis Procedure (WHC 1996a), but was deemed by the Board to fully meet the intent of WHC-IP-1231. The Decision Board members concurred with the bulk of the Alternative Architecture

Three Savannah River Laboratory reference high-level waste canisters were subjected to impact tests at the Pacific Northwest Laboratory in Richland, Washington, in June 1983. The purpose of the test was to determine the integrity of the canister, nozzle, and final closure weld and to assess the effects of impacts on the glass. Two of the canisters were fabricated from 304L stainless steel and the third canister from titanium. The titanium canister was subjected to two drops. The first drop was vertical from 9.14 m onto an unyielding surface with the bottom corner of the canister receiving the impact. No failure occurred during this drop. The second drop was vertical from 9.14 m onto an unyielding surface with the corner of the fill nozzle receiving the impact. A large breach in the canister occurred in the region where the fill nozzle joins the dished head. The first stainless steel canister was dropped with the corner of the fill nozzle receiving the impact. The canister showed significant strain with no rupturing in the region where the fill nozzle joins the dished head. The second canister was dropped with the bottom corner receiving the impact and was also dropped horizontally onto an unyielding vertical solid steel cylinder in a puncture test. The bottom drop did not damage the weld and the puncture test did not rupture the canister body. The glass particles in the damaged zones of these canisters were sampled and analyzed for particle size. A comparison was made with control canisters in which no impact had occurred. The particle size distribution for the control canisters and the zones of damaged glass was determined down to 1.5 µm. The quantity of glass fines smaller than 10 µm, which must be determined for transportation safety studies, was found to be largest in the bottom-damaged zone. The total amount of fines smaller than 10 µm after impact was less than 0.01 wt % of the total amount of glass in the canister.

The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank to ensure uniformity of the discharge stream. Mixing is accomplished with one to four slurry pumps located within the tank liquid. The slurry pumps may be fixed in position or may rotate, depending on the specific mixing requirements. The high-level waste in Tank 48 contains insoluble solids in the form of potassium tetraphenylborate compounds (KTPB), monosodium titanate (MST), and sludge. Tank 48 is equipped with 4 slurry pumps, which are intended to suspend the insoluble solids prior to transfer of the waste to the Fluidized Bed Steam Reformer (FBSR) process. The FBSR process is being designed for a normal feed of 3.05 wt% insoluble solids. A chemical characterization study has shown the insoluble solids concentration is approximately 3.05 wt% when well mixed. The project is requesting a Computational Fluid Dynamics (CFD) mixing study from SRNL to determine the solids behavior with 2, 3, and 4 slurry pumps in operation and an estimate of the insoluble solids concentration at the suction of the transfer pump to the FBSR process. The impact of cooling coils is not considered in the current work. Taking a CFD approach, the work has two principal objectives: (1) to estimate the insoluble solids concentration transferred from Tank 48 to the Waste Feed Tank in the FBSR process and (2) to assess the impact of different combinations of the four slurry pumps on insoluble solids suspension and mixing in Tank 48. For this work, several different combinations of a maximum of four pumps are considered to determine the resulting flow patterns and local flow velocities, which are thought to be associated with sludge particle mixing. Two different elevations of pump nozzles are used for an assessment of the flow patterns on tank mixing. Pump design and operating parameters used for the analysis are summarized in Table 1. The baseline

A framework for high-level specification of data distributions in data-parallel application programs has been conceived. [As used here, distributions signifies means to express locality (more specifically, locations of specified pieces of data) in a computing system composed of many processor and memory components connected by a network.] Inasmuch as distributions exert a great effect on the performance of application programs, it is important that a distribution strategy be flexible, so that distributions can be adapted to the requirements of those programs. At the same time, for the sake of productivity in programming and execution, it is desirable that users be shielded from such error-prone, tedious details as those of communication and synchronization. As desired, the present framework enables a user to refine a distribution type and adjust it to optimize the performance of an application program, and it conceals from the user the low-level details of communication and synchronization. The framework provides for a reusable, extensible data-distribution design, denoted the design pattern, that is independent of a concrete implementation. The design pattern abstracts over coding patterns that have been found to be commonly encountered in both manually and automatically generated distributed parallel programs. The following description of the present framework is necessarily oversimplified to fit within the space available for this article. Distributions are among the elements of a conceptual data-distribution machinery, some of the other elements being denoted domains, index sets, and data collections (see figure). Associated with each domain is one index set and one distribution. A distribution class interface (where "class" is used in the object-oriented-programming sense) includes operations that enable specification of the mapping of an index to a unit of locality. Thus, "Map(Index)" specifies a unit, while "LocalLayout(Index)" specifies the local address
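A possible shape for such a distribution class interface can be sketched as follows; the Map/LocalLayout operation names follow the article, while the concrete block-distribution class, its fields, and the sizes used are hypothetical:

```python
# Hypothetical sketch of the distribution interface described above.
# A block distribution maps a global index to a "unit of locality"
# (Map) and to an address within that unit's local storage (LocalLayout).
class BlockDistribution:
    def __init__(self, global_size, num_units):
        self.global_size = global_size
        self.num_units = num_units
        self.block = -(-global_size // num_units)  # ceiling division

    def map(self, index):
        """Unit of locality (e.g. process rank) owning `index`."""
        return index // self.block

    def local_layout(self, index):
        """Address of `index` within its owning unit's local storage."""
        return index % self.block

dist = BlockDistribution(global_size=100, num_units=4)  # blocks of 25
print(dist.map(60), dist.local_layout(60))  # index 60 -> unit 2, offset 10
```

Swapping in a cyclic or block-cyclic subclass with the same two operations is how such a design pattern lets distributions be refined without touching application code.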

A modified porous anodic alumina (PAA) template contains a thin CNT catalyst layer directly embedded into the pore walls. CNT synthesis using the template selectively catalyzes growth of SWNTs and DWNTs from the embedded catalyst layer to the top PAA surface, creating a vertical CNT channel within the pores. Subsequent processing allows for easy contact metallization and adaptable functionalization of the CNTs and template for a myriad of applications.

The goal of this work is to demonstrate the application of an anodic aluminum oxide (AAO) template as the matching layer of an ultrasonic transducer. A quarter-wavelength acoustic matching layer is a vital component in medical ultrasonic transducers, compensating for the acoustic impedance mismatch between the piezoelectric element and the human body. The AAO matching layer is made of an anodic aluminum oxide template filled with epoxy resin, i.e. an AAO-epoxy 1-3 composite. Using this composite as the first matching layer, a ~12 MHz ultrasonic transducer based on soft lead zirconate titanate piezoelectric ceramic is fabricated, and pulse-echo measurements show that the transducer exhibits very good performance, with a broad bandwidth of 68% (-6 dB) and a two-way insertion loss of -22.7 dB. A wire-phantom ultrasonic image is also used to evaluate the transducer's performance, and the results confirm the process feasibility and merit of the AAO-epoxy composite as a new matching material for ultrasonic transducer applications. This matching scheme provides a solution to the problems existing in the conventional 0-3 composite matching layer and suggests another useful application of the AAO template. PMID:27125558
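The quarter-wavelength design rule behind such a matching layer can be sketched numerically; the impedance and sound-speed values below are rough textbook figures assumed for illustration, not measurements of the AAO-epoxy composite:

```python
import math

# Back-of-envelope quarter-wave matching-layer design. All material
# values are assumed round numbers, not data from the paper.
Z_pzt = 33e6      # acoustic impedance of soft PZT ceramic [Rayl], assumed
Z_tissue = 1.5e6  # acoustic impedance of human tissue [Rayl], assumed

# ideal single matching layer: geometric mean of the two impedances
Z_match = math.sqrt(Z_pzt * Z_tissue)

# quarter-wave thickness at the ~12 MHz centre frequency, for an
# assumed longitudinal sound speed in the matching composite
f0 = 12e6         # centre frequency [Hz]
c_layer = 2500.0  # sound speed in matching layer [m/s], assumed
t = c_layer / (4 * f0)

print(f"ideal matching impedance: {Z_match / 1e6:.2f} MRayl")
print(f"quarter-wave thickness:   {t * 1e6:.1f} um")
```

The point of the AAO-epoxy 1-3 composite is precisely to hit an intermediate impedance of this order, which neither the ceramic nor the polymer alone provides.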

In 1997, the first two United States Department of Energy (US DOE) high-level waste tanks (Tanks 17-F and 20-F: Type IV, single-shell tanks) were taken out of service (permanently closed) at the Savannah River Site (SRS). In 2012, the DOE plans to remove from service two additional SRS Type IV high-level waste tanks, Tanks 18-F and 19-F. These tanks were constructed in the late 1950's, received low-heat waste, and do not contain cooling coils. Operational closure of Tanks 18-F and 19-F is intended to be consistent with the applicable requirements of the Resource Conservation and Recovery Act (RCRA) and the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and will be performed in accordance with South Carolina Department of Health and Environmental Control (SCDHEC) requirements. The closure will physically stabilize two 4.92E+03 cubic meter (1.3E+06 gallon) carbon steel tanks and isolate and stabilize any residual contaminants left in the tanks. Ancillary equipment abandoned in the tanks will also be filled to the extent practical. A Performance Assessment (PA) has been developed to assess the long-term fate and transport of residual contamination in the environment resulting from the operational closure of the F-Area Tank Farm (FTF) waste tanks. Next-generation flowable, zero-bleed cementitious grouts were designed, tested, and specified for closing Tanks 18-F and 19-F and for filling the abandoned equipment. Fill requirements were developed for both the tank and equipment grouts. All grout formulations were required to be alkaline, with a pH of 12.4, and to be chemically reducing, with a reduction potential (Eh) of -200 to -400 mV. Grouts with this chemistry stabilize potential contaminants of concern. This was achieved by including Portland cement and Grade 100 slag in the mixes, respectively. Ingredients and proportions of cementitious reagents were selected and adjusted to support the mass placement strategy developed by

Printed electronics can lower the cost and increase the ubiquity of electrical components such as batteries, sensors, and telemetry systems. Unfortunately, the advance of printed electronics has been held back by the limited minimum resolution, aspect ratio, and feature fidelity of present printing techniques such as gravure, screen printing, and inkjet printing. Templated dry printing offers a solution to these problems by patterning nanoparticle inks in templates before drying. This dissertation presents advancements in two varieties of templated dry nanoprinting. The first, advective micromolding in vapor-permeable templates (AMPT), is a microfluidic approach that uses evaporation-driven mold filling to create submicron features with a 1:1 aspect ratio. We discuss submicron surface acoustic wave (SAW) resonators made through this process and the refinements to the template manufacturing process necessary to make these devices. We also present modeling techniques that can be applied to future AMPT templates. We conclude with a modified templated dry printing method that improves throughput and isolated-feature patterning by transferring dry-templated features with laser ablation. This method utilizes surface-energy-defined templates to pattern features via doctor-blade coating. Patterned and dried features can be transferred to a polymer substrate with an Nd:YAG MOPA fiber laser, and printed features can be smaller than the laser beam width.

Directed self-assembly (DSA) of block copolymers (BCP) is considered a promising patterning approach for the 7 nm node and beyond. Specifically, a grapho-epitaxy process using a cylindrical-phase BCP may offer an efficient solution for patterning randomly distributed contact holes with sub-resolution pitches, such as those found in via and cut mask levels. In any grapho-epitaxy process, the pattern density impacts the template fill (the local BCP thickness inside the template) and may cause defects due to over- or underfilling of the template, respectively. In order to tackle this issue thoroughly, the parameters that determine template fill and the influence of template fill on the resulting pattern should be investigated. In this work, using three process flow variations (with different template surface energies), template fill is experimentally characterized as a function of pattern density and film thickness. The impact of these parameters on template fill is highly dependent on the process flow, and thus on the pre-pattern surface energy. Template fill has a considerable effect on the pattern transfer of the DSA contact holes into the underlying layer. Higher fill levels give rise to smaller contact holes and worse critical dimension uniformity. These results are important for DSA-aware design and show that fill is a crucial parameter in grapho-epitaxy DSA.

A subsea template is installed by a method which includes the steps of securing the template in a position beneath the deck of a semi-submersible drilling vessel, moving the semi-submersible drilling vessel to an appropriate offshore site and subsequently lowering the template from the semi-submersible to the sea bed. In addition, at least three anchorage templates may be loaded onto one or both of the pontoons of the semi-submersible drilling vessel at its original position and are subsequently lowered from the pontoons to their respective locations on the sea bed after the semi-submersible has moved to the offshore site.

We investigate ice templating of aqueous dispersions of polymer-coated colloids and crosslinkers, at particle concentrations far below those required to form percolated monoliths. Freezing the aqueous dispersions forces the particles into close proximity to form clusters, which are held together as the polymer chains coating the particles are crosslinked. We observe that, with an increase in the particle concentration from about 10^6 to 10^8 particles per ml, there is a transition from isolated single particles to increasingly larger clusters. In this concentration range, most of the colloidal clusters formed are linear or sheet-like particle aggregates. Remarkably, the cluster size distribution for clusters smaller than about 30 particles, as well as the size distribution of linear clusters, is only weakly dependent on the dispersion concentration in the range that we investigate. We demonstrate that the main features of cluster formation are captured by kinetic simulations that do not consider hydrodynamics or instabilities at the growing ice front due to particle concentration gradients. Thus, clustering of colloidal particles by ice templating of dilute dispersions appears to be governed only by particle exclusion by the growing ice crystals, which leads to their accumulation at ice crystal boundaries. PMID:26780838

We address the problem of entanglement matching in the probabilistic teleportation scheme by considering two independent levels of entanglement in the measurement basis. The probability of a successful teleportation has an upper bound which only depends on the amount of entanglement of the quantum channel. However, we found that each entanglement of the measurement basis contributes independently to the success probability as long as it is weaker than the entanglement of the channel. Accordingly, the teleportation process reaches its optimal probability when both entanglements of the measurement basis match the entanglement of the channel. Additionally, we study the probabilistic scheme for extracting an unknown state from a partially known state. We characterize the success probability and the concurrence involved in that process.

The U.S. Department of Energy Office of River Protection (ORP) has implemented an integrated program to increase the loading of Hanford tank wastes in glass while meeting melter lifetime expectancies and process, regulatory, and product quality requirements. The integrated ORP program is focused on providing a technical, science-based foundation from which key decisions can be made regarding the successful operation of the Hanford Tank Waste Treatment and Immobilization Plant (WTP) facilities. The fundamental data stemming from this program will support development of advanced glass formulations, key process control models, and tactical processing strategies to ensure safe and successful operations for both the low-activity waste (LAW) and high-level waste (HLW) vitrification facilities with an appreciation toward reducing overall mission life. The purpose of this advanced HLW glass research and development plan is to identify the near-, mid-, and longer-term research and development activities required to develop and validate advanced HLW glasses and their associated models to support facility operations at WTP, including both direct feed and full pretreatment flowsheets. This plan also integrates technical support of facility operations and waste qualification activities to show the interdependence of these activities with the advanced waste glass (AWG) program to support the full WTP mission. Figure ES-1 shows these key ORP programmatic activities and their interfaces with both WTP facility operations and qualification needs. The plan is a living document that will be updated to reflect key advancements and mission strategy changes. The research outlined here is motivated by the potential for substantial economic benefits (e.g., significant increases in waste throughput and reductions in glass volumes) that will be realized when advancements in glass formulation continue and models supporting facility operations are implemented. Developing and applying advanced

Accurate automated alignment of laser beams in the National Ignition Facility (NIF) is essential for achieving extreme temperature and pressure required for inertial confinement fusion. The alignment achieved by the integrated control systems relies on algorithms processing video images to determine the position of the laser beam images in real time. Alignment images that exhibit wide variations in beam quality require a matched-filter algorithm for position detection. One challenge in designing a matched-filter-based algorithm is to construct a filter template that is resilient to variations in imaging conditions while guaranteeing accurate position determination. A second challenge is to process images for thousands of templates in under a second, as may be required in future high-energy laser systems. This paper describes the development of a new analytical template that captures key recurring features present in the beam image to accurately estimate the beam position under good image quality conditions. Depending on the features present in a particular beam, the analytical template allows us to create a highly tailored template containing only those selected features. The second objective is achieved by exploiting the parallelism inherent in the algorithm to accelerate processing using parallel hardware that provides significant performance improvement over conventional processors. In particular, a Xilinx Virtex II Pro field programmable gate array (FPGA) hardware implementation processing 32 templates provided a speed increase of about 253 times over an optimized software implementation running on a 2.2 GHz AMD Opteron core. PMID:19767937
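
The core of such a matched-filter position detector can be sketched in a few lines: correlate the image with a filter template via the FFT and take the correlation peak as the beam position. The sketch below is a minimal illustration using a synthetic Gaussian "beam"; it is not the NIF implementation, and the analytical template construction and FPGA parallelization described above are out of scope here.

```python
import numpy as np

def gaussian_spot(shape, center, sigma):
    """Synthetic beam image: a 2-D Gaussian centered at `center`."""
    y, x = np.indices(shape)
    return np.exp(-((y - center[0]) ** 2 + (x - center[1]) ** 2) / (2 * sigma ** 2))

def matched_filter_shift(image, template):
    """Find the circular shift of `template` that best matches `image`,
    using FFT-based cross-correlation (the correlation theorem)."""
    img = image - image.mean()        # remove the flat background level
    tpl = template - template.mean()
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(tpl))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

template = gaussian_spot((64, 64), center=(10, 10), sigma=3.0)
image = np.roll(template, (30, 12), axis=(0, 1)) + 0.05  # beam moved to (40, 22)
shift = matched_filter_shift(image, template)
beam = (10 + shift[0], 10 + shift[1])
print(shift, beam)  # shift (30, 12) -> beam position (40, 22)
```

Because the correlation is computed in the Fourier domain, the per-template cost is dominated by FFTs, which is also what makes the algorithm amenable to the hardware parallelization described above.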

In this dissertation, different aspects of the corrosion and electrochemistry of copper, the candidate canister material in the Scandinavian high-level nuclear waste disposal program, including the thermodynamics and kinetics of the reactions that are predicted to occur in the practical system, have been studied. A comprehensive thermodynamic study of copper in contact with granitic groundwater of the type and composition expected in the Forsmark repository in Sweden has been performed. Our primary objective was to ascertain whether copper would exist in the thermodynamically immune state in the repository, in which case corrosion could not occur and the issue of corrosion in the assessment of the storage technology would be moot. In spite of the fact that metallic copper has been found to exist for geological times in granitic geological formations, copper is well known to be activated from the immune state to corrode by specific species that may exist in the environment. The principal activator of copper is known to be sulfur in its various forms, including sulfide (H2S, HS^-, S^2-), polysulfide (H2Sx, HSx^-, Sx^2-), thiosulfate (S2O3^2-), and polythionates (SxO6^2-). A comprehensive study of this aspect of copper chemistry has never been reported, and yet an understanding of this issue is vital for assessing whether copper is a suitable material for fabricating canisters for the disposal of HLNW. Our study identifies and explores those species that activate copper; these species include sulfur-containing entities as well as other, non-sulfur species that may be present in the repository. The effects of temperature, solution pH, and hydrogen pressure on the kinetics of the hydrogen electrode reaction (HER) on copper in borate buffer solution have been studied by means of steady-state polarization measurements and electrochemical impedance spectroscopy (EIS). In order to obtain electrokinetic parameters, such as the exchange current density and the

In this research article, I present evidence of the existence of visual templates in pattern generalization activity. Such templates initially emerged from a 3-week design-driven classroom teaching experiment on pattern generalization involving linear figural patterns and were assessed for existence in a clinical interview that was conducted four…

Templates inserted into surgical wounds strongly influence the healing responses in humans. The science of these templates, in the form of extracellular matrix biomaterials, is rapidly evolving and improving as the natural interactions with the body become better understood. PMID:26961446

The ASSET1.0 software provides a template with which a user can evaluate an air sampling system against the latest version of ANSI N13.1, "Sampling and Monitoring Releases of Airborne Radioactive Substances from the Stacks and Ducts of Nuclear Facilities". The software uses the ANSI N13.1 PIC levels to establish basic design criteria for the existing or proposed sampling system. The software considers such criteria as PIC level, type of radionuclide emissions, physical state of the radionuclide, nozzle entrance effects, particulate transmission effects, system and component accuracy and precision evaluations, and basic system operations to provide a detailed look at the subsystems of a monitoring and sampling system/program. A GAP evaluation can then be completed, which leads to identification of design and operational flaws in the proposed systems. Corrective measures can then be limited to the GAPs.

Compartmentalization of self-replicating molecules (templates) in protocells is a necessary step towards the evolution of modern cells. However, coexistence between distinct template types inside a protocell can be achieved only if there is a selective pressure favoring protocells with a mixed template composition. Here we study analytically a group selection model for the coexistence between two template types using the diffusion approximation of population genetics. The model combines competition at the template and protocell levels as well as genetic drift inside protocells. At the steady state, we find a continuous phase transition separating the coexistence and segregation regimes, with the order parameter vanishing linearly with the distance to the critical point. In addition, we derive explicit analytical expressions for the critical steady-state probability density of protocell compositions.

Template driven chemical ligation of fluorogenic probes represents a powerful method for DNA and RNA detection and imaging. Unfortunately, previous techniques have been hampered by requiring chemistry with sluggish kinetics and background side reactions. We have developed fluorescent DNA probes containing quenched fluorophore-tetrazine and methyl-cyclopropene groups that rapidly react by bioorthogonal cycloaddition in the presence of complementary DNA or RNA templates. Ligation increases fluorescence with negligible background signal in the absence of hybridization template. Reaction kinetics depend heavily on template length and linker structure. Using this technique, we demonstrate rapid discrimination between single template mismatches both in buffer and cell media. Fluorogenic bioorthogonal ligations offer a promising route towards the fast and robust fluorescent detection of specific DNA or RNA sequences. PMID:23775794

A novel nanoimprint lithography process using a disposable biomass template with gas permeability was investigated. It was found that a disposable biomass template derived from cellulose materials shows excellent gas permeability and reduces the transcriptional defects seen with conventional templates such as quartz, PDMS, and DLC, which have no gas permeability. We believe that outgassing from imprinted materials is easily removed through the template. The approach of using cellulose as the template material is suitable as a next-generation clean separation technology. It is expected to be one of the defect-less thermal nanoimprint lithographic technologies. It is also expected that volatile and solvent-containing materials, which often create defects and peeling in conventional templates that have no gas permeability, will become usable.

One strategy for reducing the online computational cost of matched-filter searches for gravitational waves is to introduce a compressed basis for the waveform template bank in a grid-based search. In this paper, we propose and investigate several tunable compression schemes for a general template bank. Through offline compression, such schemes are shown to yield faster detection and localization of signals, along with moderately improved sensitivity and accuracy over coarsened banks at the same level of computational cost. This is potentially useful for any search involving template banks, and especially in the analysis of data from future space-based detectors such as eLISA, for which online grid searches are difficult due to the long-duration waveforms and large parameter spaces.
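
One concrete compression scheme of this kind is a reduced-rank (SVD) basis for the bank: offline, compute a few orthonormal basis waveforms from the bank; online, filter the data against only those basis vectors and recombine with stored coefficients. The sketch below uses a toy bank of smoothly varying sinusoids and is only an illustration of the idea, not one of the specific tunable schemes proposed in the paper.

```python
import numpy as np

# Toy "template bank": 200 sinusoids whose frequency varies smoothly across
# the bank, so neighboring templates are highly correlated (compressible).
t = np.linspace(0.0, 1.0, 512)
freqs = np.linspace(20.0, 30.0, 200)
bank = np.array([np.sin(2 * np.pi * f * t) for f in freqs])

# Offline: keep the top-k right singular vectors as a reduced orthonormal basis.
U, s, Vt = np.linalg.svd(bank, full_matrices=False)
k = 30
basis = Vt[:k]             # k basis waveforms instead of 200 templates
coeffs = bank @ basis.T    # each template expressed by k coefficients

# Online: correlating data against the k basis vectors and recombining with
# the stored coefficients approximates correlating against the full bank.
reconstructed = coeffs @ basis
rel_err = np.linalg.norm(bank - reconstructed) / np.linalg.norm(bank)
print(k, rel_err)
```

The online filtering cost drops from 200 correlations to 30 here, at the price of the small reconstruction error printed above; the rank k is the tunable knob trading cost against sensitivity.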

Searching for a signal depending on unknown parameters in a noisy background with matched filtering techniques always requires analyzing the data with several templates in parallel in order to ensure a proper match between the filter and the real waveform. The key feature of such an implementation is the design of the filter bank, which must be small to limit the computational cost while keeping the detection efficiency as high as possible. This paper presents a geometrical method that allows one to cover the corresponding physical parameter space by a set of ellipses, each of them associated with a given template. After the description of the main characteristics of the algorithm, the method is applied in the field of gravitational wave (GW) data analysis, for the search of damped sine signals. Such waveforms are expected to be produced during the deexcitation phase of black holes (the so-called "ringdown" signals) and are also encountered in some numerically computed supernova signals. First, the number of templates N computed by the method is similar to its analytical estimation, despite the overlaps between neighboring templates and the border effects. Moreover, N is small enough to test for the first time the performance of the set of templates for different choices of the minimal match MM, the parameter used to define the maximal allowed loss of signal-to-noise ratio (SNR) due to the mismatch between real signals and templates. The main result of this analysis is that the fraction of SNR recovered is on average much higher than MM, which dramatically decreases the mean percentage of false dismissals. Indeed, it goes well below its estimated value of 1 - MM^3 used as input to the algorithm. Thus, as this feature should be common to any tiling algorithm, it seems possible to reduce the constraint on the value of MM (and hence the number of templates and the computing power) without losing as many events as expected on

Targeted cancer therapeutics are promised to have a major impact on cancer treatment and survival. Successful application of these novel treatments requires a molecular definition of a patient's disease typically achieved through the use of tissue biopsies. Alternatively, allowing longitudinal monitoring, biomarkers derived from blood, isolated either from circulating tumor cell derived DNA (ctcDNA) or circulating cell-free tumor DNA (ccfDNA) may be evaluated. In order to use blood derived templates for mutational profiling in clinical decisions, it is essential to understand the different template qualities and how they compare to biopsy derived template DNA as both blood-based templates are rare and distinct from the gold-standard. Using a next generation re-sequencing strategy, concordance of the mutational spectrum was evaluated in 32 patient-matched ctcDNA and ccfDNA templates with comparison to tissue biopsy derived DNA template. Different CTC antibody capture systems for DNA isolation from patient blood samples were also compared. Significant overlap was observed between ctcDNA, ccfDNA and tissue derived templates. Interestingly, if the results of ctcDNA and ccfDNA template sequencing were combined, productive samples showed similar detection frequency (56% vs 58%), were temporally flexible, and were complementary both to each other and the gold standard. These observations justify the use of a multiple template approach to the liquid biopsy, where germline, ctcDNA, and ccfDNA templates are employed for clinical diagnostic purposes and open a path to comprehensive blood derived biomarker access. PMID:27049831

Precise lung tumor localization in real time is particularly important for some motion management techniques, such as respiratory gating or beam tracking with a dynamic multi-leaf collimator, due to the reduced clinical tumor volume (CTV) to planning target volume (PTV) margin and/or the escalated dose. There might be large uncertainties in deriving tumor position from external respiratory surrogates. While tracking implanted fiducial markers has sufficient accuracy, this procedure may not be widely accepted due to the risk of pneumothorax. Previously, we developed a technique to generate gating signals from fluoroscopic images without implanted fiducial markers using a template-matching method (Berbeco et al 2005 Phys. Med. Biol. 50 4481-90, Cui et al 2007 Phys. Med. Biol. 52 741-55). In this paper, we present an extension of this method to multiple-template matching for directly tracking the lung tumor mass in fluoroscopy video. The basic idea is as follows: (i) during the patient setup session, a pair of orthogonal fluoroscopic image sequences are taken and processed off-line to generate a set of reference templates that correspond to different breathing phases and tumor positions; (ii) during treatment delivery, fluoroscopic images are continuously acquired and processed; (iii) the similarity between each reference template and the processed incoming image is calculated; (iv) the tumor position in the incoming image is then estimated by combining the tumor centroid coordinates in the reference templates with weights based on the measured similarities. With different handling of image processing and similarity calculation, two such multiple-template tracking techniques have been developed: one based on motion-enhanced templates and Pearson's correlation score and the other based on eigen templates and mean-squared error. The developed techniques have been tested on six sequences of fluoroscopic images from six lung cancer patients against the reference
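
Steps (iii)-(iv) above, scoring each reference template against the incoming frame and combining the template centroids with similarity-based weights, can be sketched as follows. The Gaussian "blobs" standing in for tumor templates and the top-k weighting rule are illustrative assumptions, not the paper's exact image processing.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two images (flattened)."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def estimate_position(frame, templates, centroids, top_k=3):
    """Weight the centroids of the most similar reference templates
    by their (non-negative) similarity scores."""
    scores = np.array([pearson(frame, tpl) for tpl in templates])
    best = np.argsort(scores)[-top_k:]
    w = np.clip(scores[best], 0.0, None)
    w = w / w.sum()
    return w @ np.asarray(centroids, dtype=float)[best]

def blob(shape, center, sigma=4.0):
    y, x = np.indices(shape)
    return np.exp(-((y - center[0]) ** 2 + (x - center[1]) ** 2) / (2 * sigma ** 2))

# Reference templates at known tumor positions (different breathing phases).
centers = [(20, 20), (25, 20), (30, 20), (35, 20)]
templates = [blob((64, 64), c) for c in centers]

frame = blob((64, 64), (30, 20))          # incoming image, tumor at (30, 20)
est = estimate_position(frame, templates, centers)
print(est)
```

Interpolating between centroids of several templates, rather than snapping to the single best one, is what lets the estimate fall between the discrete positions captured at setup time.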

Skyline-based terrain matching, a new method for locating the vantage point of stereo camera or laser range-finding measurements on a global map previously prepared by satellite or aerial mapping is described. The orientation of the vantage is assumed known, but its translational parameters are determined by the algorithm. Skylines, or occluding contours, can be extracted from the sensory measurements taken by an autonomous vehicle. They can also be modeled from the global map, given a vantage estimate from which to start. The two sets of skylines, represented in cylindrical coordinates about either the true or the estimated vantage, are employed as 'features' or reference objects common to both sources of information. The terrain matching problem is formulated in terms of finding a translation between the respective representations of the skylines, by approximating the two sets of skylines as identical features (curves) on the actual terrain. The search for this translation is based on selecting the longest of the minimum-distance vectors between corresponding curves from the two sets of skylines. In successive iterations of the algorithm, the approximation that the two sets of curves are identical becomes more accurate, and the vantage estimate continues to improve. The algorithm was implemented and evaluated on a simulated terrain. Illustrations and examples are included.

A crucial problem in image analysis is to construct efficient low-level representations of an image, providing a precise characterization of the features which compose it, such as edges and texture components. An image usually contains very different types of features, which have been successfully modeled by the very redundant family of 2D Gabor oriented wavelets, describing the local properties of the image: localization, scale, preferred orientation, amplitude, and phase of the discontinuity. However, this model generates representations of very large size. Instead of decomposing a given image over this whole set of Gabor functions, we use an adaptive algorithm (called matching pursuit) to select the Gabor elements which best approximate the image, corresponding to its main features. This produces a compact representation in terms of a few features that reveal the local image properties. Results prove that the elements are precisely localized on the edges of the images and give a local decomposition as linear combinations of 'textons' in the textured regions. We introduce a fast algorithm to compute the matching pursuit decomposition for images, with a complexity of O(N log2 N) per iteration for an image of N^2 pixels.
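
The greedy selection loop at the heart of matching pursuit is compact. The 1-D sketch below uses a small dictionary of unit-norm Gaussian-windowed cosines as a stand-in for the 2-D Gabor family, and brute-force correlations rather than the fast O(N log2 N) update described above.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedily pick the atom most correlated with the residual and
    subtract its contribution (atoms are assumed unit-norm)."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(len(dictionary))
    for _ in range(n_iter):
        inner = dictionary @ residual     # correlation with every atom
        k = int(np.argmax(np.abs(inner)))
        coeffs[k] += inner[k]
        residual -= inner[k] * dictionary[k]
    return coeffs, residual

# Dictionary: Gaussian-windowed cosines (a 1-D analogue of Gabor wavelets).
n, sigma = 256, 8.0
t = np.arange(n)
atoms = []
for center in range(16, n, 16):
    for freq in (0.05, 0.1, 0.2):
        g = np.exp(-((t - center) ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * t)
        atoms.append(g / np.linalg.norm(g))
dictionary = np.array(atoms)

# A 2-atom signal is recovered almost exactly within a few iterations.
signal = 3.0 * dictionary[10] + 1.5 * dictionary[40]
coeffs, residual = matching_pursuit(signal, dictionary, n_iter=10)
print(coeffs[10], coeffs[40], np.linalg.norm(residual))
```

Each iteration strictly decreases the residual energy, so the few atoms selected first carry the dominant local features, which is exactly the compactness property exploited above.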

An efficient view-invariant framework for the recognition of human activities from an input video sequence is presented. The proposed framework is composed of three consecutive modules: (i) people are detected and located by background subtraction, (ii) view-invariant spatiotemporal templates are created for different activities, and (iii) template matching is performed for view-invariant activity recognition. The foreground objects present in a scene are extracted using change detection and background modeling. The view-invariant templates are constructed using the motion history images and object shape information for different human activities in a video sequence. For matching the spatiotemporal templates of the various activities, moment invariants and the Mahalanobis distance are used. The proposed approach is tested successfully on our own viewpoint dataset, the KTH action recognition dataset, the i3DPost multiview dataset, the MSR viewpoint action dataset, the VideoWeb multiview dataset, and the WVU multiview human action recognition dataset. From the experimental results and analysis over the chosen datasets, it is observed that the proposed framework is robust, flexible, and efficient with respect to multiple-view activity recognition and scale and phase variations.
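
The final matching step, comparing a moment-invariant feature vector against per-activity template distributions with the Mahalanobis distance, can be sketched as below. The 7-D synthetic "moment" features and the two activity classes are illustrative assumptions; the real pipeline derives the features from motion history images and object shape.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of x from a class with given mean and
    inverse covariance."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def classify(feature, class_stats):
    """Assign the activity whose template distribution is nearest
    in Mahalanobis distance."""
    return min(class_stats, key=lambda c: mahalanobis(feature, *class_stats[c]))

rng = np.random.default_rng(1)
# Toy 7-D "moment invariant" feature vectors for two activity classes.
walk = rng.normal([1, 0, 0, 0, 0, 0, 0], 0.1, size=(50, 7))
wave = rng.normal([0, 1, 0, 0, 0, 0, 0], 0.1, size=(50, 7))
class_stats = {
    "walk": (walk.mean(axis=0), np.linalg.inv(np.cov(walk.T) + 1e-6 * np.eye(7))),
    "wave": (wave.mean(axis=0), np.linalg.inv(np.cov(wave.T) + 1e-6 * np.eye(7))),
}
print(classify(np.array([0.95, 0.02, 0, 0, 0, 0, 0.0]), class_stats))
```

Unlike Euclidean distance, the Mahalanobis distance accounts for the spread and correlation of each activity's template features, which matters when some moments vary much more than others.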

Background and purpose Massive bone allografts are used when surgery causes large segmental defects. Shape-matching is the primary criterion for selection of an allograft. The current selection method, based on 2-dimensional template comparison, is inefficient for 3-dimensional complex bones. We have analyzed a 3-dimensional (3-D) registration method to match the anatomy of the allograft with that of the recipient. Methods 3-D CT-based registration was performed to match the shapes of both bones. We used the registration to align the allograft volume onto the recipient's bone. Hemipelvic allograft selection was tested in 10 virtual recipients with a panel of 10 potential allografts, including one from the recipient himself (trap graft). 4 observers were asked to visually inspect the superposition of allograft over the recipient, to classify the allografts into 4 categories according to the matching of anatomic zones, and to select the 3 best matching allografts. The results obtained using the registration method were compared with those from a previous study on the template method. Results Using the registration method, the observers systematically detected the trap graft. Selections of the 3 best matching allografts performed using registration and template methods were different. Selection of the 3 best matching allografts was improved by the registration method. Finally, reproducibility of the selection was improved when using the registration method. Interpretation 3-D CT registration provides more useful information than the template method but the final decision lies with the surgeon, who should select the optimal allograft according to his or her own preferences and the needs of the recipient. PMID:20175643

We studied how learning changes the processing of a low-level Gabor stimulus, using a classification-image method (psychophysical reverse correlation) and a task where observers discriminated between slight differences in the phase (relative alignment) of a target Gabor in visual noise. The method estimates the internal "template" that describes how the visual system weights the input information for decisions. One popular idea has been that learning makes the template more like an ideal Bayesian weighting; however, the evidence has been indirect. We used a new regression technique to directly estimate the template weight change and to test whether the direction of reweighting is significantly different from an optimal learning strategy. The subjects trained the task for six daily sessions, and we tested the transfer of training to a target in an orthogonal orientation. Strong learning and partial transfer were observed. We tested whether task precision (difficulty) had an effect on template change and transfer: Observers trained in either a high-precision (small, 60° phase difference) or a low-precision task (180°). Task precision did not have an effect on the amount of template change or transfer, suggesting that task precision per se does not determine whether learning generalizes. Classification images show that training made observers use more task-relevant features and unlearn some irrelevant features. The transfer templates resembled partially optimized versions of templates in training sessions. The template change direction resembles ideal learning significantly but not completely. The amount of template change was highly correlated with the amount of learning. PMID:27559720

Gravitational waves from coalescing compact binaries are one of the most promising sources for detectors such as LIGO, Virgo, and GEO600. If the components of the binary possess significant angular momentum (spin), as is likely to be the case if one component is a black hole, spin-induced precession of a binary's orbital plane causes modulation of the gravitational-wave amplitude and phase. If the templates used in a matched-filter search do not accurately model these effects then the sensitivity, and hence the detection rate, will be reduced. We investigate the ability of several search pipelines to detect gravitational waves from compact binaries with spin. We use the post-Newtonian approximation to model the inspiral phase of the signal and construct two new template banks using the phenomenological waveforms of Buonanno, Chen, and Vallisneri [A. Buonanno, Y. Chen, and M. Vallisneri, Phys. Rev. D 67, 104025 (2003)]. We compare the performance of these template banks to that of banks constructed using the stationary phase approximation to the nonspinning post-Newtonian inspiral waveform currently used by LIGO and Virgo in the search for compact binary coalescence. We find that, at the same false alarm rate, a search pipeline using phenomenological templates is no more effective than a pipeline which uses nonspinning templates. We recommend the continued use of the nonspinning stationary phase template bank until the false alarm rate associated with templates which include spin effects can be substantially reduced.

Cylindrical objects made usually of fired clay but sometimes of stone were found at the Yarmukian Pottery Neolithic sites of Sha‘ar HaGolan and Munhata (first half of the 8th millennium BP) in the Jordan Valley. Similar objects have been reported from other Near Eastern Pottery Neolithic sites. Most scholars have interpreted them as cultic objects in the shape of phalli, while others have referred to them in more general terms as “clay pestles,” “clay rods,” and “cylindrical clay objects.” Re-examination of these artifacts leads us to present a new interpretation of their function and to suggest a reconstruction of their technology and mode of use. We suggest that these objects were components of fire drills and consider them the earliest evidence of a complex technology of fire ignition, which incorporates the cylindrical objects in the role of matches. PMID:22870306

In this dissertation, the Perfectly Matched Multiscale Simulations (PMMS) method, a discrete-to-continuum multiscale computation method, is studied, revised, and extended. In particular, the role of the Perfectly Matched Layer (PML) in PMMS is carefully studied. We show that, instead of following the PML theory of continua, the PML equations of motion in PMMS can be derived by stretching the inter-atomic equilibrium distance. As a result, the displacement solution in the PML region has the desired spatial damping property. It is also shown that the dispersion relationship in the PML region differs from that in the original lattice, and a reflection coefficient is computed. We also incorporate the local Quasicontinuum (QC) theory with the cohesive Finite Element (FE) method to form a cohesive QC scheme that can handle arbitrary discontinuities. This idea is built into the PMMS method to simulate a moving screw dislocation. The second part of the dissertation extends PMMS to finite temperature. A multiscale thermodynamics is proposed based on the idea of distributed coarse-scale thermostats. Each coarse-scale node is viewed as a thermostat and has a group of atoms associated with it. The atomic motion at the fine-scale level is governed by Nose-Hoover dynamics. At the coarse scale, an expression for a coarse-grained Helmholtz free energy is derived, and coupled thermo-mechanical equations are formulated based on it. With the proposed framework, the finite-temperature PMMS method is capable of simulating problems with drastic temperature changes. Several numerical examples are computed to validate the method.

Stereo correspondence is hard because different image features can look alike. We propose a measure for the ambiguity of image points that allows matching distinctive points first and breaks down the matching task into smaller and separate subproblems. Experiments with an algorithm based on this measure demonstrate the ensuing efficiency and low likelihood of incorrect matches.
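The idea of matching distinctive points first can be sketched in a few lines. The ambiguity score below (ratio of best to second-best match cost) and the greedy ordering are illustrative stand-ins for the paper's actual measure, not a reproduction of it:

```python
# Sketch: score each left-image point by the ambiguity of its best match,
# then assign the least ambiguous (most distinctive) points first.
def ambiguity(costs):
    """Ratio of best to second-best match cost; values near 1.0 mean ambiguous."""
    best, second = sorted(costs)[:2]
    return best / second if second else 1.0

def match_distinctive_first(cost_matrix):
    """cost_matrix[i][j] = dissimilarity of left point i and right point j.
    Returns {left_index: right_index}, matching unambiguous points first."""
    order = sorted(range(len(cost_matrix)),
                   key=lambda i: ambiguity(cost_matrix[i]))
    taken, matches = set(), {}
    for i in order:
        j = min((j for j in range(len(cost_matrix[i])) if j not in taken),
                key=lambda j: cost_matrix[i][j], default=None)
        if j is not None:
            matches[i] = j
            taken.add(j)
    return matches

# Point 0 is distinctive (one clearly lowest cost); point 1 is ambiguous.
costs = [[0.1, 0.9, 0.8],
         [0.4, 0.5, 0.45]]
print(match_distinctive_first(costs))  # → {0: 0, 1: 2}
```

Matching the distinctive point first removes its column from contention, which is what shrinks the remaining subproblems.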

We discuss the temporal efficiency of template-directed polymer synthesis, such as DNA replication and transcription, under a given template string. To weigh the synthesis speed and accuracy on the same scale, we propose a template-directed synthesis (TDS) rate, which contains an expression analogous to that for the Shannon entropy. Increasing the synthesis speed accelerates the TDS rate, but the TDS rate is lowered if the produced sequences are diversified. We apply the TDS rate to some production system models and investigate how the balance between the speed and the accuracy is affected by changes in the system conditions.
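The paper's exact expression is not reproduced here, but the trade-off it describes can be illustrated with a toy score that grows with synthesis speed and shrinks as the Shannon entropy of the produced sequence distribution rises. The functional form below is an assumption for illustration only:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def toy_tds_rate(speed, product_probs):
    """Illustrative TDS-rate-like score (hypothetical form): faster synthesis
    raises it; a diversified (high-entropy) product distribution lowers it.
    Entropy is normalized by its maximum, log2 of the number of outcomes."""
    max_entropy = math.log2(len(product_probs))
    return speed * (1.0 - shannon_entropy(product_probs) / max_entropy)

accurate = [0.97, 0.01, 0.01, 0.01]   # mostly one product sequence
diverse = [0.25, 0.25, 0.25, 0.25]    # fully diversified products
print(toy_tds_rate(1.0, accurate) > toy_tds_rate(1.0, diverse))  # → True
```

A uniform product distribution drives the toy rate to zero regardless of speed, mirroring the abstract's point that diversification lowers the TDS rate even when synthesis is fast.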

This work demonstrates the fabrication, by folding paper, of partially mineralized scaffolds in 3D shapes, and the deposition of calcium phosphate by osteoblasts cultured in these scaffolds. This process generates centimeter-scale free-standing structures composed of paper supporting regions of calcium phosphate deposited by osteoblasts. This work is the first demonstration that paper can be used as a scaffold to induce template-guided mineralization by osteoblasts. Because paper has a porous structure, it allows transport of O2 and nutrients across its entire thickness. Paper supports a uniform distribution of cells upon seeding in hydrogel matrices, and allows growth, remodelling, and proliferation of cells. Scaffolds made of paper make it possible to construct 3D tissue models easily by tuning material properties such as thickness, porosity, and density of chemical functional groups. Paper offers a new approach to study mechanisms of biomineralization, and perhaps ultimately new techniques to guide or accelerate the repair of bone. PMID:27277575

One aspect of the present invention relates to a method of preparing a fibrous protein smectic hydrogel by way of a solvent templating process, comprising the steps of pouring an aqueous fibrous protein solution into a container comprising a solvent that is not miscible with water; sealing the container and allowing it to age at about room temperature; and collecting the resulting fibrous protein smectic hydrogel and allowing it to dry. Another aspect of the present invention relates to a method of obtaining predominantly one enantiomer from a racemic mixture, comprising the steps of pouring an aqueous fibrous protein solution into a container comprising a solvent that is not miscible with water; sealing the container and allowing it to age at about room temperature; allowing the enantiomers of the racemic mixture to diffuse selectively into the smectic hydrogel in solution; removing the smectic hydrogel from the solution; rinsing predominantly one enantiomer from the surface of the smectic hydrogel; and extracting predominantly one enantiomer from the interior of the smectic hydrogel. The present invention also relates to a smectic hydrogel prepared according to an aforementioned method.

Configurable systems offer increased performance by providing hardware that matches the computational structure of a problem. This hardware is currently programmed with CAD tools and explicit library calls. To attain widespread acceptance, configurable computing must become transparently accessible from high-level programming languages, but the changeable nature of the target hardware presents a major challenge to traditional compiler technology. A compiler for a configurable computer should optimize the use of functions embedded in hardware and schedule hardware reconfigurations. The hurdles to be overcome in achieving this capability are similar in some ways to those facing compilation for heterogeneous systems. For example, current traditional compilers have neither an interface to accept new primitive operators, nor a mechanism for applying optimizations to new operators. We are building a compiler for heterogeneous computing, called Scale, which replaces the traditional monolithic compiler architecture with a flexible framework. Scale has three main parts: a translation director, a compilation library, and a persistent store which holds our intermediate representation as well as other data structures. The translation director exploits the framework's flexibility by using architectural information to build a plan to direct each compilation. The compilation library serves as a toolkit for use by the translation director. Our compiler intermediate representation, Score, facilitates the addition of new IR nodes by distinguishing features used in defining nodes from properties on which transformations depend. In this paper, we present an overview of the Scale architecture and its capabilities for dealing with heterogeneity, followed by a discussion of how those capabilities apply to problems in configurable computing. We then address aspects of configurable computing that are likely to require extensions to our approach and propose some extensions.

Deep Lake in Antarctica is a globally isolated, hypersaline system that remains liquid at temperatures down to -20 °C. By analyzing metagenome data and genomes of four isolates we assessed genome variation and patterns of gene exchange to learn how the lake community evolved. The lake is completely and uniformly dominated by haloarchaea, comprising a hierarchically structured, low-complexity community that differs greatly from temperate and tropical hypersaline environments. The four Deep Lake isolates represent distinct genera (∼85% 16S rRNA gene similarity and ∼73% genome average nucleotide identity) with genomic characteristics indicative of niche adaptation, and collectively account for ∼72% of the cellular community. Network analysis revealed a remarkable level of intergenera gene exchange, including the sharing of long contiguous regions (up to 35 kb) of high identity (∼100%). Although the genomes of closely related Halobacterium, Haloquadratum, and Haloarcula (>90% average nucleotide identity) shared regions of high identity between species or strains, the four Deep Lake isolates were the only distantly related haloarchaea to share long high-identity regions. Moreover, the Deep Lake high-identity regions did not match to any other hypersaline environment metagenome data. The most abundant species, tADL, appears to play a central role in the exchange of insertion sequences, but not the exchange of high-identity regions. The genomic characteristics of the four haloarchaea are consistent with a lake ecosystem that sustains a high level of intergenera gene exchange while selecting for ecotypes that maintain sympatric speciation. The peculiarities of this polar system restrict which species can grow and provide a tempo and mode for accentuating gene exchange. PMID:24082106

The Mikulski Archive for Space Telescopes (MAST) is a NASA-funded archive for a wide range of astronomical missions, primarily supporting space-based UV and optical telescopes. What is less well-known is that MAST provides much more than just a final resting place for primary data products and documentation from these missions. The MAST Discovery Portal is our new search interface that integrates all the missions that MAST supports into a single interface, allowing users to discover (and retrieve) data from other missions that overlap with their targets of interest. In addition to searching MAST, the Portal allows users to search the Virtual Observatory, granting access to data from thousands of collections registered with the VO, including large missions spanning the electromagnetic spectrum (e.g., Chandra, SDSS, Spitzer, 2MASS, WISE). The Portal features table import/export, coordinate-based cross-matching, dynamic chart plotting, and the AstroView sky viewer with footprint overlays. We highlight some of these capabilities with science-driven examples. MAST also accepts High-Level Science Products (HLSPs) from the community. These HLSPs are user-generated data products that can be related to a MAST-supported mission. MAST provides a permanent archive for these data with linked references, and integrates it within MAST infrastructure and services. We highlight some of the most recent HLSPs MAST has released, including the HST Frontier Fields, GALEX All-Sky Diffuse Radiation Mapping, a survey of the intergalactic medium with HST-COS, and one of the most complete line lists ever derived for a white dwarf using FUSE and HST-STIS. These HLSPs generate substantial interest from the community, and are an excellent way to increase visibility and ensure the longevity of your data.

The major difficulty in stereo vision is the correspondence problem that requires matching features in two stereo images. Researchers describe a constraint-based stereo matching technique using local geometric constraints among edge segments to limit the search space and to resolve matching ambiguity. Edge segments are used as image features for stereo matching. Epipolar constraint and individual edge properties are used to determine possible initial matches between edge segments in a stereo image pair. Local edge geometric attributes such as continuity, junction structure, and edge neighborhood relations are used as constraints to guide the stereo matching process. The result is a locally consistent set of edge segment correspondences between stereo images. These locally consistent matches are used to generate higher-level hypotheses on extended edge segments and junctions to form more global contexts to achieve global consistency.

Accurate automated alignment of laser beams in the National Ignition Facility (NIF) is essential for achieving the extreme temperatures and pressures required for inertial confinement fusion. The alignment achieved by the integrated control systems relies on algorithms processing video images to determine the position of the laser beam images in real time. Alignment images that exhibit wide variations in beam quality require a matched-filter algorithm for position detection. One challenge in designing a matched-filter based algorithm is to construct a filter template that is resilient to variations in imaging conditions while guaranteeing accurate position determination. A second challenge is to process the image as fast as possible. This paper describes the development of a new analytical template that captures key recurring features present in the beam image to accurately estimate the beam position under good image quality conditions. Depending on the features present in a particular beam, the analytical template allows us to create a highly tailored template containing only those selected features. The second objective is achieved by exploiting the parallelism inherent in the algorithm to accelerate processing using parallel hardware that provides significant performance improvement over conventional processors. In particular, a Xilinx Virtex II Pro FPGA hardware implementation processing 32 templates provided a speed increase of about 253 times over an optimized software implementation running on a 2.0 GHz AMD Opteron core.
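A matched filter reduces, at its core, to sliding a template over the data and taking the offset of peak correlation. The 1-D sketch below illustrates only that principle; the NIF system described above works on 2-D video images with tailored analytical templates and FPGA parallelism:

```python
# Minimal 1-D matched-filter position estimate (illustrative only):
# slide the template over a noisy signal, keep the offset of peak correlation.
def matched_filter_position(signal, template):
    best_off, best_score = 0, float("-inf")
    for off in range(len(signal) - len(template) + 1):
        score = sum(s * t for s, t in zip(signal[off:], template))
        if score > best_score:
            best_off, best_score = off, score
    return best_off

template = [1.0, 2.0, 1.0]                      # shape of a recurring feature
signal = [0.1, 0.0, 1.1, 2.0, 0.9, 0.1, 0.0]    # noisy observation
print(matched_filter_position(signal, template))  # → 2
```

Because each offset's score is independent, the loop parallelizes trivially, which is the property the FPGA implementation exploits.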

Despite the success of wavelet decompositions in other areas of statistical signal and image processing, current wavelet-based image models are inadequate for modeling patterns in images, due to the presence of unknown transformations (e.g., translation, rotation, location of lighting source) inherent in most pattern observations. In this paper we introduce a hierarchical wavelet-based framework for modeling patterns in digital images. This framework takes advantage of the efficient image representations afforded by wavelets, while accounting for unknown translation and rotation. Given a trained model, we can use this framework to synthesize pattern observations. If the model parameters are unknown, we can infer them from labeled training data using TEMPLAR (Template Learning from Atomic Representations), a novel template learning algorithm with linear complexity. TEMPLAR employs minimum description length (MDL) complexity regularization to learn a template with a sparse representation in the wavelet domain. We discuss several applications, including template learning, pattern classification, and image registration.

Amesos2 is a templated direct sparse solver package. Amesos2 provides interfaces to direct sparse solvers, rather than providing native solver capabilities. Amesos2 is a derivative work of the Trilinos package Amesos.

A lipid bilayer on a nano-template comprising a nanotube or nanowire and a lipid bilayer around the nanotube or nanowire. One embodiment provides a method of fabricating a lipid bilayer on a nano-template comprising the steps of providing a nanotube or nanowire and forming a lipid bilayer around the polymer cushion. One embodiment provides a protein pore in the lipid bilayer. In one embodiment the protein pore is sensitive to specific agents.

Quantum image processing (QIP) refers to quantum-based methods for speeding up image processing algorithms. Many quantum image processing schemes claim that their efficiency is theoretically higher than that of their corresponding classical schemes. However, most of them do not consider the problem of measurement. Measurement causes the quantum state to collapse: executing the algorithm once, users can measure the final state only one time. Therefore, if users want to obtain the results (the processed images), they must execute the algorithm many times and then measure the final state many times to get all the pixels' values. If the measurement process is taken into account, whether or not the algorithms are really efficient needs to be reconsidered. In this paper, we try to solve the problem of measurement and give a quantum image matching algorithm. Unlike most QIP algorithms, our scheme concerns only one pixel (the target pixel) instead of the whole image. It modifies the probability of pixels based on Grover's algorithm so that the target pixel is measured with higher probability, and the measurement step is executed only once. An example is given to explain the algorithm more vividly. Complexity analysis indicates that the quantum scheme's complexity is O(2n), in contrast to the classical scheme's complexity of O(2^{2n+2m}), where m and n are integers related to the size of the images.
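The amplitude-boosting step based on Grover's algorithm can be illustrated with a small classical simulation of the state vector. This is the generic Grover iteration (oracle sign flip plus inversion about the mean), not the paper's full matching scheme:

```python
import math

def grover_amplify(n_items, target):
    """Classical simulation of Grover amplitude amplification: boost the
    amplitude of one marked index (here, standing in for the target pixel)."""
    amps = [1.0 / math.sqrt(n_items)] * n_items   # uniform superposition
    n_iters = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(n_iters):
        amps[target] = -amps[target]              # oracle: flip marked sign
        mean = sum(amps) / n_items                # diffusion: invert about mean
        amps = [2 * mean - a for a in amps]
    return [a * a for a in amps]                  # measurement probabilities

probs = grover_amplify(16, target=5)
print(max(range(16), key=probs.__getitem__))      # → 5
```

After roughly (π/4)√N iterations the marked index carries almost all of the probability, which is why a single measurement then suffices to read out the target pixel with high confidence.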

This paper defines the best practices for documenting ocean acidification (OA) data and presents a framework for an OA metadata template. Metadata is structured information that describes and locates an information resource. It is the key to ensuring that a data set will be accessible into the future. With the rapid expansion of studies on biological responses to OA, the lack of a common metadata template to document the resulting data poses a significant hindrance to effective OA data management efforts. In this paper, we present a metadata template that can be applied to a broad spectrum of OA studies, including those studying the biological responses to OA. The "variable metadata section", which includes the variable name, observation type, whether the variable is a manipulation condition or response variable, and the biological subject on which the variable is studied, forms the core of this metadata template. Additional metadata elements, such as investigators, temporal and spatial coverage, and data citation, are essential components to complete the template. We explain the structure of the template, and define many metadata elements that may be unfamiliar to researchers.
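A minimal sketch of such a record in code, with the "variable metadata section" at its core. The field names and values below are illustrative assumptions, not the official template's exact vocabulary:

```python
# Hypothetical OA metadata record; structure follows the sections described
# above (variable metadata core, plus investigators, coverage, citation).
variable_metadata = {
    "variable_name": "net calcification rate",        # assumed example variable
    "observation_type": "laboratory measurement",
    "manipulation_or_response": "response",           # vs. manipulation condition
    "biological_subject": "Crassostrea gigas",        # subject the variable is studied on
}

oa_metadata_record = {
    "investigators": ["A. Researcher"],               # placeholder name
    "temporal_coverage": {"start": "2015-06-01", "end": "2015-08-31"},
    "spatial_coverage": {"lat": 48.5, "lon": -123.0},
    "variables": [variable_metadata],                 # the core section
    "data_citation": "(citation to be assigned)",
}
print(sorted(oa_metadata_record["variables"][0]))
```

Keeping the variable section as a list allows one record to document both manipulation conditions (e.g., pCO2 level) and the biological response variables measured under them.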

The template principle has originated from the chromosome theory of inheritance and claims to be the universal paradigm of modern biology. It considers the mechanisms of inheritance and different types of variability from a unified standpoint. The type I template processes (TP I) operate with linear templates: DNA and RNA. TP II deal with spatial, or conformational, templates of protein nature. TP II are based on variation and reproduction of the spatial structure of proteins and do not affect their primary structure. They are involved in many pathological and adaptive processes in living systems. The universal properties of TP I, ambiguity and repair (correction), are common to all three stages of each template process: initiation, elongation, and termination. These properties are typical for TP II as well. Ambiguity and correction at the stages of initiation and termination of TP are prerequisites for the regulation of template processes. The variation in this regulation underlies the complexity and progressive evolution of living systems. PMID:26087617

Metadata is structured information that describes, explains, and locates an information resource (e.g., data). It is often coarsely described as data about data, and documents information such as what was measured, by whom, when, where, and how it was sampled and analyzed, and with what instruments. Metadata is essential to ensuring the survivability and accessibility of the data into the future. With the rapid expansion of biological-response ocean acidification (OA) studies, the lack of a common metadata template to document this type of data has become a significant gap for ocean acidification data management efforts. In this paper, we present a metadata template that can be applied to a broad spectrum of OA studies, including those studying the biological responses of organisms to ocean acidification. The "variable metadata section", which includes the variable name, observation type, whether the variable is a manipulation condition or response variable, and the biological subject on which the variable is studied, forms the core of this metadata template. Additional metadata elements, such as principal investigators, temporal and spatial coverage, platforms for the sampling, and data citation, are essential components to complete the template. We explain the structure of the template, and define many metadata elements that may be unfamiliar to researchers. For that reason, this paper can serve as a user's manual for the template.

A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation), and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity.
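For intuition about the affine parameters involved, the sketch below recovers a 2-D affine transformation exactly from three non-collinear point correspondences. This is a textbook baseline for the "single affine transformation" setting, not the AIPS framework itself:

```python
def solve3(M, v):
    """Solve a 3x3 linear system M x = v by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(M)
    sol = []
    for k in range(3):
        Mk = [row[:] for row in M]
        for r in range(3):
            Mk[r][k] = v[r]              # replace column k with v
        sol.append(det(Mk) / D)
    return sol

def affine_from_points(src, dst):
    """Recover (a, b, c, d, e, f) with x' = a*x + b*y + c, y' = d*x + e*y + f
    from three non-collinear correspondences src[i] -> dst[i]."""
    M = [[x, y, 1.0] for x, y in src]
    abc = solve3(M, [x for x, _ in dst])
    def_ = solve3(M, [y for _, y in dst])
    return abc + def_

src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 3), (4, 3), (2, 6)]           # scale x by 2, y by 3, translate (2, 3)
print(affine_from_points(src, dst))      # → [2.0, 0.0, 2.0, 0.0, 3.0, 3.0]
```

With more than three (noisy) correspondences one would solve the same equations in a least-squares sense; the appeal of an invariant formulation like AIPS is precisely that no such parameters need be known in advance.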

An interesting problem that has concerned forensic scientists for many years is their need for accurate, reliable, and objective methods for performing fracture matching examinations. The aim of these fracture matching methods is to determine if two broken object halves can be matched together, e.g., when one half is recovered at a crime scene, while the other half is found in the possession of a suspect. In this paper we discuss the use of a commercial white-light profilometer system for obtaining 2D/3D image surface scans of multiple fractured objects. More specifically, we explain the use of this system for digitizing the fracture surface of multiple facing halves of several snap-off blade knives. Next, we discuss the realization and evaluation of several image processing methods for trying to match the obtained image scans corresponding to each of the broken-off blade elements used in our experiments. The algorithms that were tested and evaluated include: global template matching based on image correlation, and multiple template matching based on local image correlation, using so-called "vote-map" computation. Although many avenues for further research still remain possible, we show that the second method yields very good results for allowing automated searching and matching of the imaged fracture surfaces for each of the examined blade elements.
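Correlation-based template matching of the kind evaluated here can be sketched in 1-D with normalized cross-correlation. This toy profile matcher is for illustration only, not the commercial profilometer pipeline or the vote-map method:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length profiles (range [-1, 1])."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da, db = [x - ma for x in a], [x - mb for x in b]
    denom = math.sqrt(sum(x * x for x in da) * sum(x * x for x in db))
    return sum(x * y for x, y in zip(da, db)) / denom if denom else 0.0

def best_alignment(surface, template):
    """Slide the template along a 1-D fracture-surface profile; return the
    offset with the highest normalized correlation."""
    scores = [ncc(surface[o:o + len(template)], template)
              for o in range(len(surface) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

template = [0.0, 1.0, 3.0, 1.0]                   # profile of one blade half
surface = [0.2, 0.1, 0.0, 1.1, 2.9, 1.0, 0.1]     # scan of the candidate half
print(best_alignment(surface, template))           # → 2
```

The vote-map variant described in the paper extends this idea by correlating many small local windows and accumulating their preferred offsets, which is more robust to local fracture-surface damage than a single global correlation.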

Funding used to support a portion of the Nuclear Engineering Educational Activities: upgrade of teaching labs, student support to attend professional conferences, and salary support for graduate students. The US Department of Energy (DOE) funded the Purdue University School of Nuclear Engineering during the period of five academic years covered in this report, starting in the academic year 1996-97 and ending in the academic year 2000-2001. The total amount of funding for the grant received from DOE is $416K. In the 1990s, Nuclear Engineering education in the US experienced a significant slowdown. Student enrollment, research support, the number of degrees at all levels (BS, MS, and PhD), the number of accredited programs, and University Research and Training Reactors all went through a decline to alarmingly low levels. Several departments closed down, while some were amalgamated with other academic units (Mechanical Engineering, Chemical Engineering, etc.). The School of Nuclear Engineering at Purdue University faced a major challenge when, in the mid-1990s, our total undergraduate enrollment for the sophomore, junior, and senior years dropped into the low 30s. The DOE Matching Grant program greatly strengthened Purdue's commitment to the Nuclear Engineering discipline and has helped to dramatically improve our undergraduate and graduate enrollment, attract new faculty, and raise the School of Nuclear Engineering's status within the University and on the national scene (our undergraduate enrollment has actually tripled and stands at an all-time high of over 90 students; total enrollment currently exceeds 110 students). In this final technical report we outline and summarize how the grant was expended at Purdue University.

In cell extracts of Xenopus eggs which oscillate between S and M phases of the cell cycle, the onset of mitosis is blocked by the presence of incompletely replicated DNA. In this report, we show that several artificial DNA templates (M13 single-stranded DNA and double-stranded plasmid DNA) can trigger this feedback pathway, which inhibits mitosis. Single-stranded M13 DNA is much more effective than double-stranded plasmid DNA at inhibiting the onset of mitosis. Furthermore, we have shown that low levels of M13 single-stranded DNA and high levels of double-stranded plasmid DNA can elevate the tyrosine kinase activity responsible for phosphorylating p34cdc2, thereby inactivating maturation-promoting factor and inhibiting entry into mitosis. This constitutes a simplified system with which to study the signal transduction pathway from the DNA template to the tyrosine kinase responsible for inhibiting p34cdc2 activity. PMID:1320197

Nuclear and commercial non-nuclear technologies that have the potential of meeting the environmental restoration, decontamination and decommissioning, and high-level waste management objectives are being assessed and evaluated. A detailed comparison of the innovative technologies available will be performed to determine the safest and most economical technology for meeting these objectives. Information derived from this effort will be matched with the multiple objectives of the environmental restoration, decontamination and decommissioning, and high-level waste management effort to ensure that the best, most economical, and safest technologies are used in decision making at USDOE-SRS. Technology-related variables will be developed and the resulting data formatted and computerized for multimedia systems. The multimedia system will be made available to technology developers and evaluators to ensure that the safest and most economical technologies are developed for use at SRS and other DOE sites.

In 2013, the Integrating the Healthcare Enterprise (IHE) Radiology workgroup developed the Management of Radiology Report Templates (MRRT) profile, which defines both the format of radiology reporting templates using an extension of Hypertext Markup Language version 5 (HTML5), and the transportation mechanism to query, retrieve, and store these templates. Of 200 English-language report templates published by the Radiological Society of North America (RSNA), initially encoded as text and in an XML schema language, 168 have been converted successfully into MRRT using a combination of automated processes and manual editing; conversion of the remaining 32 templates is in progress. The automated conversion process applied Extensible Stylesheet Language Transformation (XSLT) scripts, an XML parsing engine, and a Java servlet. The templates were validated for proper HTML5 and MRRT syntax using web-based services. The MRRT templates allow radiologists to share best-practice templates across organizations and have been uploaded to the template library to supersede the prior XML-format templates. By using MRRT transactions and MRRT-format templates, radiologists will be able to directly import and apply templates from the RSNA Report Template Library in their own MRRT-compatible vendor systems. The availability of MRRT-format reporting templates will stimulate adoption of the MRRT standard and is expected to advance the sharing and use of templates to improve the quality of radiology reports. PMID:25776768

This paper defines the best practices for documenting ocean acidification (OA) metadata and presents a framework for an OA metadata template. Metadata is structured information that describes and locates an information resource. It is the key to ensuring that a data set will survive and continue to be accessible into the future. With the rapid expansion of studies on biological responses of organisms to OA, the lack of a common metadata template to document the resulting data poses a significant hindrance to effective OA data management efforts. In this paper, we present a metadata template that can be applied to a broad spectrum of OA studies, including those studying the biological responses of organisms to OA. The "variable metadata section", which includes the variable name, observation type, whether the variable is a manipulation condition or response variable, and the biological subject on which the variable is studied, forms the core of this metadata template. Additional metadata elements, such as investigators, temporal and spatial coverage, platforms for the sampling, and data citation, are essential components to complete the template. We also explain the structure of the template, and define many metadata elements that may be unfamiliar to researchers. Template availability: http://ezid.cdlib.org/id/doi:10.7289/V5C24TCK (DOI: 10.7289/V5C24TCK; NOAA Institutional Repository accession number ocn881471371).

A high-level language was designed to control the process of conducting an experiment using the computer "Elektrinika-1001". Program examples are given for controlling the measuring and actuating devices. The procedure for including these programs in the suggested high-level language is described.

This paper reports on the implementation of a high-level contextualized mathematics curriculum by 12 adult basic education instructors in a midwestern state. The 10-week pilot curriculum embedded high-level mathematics in contexts that were familiar to adult learners. Instructors' weekly online posts were coded, and the following themes emerged: (a)…

Few adult second language (L2) learners successfully attain high-level proficiency. Although decades of research on beginning to intermediate stages of L2 learning have identified a number of predictors of the rate of acquisition, little research has examined factors relevant to predicting very high levels of L2 proficiency. The current study,…

Information concerning the condition of the high-level waste tanks at the Western New York State Nuclear Service Center near West Valley, New York, is presented. This information is to be used in evaluating the safety of continued storage and in the development of alternatives for final disposition of the high-level waste.

The High-Level Waste Data Base is a menu-driven PC data base developed as part of OCRWM's technical data base on the characteristics of potential repository wastes, which also includes spent fuel and other materials. This programmer's guide completes the documentation for the High-Level Waste Data Base, the user's guide having been published previously. 3 figs.

The concept of an RNA world in the chemical origin of life is appealing, as nucleic acids are capable of both information storage and acting as templates that catalyse the synthesis of complementary molecules. Template-directed synthesis has been demonstrated for homogeneous oligonucleotides that, like natural nucleic acids, have 3',5' linkages between the nucleotide monomers. But it seems likely that prebiotic routes to RNA-like molecules would have produced heterogeneous molecules with various kinds of phosphodiester linkages and both linear and cyclic nucleotide chains. Here we show that such heterogeneity need be no obstacle to the templating of complementary molecules. Specifically, we show that heterogeneous oligocytidylates, formed by the montmorillonite clay-catalysed condensation of activated monomers, can serve as templates for the synthesis of oligoguanylates. Furthermore, we show that oligocytidylates that are exclusively 2',5'-linked can also direct synthesis of oligoguanylates. Such heterogeneous templating reactions could have increased the diversity of the pool of protonucleic acids from which life ultimately emerged.

Gravitational radiation from a slightly distorted black hole with ringdown waveform is well understood in general relativity. It provides a probe for direct observation of black holes and determination of their physical parameters, masses and angular momenta (Kerr parameters). For ringdown searches using data of gravitational wave detectors, matched filtering technique is useful. In this paper, we describe studies on problems in matched filtering analysis in realistic gravitational wave searches using observational data. Above all, we focus on template constructions, matches or signal-to-noise ratios (SNRs), detection probabilities for Galactic events, and accuracies in evaluation of waveform parameters or black hole hairs. In template design for matched filtering, search parameter ranges and template separations are determined by requirements from acceptable maximum loss of SNRs, detection efficiencies, and computational costs. In realistic searches using observational data, however, effects of nonstationary noises cause decreases of SNRs, and increases of errors in waveform parameter determinations. These problems will potentially arise in any matched filtering searches for any kind of waveforms. To investigate them, we have performed matched filtering analysis for artificial ringdown signals which are generated with Monte-Carlo technique and injected into the TAMA300 observational data. We employed an efficient method to construct a bank of ringdown filters recently proposed by Nakano et al., and use a template bank generated from a criterion such that losses of SNRs of any signals do not exceed 2%. We found that this criterion is fulfilled in ringdown searches using TAMA300 data, by examining distribution of SNRs of simulated signals. It is also shown that with TAMA300 sensitivity, the detection probability for Galactic ringdown events is about 50% for black holes of masses greater than 20 M⊙ with SNR>10. The accuracies in waveform parameter
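
The core matched-filtering statistic can be illustrated with a toy sketch. The example below assumes white Gaussian noise and a time-domain inner product, a deliberate simplification of the coloured-noise, frequency-domain filtering used in real ringdown searches; the waveform parameters are arbitrary.

```python
# Minimal matched-filtering sketch in white noise (a simplification of the
# coloured-noise, frequency-domain filtering used in real searches).
import numpy as np

def ringdown(t, f0, tau):
    """Damped sinusoid: the generic ringdown waveform shape."""
    return np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)

rng = np.random.default_rng(0)
dt = 1e-4
t = np.arange(0.0, 0.1, dt)
template = ringdown(t, f0=250.0, tau=0.02)
template /= np.linalg.norm(template)          # unit-norm template

signal = 5.0 * ringdown(t, f0=250.0, tau=0.02)
data = signal + rng.normal(0.0, 1.0, t.size)  # signal buried in white noise

# Matched-filter output: inner product of the data with the unit-norm template.
snr = float(np.dot(data, template))
```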

It is necessary to match food consumption data with food composition data in order to calculate estimates of nutrient intakes and dietary exposure. This can be done manually or through an automated system. As food matching procedures are key to obtaining high quality estimations of nutrient intake...

were also significantly higher and almost double those obtained from non-dry-cleaners. However, reaction time performance on both parallel and serial visual search did not differ between dry cleaners and non-dry-cleaners. Conclusions Exposure to occupational levels of organic solvents is associated with neurotoxicity, which is in turn associated with both low-level visual deficits (such as the perception of contrast and discrimination of colour) and high-level visual deficits (such as the perception of global form and motion), but not with visual search performance. The latter finding indicates that the deficits in visual function are unlikely to be due to changes in general cognitive performance. PMID:25933026

Document aesthetics measures are key to automated document composition. Recently we presented a probabilistic document model (PDM) which is a micro-model for document aesthetics based on a probabilistic modeling of designer choice in document design. The PDM model comes with efficient layout synthesis algorithms once the aesthetic model is defined. A key element of this approach is an aesthetic prior on the parameters of a template encoding aesthetic preferences for template parameters. Parameters of the prior were required to be chosen empirically by designers. In this work we show how probabilistic template models (and hence the PDM cost function) can be learnt directly by observing a designer making design choices in composing sample documents. From such training data our learning approach can learn a quality measure that can mimic some of the design tradeoffs a designer makes in practice.
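
A minimal sketch of the idea of learning a prior from a designer's observed choices, assuming (purely for illustration) a single template parameter with a Gaussian prior fitted by maximum likelihood; the actual PDM prior and cost function are considerably richer, and the parameter name and data below are invented.

```python
# Hedged sketch: fit a Gaussian prior over one hypothetical template
# parameter (a margin width) from a designer's observed choices.
import numpy as np

observed_margins = np.array([18.0, 20.0, 19.5, 21.0, 20.5])  # hypothetical picks
mu = observed_margins.mean()     # MLE mean of the Gaussian prior
sigma = observed_margins.std()   # MLE standard deviation

def log_prior(margin):
    """Log-density of the learnt prior, usable as an aesthetic cost term."""
    return -0.5 * ((margin - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
```

Candidate layouts whose margins sit near the designer's habitual choices score higher under this learnt prior than layouts far from them.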

This paper describes the relationship between C++ templates and partial evaluation. C++ templates were designed to support generic programming but, unintentionally, they also provide the ability to perform compile-time computations and code generation. These features are completely accidental, and as a result their syntax is awkward. By recasting these features in terms of partial evaluation, a much simpler syntax can be achieved. C++ may be regarded as a two-level language in which types are first-class values. Template instantiation resembles an offline partial evaluator. This paper describes preliminary work toward a single mechanism based on partial evaluation which unifies generic programming, compile-time computation and code generation. The language Catat is introduced to demonstrate these ideas.
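
Although the paper's setting is C++, the underlying idea of offline partial evaluation is language-agnostic and can be sketched as follows: the recursion over the "static" input happens at specialisation time, leaving a residual function of the "dynamic" input, much as the compiler expands a template for a fixed parameter. The `power` example is a textbook illustration, not taken from the paper.

```python
# Offline partial evaluation sketch: specialise power(x, n) for a known n.
def specialize_power(n):
    """Partially evaluate power(x, n) for a fixed exponent n.

    The recursion over n runs now (at 'specialisation time'), producing a
    residual function in x only -- analogous to C++ template instantiation
    unrolling a computation for a fixed template parameter.
    """
    if n == 0:
        return lambda x: 1
    rest = specialize_power(n - 1)
    return lambda x: x * rest(x)

cube = specialize_power(3)   # residual program: x * x * x * 1
```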

DNA-templated polyaniline nanowires and networks are synthesized using three different methods. The resulting DNA/polyaniline hybrids are fully characterized using atomic force microscopy, UV-vis spectroscopy and current-voltage measurements. Oxidative polymerization of polyaniline at moderate pH values is accomplished using ammonium persulfate as an oxidant, or alternatively in an enzymatic oxidation by hydrogen peroxide using horseradish peroxidase, or by photo-oxidation using a ruthenium complex as photo-oxidant. Atomic force microscopy shows that all three methods lead to the preferential growth of polyaniline along DNA templates. With ammonium persulfate, polyaniline can be grown on DNA templates already immobilized on a surface. Current-voltage measurements are successfully conducted on DNA/polyaniline networks synthesized by the enzymatic method and the photo-oxidation method. The conductance is found to be consistent with values measured for undoped polyaniline films.

Biometric template protection is indispensable for protecting personal privacy in large-scale deployments of biometric systems. Accuracy, changeability, and security are three critical requirements for template protection algorithms. However, existing template protection algorithms cannot satisfy all of these requirements well. In this paper, we propose a hybrid approach that combines random projection and fuzzy vault to improve performance on all three requirements. A heterogeneous space is designed for properly combining random projection and fuzzy vault in the hybrid scheme. A new chaff point generation method is also proposed to enhance the security of the heterogeneous vault. Theoretical analyses of the proposed hybrid approach in terms of accuracy, changeability, and security are given in this paper. Experimental results on a palmprint database support the theoretical analyses and demonstrate the effectiveness of the proposed hybrid approach. PMID:24982977
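
The random-projection stage alone can be sketched as below (the fuzzy-vault construction and chaff-point generation are not reproduced here). The Gaussian projection matrix and its scaling are illustrative choices, not the paper's exact construction; the point is that a user-specific projection key yields a cancellable template that can be re-issued by changing the key.

```python
# Sketch of cancellable-template generation by random projection.
import numpy as np

def random_projection_template(features, key_seed, out_dim):
    """Project a biometric feature vector with a user-specific random matrix."""
    rng = np.random.default_rng(key_seed)
    d = features.shape[0]
    # Gaussian random matrix; 1/sqrt(out_dim) scaling keeps norms comparable.
    proj = rng.normal(0.0, 1.0 / np.sqrt(out_dim), size=(out_dim, d))
    return proj @ features

features = np.ones(64)                                        # stand-in feature vector
t1 = random_projection_template(features, key_seed=1, out_dim=16)
t2 = random_projection_template(features, key_seed=2, out_dim=16)  # re-issued key
```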

When dealing with pairwise comparisons of stimuli in two fixed observation areas (e.g., one stimulus on the left, one on the right), we say that the stimulus space is regular well-matched if (1) every stimulus is matched by some stimulus in another observation area, and this matching stimulus is determined uniquely up to matching equivalence (two stimuli being equivalent if they always match or do not match any stimulus together); and (2) if a stimulus is matched by another stimulus then it matches it. The regular well-matchedness property has non-trivial consequences for several issues, ranging from the ancient "sorites" paradox to "probability-distance hypothesis" to modeling of discrimination probabilities by means of Thurstonian-type models. We have tested the regular well-matchedness hypothesis for locations of two dots within two side-by-side circles, and for two side-by-side "flower-like" shapes obtained by superposition of two cosine waves with fixed frequencies in polar coordinates. In the location experiment the two coordinates of the dot in one circle were adjusted to match the location of the dot in another circle. In the shape experiment the two cosine amplitudes of one shape were adjusted to match the other shape. The adjustments on the left and on the right alternated in long series according to the "ping-pong" matching scheme developed in Dzhafarov (2006b, J. Math. Psychol., 50, 74-93). The results have been found to be in a good agreement with the regular well-matchedness hypothesis. PMID:21833195

We propose a method for non-rigid face alignment which needs only a single template, such as using a person's smiling face to match their surprised face. First, in order to be robust to outliers caused by complex geometric deformations, a new local feature matching method called K Patch Pairs (K-PP) is proposed. Specifically, inspired by state-of-the-art similarity measures used in template matching, K-PP finds the mutual K nearest neighbors between two images. A weight matrix is then presented to balance the similarity against the number of local matches. Second, we propose a modified Lucas-Kanade algorithm combined with a local matching constraint to solve the non-rigid face alignment, so that a holistic face representation and local features can be jointly modeled in the objective function. Our method thus combines the flexibility of local matching with the robustness of holistic fitting. Furthermore, we show that the optimization problem can be efficiently solved by the inverse compositional algorithm. Comparison results with conventional methods demonstrate our superiority in terms of both accuracy and robustness. PMID:27494319
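
The mutual-K-nearest-neighbour idea behind K-PP can be sketched on toy descriptors: a pair is kept only when each member is among the other's K nearest neighbours. The Euclidean distance and 1-D descriptors below are stand-ins for the patch similarity measure used in the paper.

```python
# Mutual K-nearest-neighbour matching sketch (the core idea behind K-PP).
import numpy as np

def mutual_knn_pairs(desc_a, desc_b, k):
    """Return index pairs (i, j) that are mutual K nearest neighbours."""
    # Pairwise squared Euclidean distances between the two descriptor sets.
    d = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    nn_ab = np.argsort(d, axis=1)[:, :k]    # for each a_i: its k nearest b's
    nn_ba = np.argsort(d, axis=0)[:k, :].T  # for each b_j: its k nearest a's
    pairs = []
    for i in range(desc_a.shape[0]):
        for j in nn_ab[i]:
            if i in nn_ba[j]:               # keep only mutual neighbours
                pairs.append((i, int(j)))
    return pairs

a = np.array([[0.0], [10.0], [20.0]])       # toy 1-D patch descriptors
b = np.array([[0.1], [10.2], [19.8]])
pairs = mutual_knn_pairs(a, b, k=1)
```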

Here we report a simple and scalable colloidal lithography technology for fabricating periodic arrays of gold nanodonuts for sensitive surface plasmon resonance (SPR) analysis. This new bottom-up approach leverages a unique polymer wetting layer between a self-assembled, non-close-packed monolayer silica colloidal crystal and a silicon substrate to template ordered gold nanodonuts with tunable geometries over wafer-sized areas. The processes involved in this templating nanofabrication approach, including spin coating, oxygen plasma etching, and metal sputtering, are all compatible with standard microfabrication technologies. Specular reflection measurements reveal that the efficient electromagnetic coupling of the incident light with the tunable SPR modes of the templated gold nanodonut arrays enables good spectral tunability. Bulk refractive index sensing experiments show that a high SPR sensitivity of ∼758 nm per refractive index unit, which outperforms many plasmonic nanostructures fabricated by both top-down and bottom-up approaches, can be achieved using the templated gold nanodonut arrays. Numerical finite-difference time-domain simulations have also been performed to complement the optical characterization and the theoretical results match well with the experimental measurements.

Gravitational waves from coalescing stellar-mass black hole binaries (BBHs) are expected to be detected by the Advanced Laser Interferometer gravitational-wave observatory and Advanced Virgo. Detection searches operate by matched filtering the detector data using a bank of waveform templates. Traditionally, template banks for BBHs are constructed from intermediary analytical waveform models which are calibrated against numerical relativity simulations and which can be evaluated for any choice of BBH parameters. This paper explores an alternative to the traditional approach, namely, the construction of template banks directly from numerical BBH simulations. Using nonspinning BBH systems as an example, we demonstrate which regions of the mass-parameter plane can be covered with existing numerical BBH waveforms. We estimate the required number and required length of BBH simulations to cover the entire nonspinning BBH parameter plane up to mass ratio 10, thus illustrating that our approach can be used to guide parameter placement of future numerical simulations. We derive error bounds which are independent of analytical waveform models; therefore, our formalism can be used to independently test the accuracy of such waveform models. The resulting template banks are suitable for advanced LIGO searches.
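
The minimal-match criterion that drives template placement can be illustrated in one dimension. The damped-sinusoid templates and the greedy placement below are toys, not the BBH waveforms or the bank-construction method of the paper; only the idea of bounding SNR loss by a minimal match between neighbouring templates is taken from the text.

```python
# Toy 1-D template-bank placement driven by a minimal-match criterion.
import numpy as np

t = np.linspace(0.0, 0.05, 2000)

def unit_template(f0, tau=0.01):
    """Unit-norm damped sinusoid standing in for a waveform template."""
    h = np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)
    return h / np.linalg.norm(h)

def match(f1, f2):
    """Overlap (match) between two unit-norm templates."""
    return float(np.dot(unit_template(f1), unit_template(f2)))

def place_bank(f_lo, f_hi, min_match=0.98, df=0.5):
    """Greedy scan: add a template whenever the match with the last
    placed template falls below the minimal-match threshold."""
    bank = [f_lo]
    f = f_lo + df
    while f <= f_hi:
        if match(bank[-1], f) < min_match:
            bank.append(f)
        f += df
    return bank

bank = place_bank(100.0, 200.0)   # template frequencies covering 100-200 Hz
```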

A piezoelectric nanogenerator has been fabricated using a simple, fast and scalable template-assisted electrodeposition process, by which vertically aligned zinc oxide (ZnO) nanowires were directly grown within a nanoporous polycarbonate (PC) template. The nanowires, having average diameter 184 nm and length 12 μm, are polycrystalline and have a preferred orientation of the [100] axis parallel to the long axis. The output power density of a nanogenerator fabricated from the as-grown ZnO nanowires still embedded within the PC template was found to be 151 ± 25 mW m⁻³ at an impedance-matched load, when subjected to a low-level periodic (5 Hz) impacting force akin to gentle finger tapping. An energy conversion efficiency of ∼4.2% was evaluated for the electrodeposited ZnO nanowires, and the ZnO–PC composite nanogenerator was found to maintain good energy harvesting performance through 24 h of continuous fatigue testing. This is particularly significant given that ZnO-based nanostructures typically suffer from mechanical and/or environmental degradation that otherwise limits their applicability in vibrational energy harvesting. Our template-assisted synthesis of ZnO nanowires embedded within a protective polymer matrix through a single growth process is thus attractive for the fabrication of low-cost, robust and stable nanogenerators.

This paper, building on a previous publication on the special impact of Chinese medicine (CM) theories on supramolecular chemistry, aims to analyze the natural origins of CM and to explain the special impact of the "Qi chromatography" reaction on "imprinting templates" in the supramolecular host of the human body treated with CM. The goal is to reveal CM's properties as a "medical element" whose "imprinting template" autonomization generally takes place in natural supramolecules, and to show that CM pharmacology follows its own approaches, different from those of Western pharmacology. Guided by CM theories, "Qi chromatography" relations were established between CM ingredient groups and the meridian zang-fu viscera. Supramolecular chemistry played a pervasive role in producing the macro-regularities and the special behavior of the "Qi chromatography" impulse, owing to the matching action of all kinds of ingredients on the meridian zang-fu viscera with similar "imprinting templates". Western pharmacology is constructed entirely on the basis of classical single-molecule chemical bonds, whereas CM pharmacology is built up by way of "imprinting templates" acting as multiple weak bonds within a "supramolecular society". CM pharmacology is thus a supramolecular pharmacology dealing with the "molecular society" on the basis of Western pharmacology, employing a dual research approach: quantitative mathematical-physical representation at the macroscopic level and qualitative analysis at the microscopic level. PMID:27071277

We have previously proposed a computational neural-network model by which the complex patterns of retinal image motion generated during locomotion (optic flow) can be processed by specialized detectors acting as templates for specific instances of self-motion. The detectors in this template model respond to global optic flow by sampling image motion over a large portion of the visual field through networks of local motion sensors with properties similar to neurons found in the middle temporal (MT) area of primate extrastriate visual cortex. The model detectors were designed to extract self-translation (heading), self-rotation, as well as the scene layout (relative distances) ahead of a moving observer, and are arranged in cortical-like heading maps to perform this function. Heading estimation from optic flow has been postulated by some to be implemented within the medial superior temporal (MST) area. Others have questioned whether MST neurons can fulfill this role because some of their receptive-field properties appear inconsistent with a role in heading estimation. To resolve this issue, we systematically compared MST single-unit responses with the outputs of model detectors under matched stimulus conditions. We found that the basic physiological properties of MST neurons can be explained by the template model. We conclude that MST neurons are well suited to support heading estimation and that the template model provides an explicit set of testable hypotheses which can guide future exploration of MST and adjacent areas within the primate superior temporal sulcus.

Single photon emission computed tomography (SPECT) imaging with 201Tl or 99mTc agent is used to assess the location or the extent of myocardial infarction or ischemia. A method is proposed to decrease the effect of operator variability in the visual or quantitative interpretation of scintigraphic myocardial perfusion studies. To effect this, the patient's myocardial images (target cases) are registered automatically over a template image, utilizing a nonrigid transformation. The intermediate steps are: 1) Extraction of feature points in both stress and rest three-dimensional (3-D) images. The images are resampled in a polar geometry to detect edge points, which in turn are filtered by the use of a priori constraints. The remaining feature points are assumed to be points on the edges of the left ventricular myocardium. 2) Registration of stress and rest images with a global affine transformation. The matching method is an adaptation of the iterative closest point algorithm. 3) Registration and morphological matching of both stress and rest images on a template using a nonrigid local spline transformation following a global affine transformation. 4) Resampling of both stress and rest images in the geometry of the template. Optimization of the method was performed on a database of 40 pairs of stress and rest images selected to obtain a wide variation of images and abnormalities. Further testing was performed on 250 cases selected from the same database on the basis of the availability of angiographic results and patient stratification. PMID:9533574
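
Step 2's registration can be illustrated with a translation-only iteration of the iterative-closest-point idea (the paper adapts ICP to estimate a full 3-D global affine transform; this sketch is 2-D, estimates only a translation, and is illustrative only).

```python
# One translation-only iteration of the iterative-closest-point (ICP) idea.
import numpy as np

def icp_translation_step(src, dst):
    """Match each src point to its nearest dst point, then shift src
    by the mean residual of those matches (least-squares translation)."""
    d = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    nearest = dst[np.argmin(d, axis=1)]      # closest dst point for each src point
    shift = (nearest - src).mean(axis=0)     # optimal translation for these matches
    return src + shift, shift

dst = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
src = dst + np.array([0.5, -0.25])           # same shape, translated away
aligned, shift = icp_translation_step(src, dst)
```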

Templates for sharply pointed microscopic peaks arranged in nearly regular planar arrays can be fabricated by a relatively inexpensive technique that has recently been demonstrated. Depending on the intended application, a semiconducting, insulating, or metallic film could be deposited on such a template by sputtering, thermal evaporation, pulsed laser deposition, or any other suitable conventional deposition technique. Pointed structures fabricated by use of these techniques may prove useful as photocathodes or field emitters in plasma television screens. Selected peaks could be removed from such structures and used individually as scanning tips in atomic force microscopy or mechanical surface profiling.

The reconstruction of emission tomography data is an ill-posed inverse problem and, as such, requires some form of regularization. Previous efforts to regularize the restoration process have incorporated rather general assumptions about the isotope distribution within a patient's body. Here, the authors present a theoretical and algorithmic framework in which the notion of a deformable template can be used to identify and quantify brain tumors in pediatric patients. Patient data and computer simulation experiments are presented which illustrate the performance of the deformable template approach to single photon emission computed tomography (SPECT).

This paper introduces the Affordance Template framework used to supervise task behaviors on the NASA-JSC Valkyrie robot at the 2013 DARPA Robotics Challenge (DRC) Trials. This framework provides graphical interfaces to human supervisors that are adjustable based on the run-time environmental context (e.g., size, location, and shape of objects that the robot must interact with, etc.). Additional improvements, described below, inject degrees of autonomy into instantiations of affordance templates at run-time in order to enable efficient human supervision of the robot for accomplishing tasks.

Malonic acid, propionic acid, glycine, n-butylamine, and urea were added to the preparation of lanthanum phosphate from lanthanum nitrate and phosphoric acid solutions. All additives were incorporated into the lanthanum phosphate particles; additives with a basic site were incorporated into the precipitates most readily. The addition of templates improved the specific surface area of lanthanum phosphate. The number of pores with radii smaller than 4 nm increased with the addition of templates. The remaining additives influenced the acidic properties of lanthanum phosphate.

We present a novel vertex finding technique. The task is formulated as a discrete-continuous optimisation problem in a way similar to the deformable templates approach for the track finding. Unlike the track finding problem, "elastic hedgehogs" rather than elastic arms are used as deformable templates. They are initialised by a set of procedures which provide zero level approximation for vertex positions and track parameters at the vertex point. The algorithm was evaluated using the simulated events for the LHC CMS detector and demonstrated good performance.

Precise preparation strategies are required to fabricate molecular nanostructures of specific arrangement. In bottom-up approaches, where nanostructures are gradually formed by piecing together individual parts to the final structure, the self-ordering mechanisms of the involved structures are utilized. In order to achieve the desired structures regarding morphology, grain size, and orientation of the individual moieties, templates can be applied, which influence the formation process of subsequent structures. However, this strategy is of limited use for complex architectures because the templates only influence the structure formation at the interface between the template and the first compound. Here, we discuss the implementation of so-called templated templates and analyze to what extent orientations of the initial layers are inherited in the top layers of another compound to enable structural control in binary heterostructures. For that purpose, we prepared crystalline templates of the organic semiconductors pentacene and perfluoropentacene in different exclusive orientations. We observe that for templates of both individual materials the molecular orientation is inherited in the top layers of the respective counterpart. This behavior is also observed for various other molecules, indicating the robustness of this approach. PMID:26305339

...) Wax “Vesta” matches are matches that can be ignited by friction either on a prepared surface or on a solid surface. (c) Safety matches and wax “Vesta” matches must be tightly packed in securely closed... packaging with any material other than safety matches or wax “Vesta” matches, which must be packed...

In this paper we evaluate several typical matching costs, including CENSUS, mutual information (MI), and normalized cross correlation, using the ISPRS Stereo Matching Benchmark datasets for DSM generation by stereo matching. Two global optimization algorithms, semi-global matching (SGM) and graph cuts (GC), were used as the optimization method. We used a sub-pixel method to obtain a more accurate MI lookup table, and a sub-pixel method was also used when computing costs from the MI lookup table. Because MI by itself is sensitive to local radiometric differences, we also used a cost that combines MI and CENSUS. After DSM generation, the deviations between the generated DSM and the lidar reference were analyzed to compute the mean deviation (Mean), the median deviation (Med), the standard deviation (Stdev), the normalized median absolute deviation (NMAD), the percentage of deviations within tolerance, etc., which were used to evaluate the accuracy of the DSMs generated from the different costs.
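
Of the costs evaluated, the census transform is simple enough to sketch. The 3x3 window below is an illustrative choice; the matching cost between two pixels is then the Hamming distance of their census signatures.

```python
# Sketch of the 3x3 census transform and its Hamming-distance matching cost.
import numpy as np

def census_3x3(img):
    """Census transform: for each interior pixel, an 8-bit signature where
    bit k is 1 if neighbour k is darker than the centre pixel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            bits = 0
            for dy, dx in offsets:
                bits = (bits << 1) | int(img[y + dy, x + dx] < img[y, x])
            out[y - 1, x - 1] = bits
    return out

def hamming(a, b):
    """Matching cost between two pixels: number of differing census bits."""
    return bin(int(a) ^ int(b)).count("1")

img = np.arange(25, dtype=np.float64).reshape(5, 5)  # toy monotonic image
c = census_3x3(img)
```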

Electric matches are used in pyrotechnics to initiate devices electrically rather than by burning fuses. Fuses have the disadvantage of burning with a long delay before igniting a pyrotechnic device, while electric matches can instantaneously fire a device at a user's command. In addition, electric matches can be fired remotely at a safe distance. Unfortunately, most current commercial electric match compositions contain lead, as the thiocyanate, nitroresorcinate or tetroxide, which when burned produces lead-containing smoke. This lead pollutant presents environmental exposure problems to cast, crew, and audience. The reason that these lead-containing compounds are used as electric match compositions is that these mixtures have the required thermal stability, yet are simultaneously able to be initiated reliably by a very small thermal stimulus. A possible alternative to lead-containing compounds is nanoscale thermite materials (metastable intermolecular composites or MIC). These superthermite materials can be formulated to be extremely spark sensitive with tunable reaction rate and yield high temperature products. We have formulated and manufactured lead-free electric matches based on nanoscale Al/MoO{sub 3} mixtures. We have determined that these matches fire reliably and consistently ignite a sample of black powder. Initial safety, ageing and performance results are presented in this paper.

The potential for realizing cost savings in the disposal of defense high-level waste through process and design modifications has been considered. Proposed modifications range from simple changes in the canister design to development of an advanced melter capable of processing glass with a higher waste loading. Preliminary calculations estimate the total disposal cost (not including capital or operating costs) for defense high-level waste to be about $7.9 billion for the reference conditions described in this paper, while projected savings resulting from the proposed process and design changes could reduce the disposal cost of defense high-level waste by up to $5.2 billion.

The lack of knowledge models to represent sensor systems, algorithms, and missions makes opportunistically discovering a synthesis of systems and algorithms that can satisfy high-level mission specifications impractical. A novel ontological problem-solving framework has been designed that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference of assigning systems to subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistence surveillance sensor systems and algorithms has been instantiated in a prototype environment to demonstrate the assignment of systems to subtasks of high-level missions. PMID:22164081

highly sheared regions and in the 3D turbulent regions. The high level of correlation between the estimated error and the actual error indicates that this new approach can be utilized to directly infer the measurement uncertainty from PIV data. A procedure is shown where the results of the error estimation are employed to minimize the measurement uncertainty by selecting the optimal interrogation window size.

Experimental results are presented for the verification of the specific interaction step of the 'adsorbed template' biogeochemical cycle, a simple model for a primitive prebiotic replication system. The experimental system consisted of gypsum as the mineral to which an oligonucleotide template attaches (Poly-C or Poly-U) and 5′-AMP, 5′-GMP, 5′-CMP and 5′-UMP as the interacting biomonomers. When Poly-C or Poly-U were used as adsorbed templates, 5′-GMP and 5′-AMP, respectively, were observed to be the most strongly adsorbed species.

This paper analyzes an informal financial institution that brings heterogeneous agents together in groups. We analyze decentralized matching into these groups, and the equilibrium composition of participants that consequently arises. We find that participants sort remarkably well across the competing groups, and that they re-sort immediately following an unexpected exogenous regulatory change. These findings suggest that the competitive matching model might have applicability and bite in other settings where matching is an important equilibrium phenomenon. (JEL: O12, O17, G20, D40) PMID:24027491

The failure frequency of Type I and Type II High-Level Waste tanks was calculated. The degradation mechanisms that could lead to large-break failure, and the credit taken for steps to prevent such failure, were considered.

Federal regulatory criteria for geologic disposal of high-level waste are under development. Also, interim performance specifications for high-level waste forms in geologic isolation are being developed within the Federal program responsible for repository selection and operation. Two high-level waste forms, borosilicate glass and crystalline ceramic, have been selected as candidate immobilization forms for the Defense Waste Processing Facility (DWPF) which is to immobilize high-level wastes at the Savannah River Plant (SRP). An assessment of how these two waste forms conform with the proposed regulatory criteria and repository specifications was performed. Both forms were determined to be in conformance with postulated rules for radionuclide releases and radiation exposures throughout the entire waste disposal system, as well as with proposed repository operation requirements.

This report details the experimental effort to demonstrate the continuous precipitation of cesium from Savannah River Site High-Level Waste using sodium tetraphenylborate. In addition, the experiments examined the removal of strontium and various actinides through the addition of monosodium titanate.

Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
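For context, the classic way to achieve the 1/2 guarantee for weighted matching (this is the textbook greedy baseline, not the faster algorithm the abstract proposes) is to scan edges in decreasing weight order and keep each edge whose endpoints are both still free:

```python
def greedy_matching(edges):
    """Classic greedy 1/2-approximation for maximum-weight matching.
    edges: list of (u, v, weight) tuples over arbitrary hashable vertices.
    Returns a list of chosen (u, v, weight) edges."""
    matched = set()   # vertices already covered by the matching
    matching = []
    # process edges from heaviest to lightest
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.add(u)
            matched.add(v)
    return matching
```

Every edge the optimum uses but greedy skips shares an endpoint with a kept edge of at least that weight, which is where the 1/2 bound comes from; the sort makes this O(m log m), which motivates the faster near-linear alternatives the abstract targets.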

In this paper, we evaluate high-level clouds in a cloud resolving model during two convective cases, ARM9707 and KWAJEX. The simulated joint histograms of cloud occurrence and radar reflectivity compare well with cloud radar and satellite observations when using a two-moment microphysics scheme. However, simulations performed with a single-moment microphysics scheme exhibit low biases of approximately 20 dB. During convective events, the two-moment microphysics overestimates the amount of high-level cloud, while the one-moment microphysics precipitates too readily and underestimates the amount and height of high-level cloud. For ARM9707, persistent large positive biases in high-level cloud are found, which are not sensitive to changes in ice particle fall velocity and ice nuclei number concentration in the two-moment microphysics. These biases are caused by biases in the large-scale forcing and maintained by the periodic lateral boundary conditions. The combined effects include significant biases in high-level cloud amount, radiation, and high sensitivity of cloud amount to the nudging time scale in both convective cases. The high sensitivity of high-level cloud amount to the thermodynamic nudging time scale suggests that thermodynamic nudging can be a powerful "tuning" parameter for the simulated cloud and radiation but should be applied with caution. The role of the periodic lateral boundary conditions in reinforcing the biases in cloud and radiation suggests that reducing the uncertainty in the large-scale forcing at high levels is important for similar convective cases and has far-reaching implications for simulating high-level clouds in super-parameterized global climate models such as the multiscale modeling framework.

This report presents a study of alternative system architectures to provide onsite interim storage for the immobilized high-level waste produced by the Tank Waste Remediation System (TWRS) privatization vendor. It examines the contract and program changes that have occurred and evaluates their impacts on the baseline immobilized high-level waste (IHLW) interim storage strategy. In addition, this report documents the recommended initial interim storage architecture and implementation path forward.

Templates are used to correlate defects in castings with local wall thicknesses. A template is placed on the part to be inspected after the part is coated with penetrant dye, and the positions of colored spots (indicative of defects) are noted. An ultrasonic inspector measures the wall thickness at unacceptable defects only; an overall inspection is not necessary.

A method is given for incorporating diverse varieties of intercalants or templates directly during hydrothermal synthesis of clays such as hectorite or montmorillonite-type layer-silicate clays. For a hectorite layer-silicate clay, refluxing a gel of silica sol, magnesium hydroxide sol and LiF for 2 days with an organic or organometallic intercalant or template results in crystalline products containing either (a) organic dye molecules such as ethyl violet and methyl green, (b) dye molecules such as alcian blue based on a Cu(II)-phthalocyannine complex, or (c) transition metal complexes such as Ru(II)phenanthroline and Co(III)sepulchrate or (d) water-soluble porphyrins and metalloporphyrins. Montmorillonite-type clays are made by the method taught by US patent No. 3,887,454 issued to Hickson, June 13, 1975; however, a variety of intercalants or templates may be introduced. The intercalants or templates should have water-solubility, positive charge, and thermal stability under moderately basic (pH 9-10) aqueous reflux conditions or hydrothermal pressurized conditions for the montmorillonite-type clays.

A method is described for incorporating diverse varieties of intercalates or templates directly during hydrothermal synthesis of clays such as hectorite or montmorillonite-type layer-silicate clays. For a hectorite layer-silicate clay, refluxing a gel of silica sol, magnesium hydroxide sol and lithium fluoride for two days in the presence of an organic or organometallic intercalate or template results in crystalline products containing either (a) organic dye molecules such as ethyl violet and methyl green, (b) dye molecules such as alcian blue that are based on a Cu(II)-phthalocyannine complex, or (c) transition metal complexes such as Ru(II)phenanthroline and Co(III)sepulchrate or (d) water-soluble porphyrins and metalloporphyrins. Montmorillonite-type clays are made by the method taught by U.S. Pat. No. 3,887,454 issued to Hickson, Jun. 13, 1975; however, a variety of intercalates or templates may be introduced. The intercalates or templates should have (i) water-solubility, (ii) positive charge, and (iii) thermal stability under moderately basic (pH 9-10) aqueous reflux conditions or hydrothermal pressurized conditions for the montmorillonite-type clays. 22 figures.

A method for incorporating diverse varieties of intercalants or templates directly during hydrothermal synthesis of clays such as hectorite or montmorillonite-type layer-silicate clays. For a hectorite layer-silicate clay, refluxing a gel of silica sol, magnesium hydroxide sol and lithium fluoride for two days in the presence of an organic or organometallic intercalant or template results in crystalline products containing either (a) organic dye molecules such as ethyl violet and methyl green, (b) dye molecules such as alcian blue that are based on a Cu(II)-phthalocyanine complex, or (c) transition metal complexes such as Ru(II)phenanthroline and Co(III)sepulchrate or (d) water-soluble porphyrins and metalloporphyrins. Montmorillonite-type clays are made by the method taught by U.S. Pat. No. 3,887,454 issued to Hickson, Jun. 13, 1975; however, a variety of intercalants or templates may be introduced. The intercalants or templates should have (i) water-solubility, (ii) positive charge, and (iii) thermal stability under moderately basic (pH 9-10) aqueous reflux conditions or hydrothermal pressurized conditions for the montmorillonite-type clays.

Privacy and security are vital concerns for practical biometric systems. The concept of cancelable or revocable biometrics has been proposed as a solution for biometric template security. Revocable biometrics means that biometric templates are no longer fixed over time and can be revoked in the same way as lost or stolen credit cards. In this paper, we describe a novel and efficient approach to biometric template protection that meets the revocability property. This scheme can be incorporated into any biometric verification scheme while maintaining, if not improving, the accuracy of the original biometric system. We demonstrate the result of applying such transforms on face biometric templates and compare the efficiency of our approach with that of the well-known random projection techniques. We also present the results of experimental work on recognition accuracy before and after applying the proposed transform on feature vectors that are generated by wavelet transforms. These results are based on experiments conducted on a number of well-known face image databases, e.g. the Yale and ORL databases.
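The random projection baseline the abstract compares against can be sketched as follows: project the feature vector through a random matrix seeded by a user-specific key, so that reissuing a new key produces an unrelated template and a compromised template can be "cancelled". The function name, output dimension, and seeding scheme here are illustrative assumptions, not the paper's transform:

```python
import numpy as np

def revocable_template(features, key, out_dim=32):
    """Key-seeded random projection of a biometric feature vector.
    The same (features, key) pair always yields the same template;
    revoking the key and issuing a new one yields an unrelated
    template. Illustrative sketch only."""
    rng = np.random.default_rng(key)          # key acts as the revocable secret
    proj = rng.standard_normal((out_dim, features.shape[0]))
    t = proj @ features
    return t / np.linalg.norm(t)              # normalize for cosine matching
```

Matching is then done between projected templates (e.g. by cosine similarity), so the raw biometric never needs to be stored.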

Human face detection might be driven by skin-coloured face-shaped templates. To explore this idea, this study compared the detection of faces for which the natural height-to-width ratios were preserved with distorted faces that were stretched vertically or horizontally. The impact of stretching on detection performance was not obvious when faces were equated to their unstretched counterparts in terms of their height or width dimension (Experiment 1). However, stretching impaired detection when the original and distorted faces were matched for their surface area (Experiment 2), and this was found with both vertically and horizontally stretched faces (Experiment 3). This effect was evident in accuracy, response times, and also observers' eye movements to faces. These findings demonstrate that height-to-width ratios are an important component of the cognitive template for face detection. The results also highlight important differences between face detection and face recognition. PMID:25727491

Nanoscale palladium clusters in the form of parallel strips have been formed on the surface of graphite with the help of a surface micellar template of cetyltrimethylammonium bromide using a chemical deposition method. The repeat period of the palladium strips deposited at 25 °C is 65 nm, with a width of 40 nm and height of 2 nm. The elemental composition of the metal clusters was confirmed using X-ray fluorescence analysis and TEM-EDX. The fact that the strips are composed of metallic palladium was also confirmed by testing the membrane electrode assembly with the strips in a commercial fuel cell. Using the obtained micellar template, the radius of the curvature of the AFM probe tip was estimated with the help of a unique method. The radius is equal to 10 nm and matches the value provided by the manufacturer. PMID:27315147

Optically transparent wood (TW) with transmittance as high as 85% and haze of 71% was obtained using a delignified nanoporous wood template. The template was prepared by removing the light-absorbing lignin component, creating nanoporosity in the wood cell wall. Transparent wood was prepared by successful impregnation of lumen and the nanoscale cellulose fiber network in the cell wall with refractive-index-matched prepolymerized methyl methacrylate (MMA). During the process, the hierarchical wood structure was preserved. Optical properties of TW are tunable by changing the cellulose volume fraction. The synergy between wood and PMMA was observed for mechanical properties. Lightweight and strong transparent wood is a potential candidate for lightweight low-cost, light-transmitting buildings and transparent solar cell windows. PMID:26942562

Computational templates used to teach an introductory course in nuclear chemistry and physics at Washington University in St. Louis are presented in brief. The templates cover both basic and applied topics.

A method of clustering using a novel template to define a region of influence is disclosed. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable to, and improve, pattern recognition techniques. 30 figs.

A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable to, and improve, pattern recognition techniques.

This invention comprises a method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable to, and improve, pattern recognition techniques.

This project will identify and model mechanisms that template the self-assembly of nanostructures. We focus on a class of systems involving a two-phase monolayer of molecules adsorbed on a solid surface. At a suitably elevated temperature, the molecules diffuse on the surface to reduce the combined free energy of mixing, phase boundary, elastic field, and electrostatic field. With no template, the phases may form a pattern of stripes or disks. The feature size is on the order of 1-100 nm, selected to compromise the phase boundary energy and the long-range elastic or electrostatic interaction. Both experimental observations and our theoretical simulations have shown that the pattern resembles a periodic lattice, but has abundant imperfections. To form a perfect periodic pattern, or a designed aperiodic pattern, one must introduce a template to guide the assembly. For example, a coarse-scale pattern, lithographically defined on the substrate, will guide the assembly of the nanoscale pattern. As another example, if the molecules on the substrate surface carry strong electric dipoles, a charged object, placed in the space above the monolayer, will guide the assembly of the molecular dipoles. In particular, the charged object can be a mask with a designed nanoscale topographic pattern. A serial process (e.g., e-beam lithography) is necessary to make the mask, but the pattern transfer to the molecules on the substrate is a parallel process. The technique is potentially a high throughput, low cost process to pattern a monolayer. The monolayer pattern itself may serve as a template to fabricate a functional structure. This project will model fundamental aspects of these processes, including thermodynamics and kinetics of self-assembly, templated self-assembly, and self-assembly on unconventional substrates. It is envisioned that the theory will not only explain the available experimental observations, but also motivate new experiments.

Reviews template mining research and shows how templates are used in World Wide Web search engines and metasearch engines for helping end-users generate natural language search expressions. Potential areas of application of template mining for extraction of information from digital documents are highlighted, and how such applications are used is…

By analyzing the characteristics of the maternal abdominal ECG (electrocardiogram), a method based on the wavelet transform and matched filtering is proposed to detect the R-wave in the fetal ECG (FECG). In this method, the high-frequency coefficients are calculated using the wavelet transform. The maternal QRS template is then obtained using an arithmetic-mean scheme. Finally, the R-wave of the FECG is detected by matched filtering. The experimental results show that this method can effectively eliminate noise, such as the maternal ECG signal and baseline drift, enhancing the accuracy of fetal ECG detection. PMID:26904869
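The matched-filtering step described above amounts to sliding a QRS template along the signal and flagging locations where the normalized correlation peaks. A simplified sketch, assuming a fixed threshold and a naive local-maximum rule rather than the paper's exact procedure:

```python
import numpy as np

def detect_r_peaks(signal, template, threshold=0.7):
    """Matched filtering: normalized cross-correlation of `signal`
    with a QRS `template`; local maxima above `threshold` are
    returned as R-wave candidate positions (window start indices)."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(t)
    scores = np.zeros(len(signal) - n + 1)
    for i in range(len(scores)):
        win = signal[i:i + n]
        w = (win - win.mean()) / (win.std() + 1e-12)
        scores[i] = np.dot(w, t) / n          # normalized correlation in [-1, 1]
    # keep thresholded local maxima as R-wave candidates
    peaks = [i for i in range(1, len(scores) - 1)
             if scores[i] > threshold
             and scores[i] >= scores[i - 1] and scores[i] >= scores[i + 1]]
    return peaks, scores
```

Because the correlation is normalized, baseline drift and amplitude differences between beats have little effect on the score, which is the property the abstract relies on.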

KARAOKE is a popular amusement for old and young alike. Many KARAOKE machines have a singing evaluation function. However, it is often said that the scores given by KARAOKE machines do not match human evaluation. In this paper a KARAOKE scoring method strongly correlated with human evaluation is proposed. It evaluates songs based on the distance between the singing pitch and the musical scale, employing a vibrato extraction method based on template matching of the spectrum. The results show that correlation coefficients between scores given by the proposed system and human evaluation are -0.76∼-0.89.
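The pitch-to-scale distance at the core of such a scorer can be sketched by measuring, in semitones, how far each pitch estimate lies from the nearest equal-tempered note. This is an illustrative sketch only (the paper's system additionally handles vibrato via spectral template matching, which is omitted here):

```python
import math

def scale_deviation(freq_hz, a4=440.0):
    """Distance in semitones from a sung pitch to the nearest note
    of the equal-tempered chromatic scale (0 = exactly on a note,
    0.5 = exactly between two notes)."""
    midi = 69 + 12 * math.log2(freq_hz / a4)  # continuous MIDI note number
    return abs(midi - round(midi))

def song_score(freqs):
    """Average deviation over a sequence of pitch estimates;
    lower means the singing lies closer to the scale."""
    return sum(scale_deviation(f) for f in freqs) / len(freqs)
```

A lower `song_score` corresponds to more in-tune singing, which is consistent with the negative correlation coefficients reported between the system's distance-based scores and human ratings.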

We describe algorithms for discovering immunophenotypes from large collections of flow cytometry samples and using them to organize the samples into a hierarchy based on phenotypic similarity. The hierarchical organization is helpful for effective and robust cytometry data mining, including the creation of collections of cell populations characteristic of different classes of samples, robust classification, and anomaly detection. We summarize a set of samples belonging to a biological class or category with a statistically derived template for the class. Whereas individual samples are represented in terms of their cell populations (clusters), a template consists of generic meta-populations (a group of homogeneous cell populations obtained from the samples in a class) that describe key phenotypes shared among all those samples. We organize an FC data collection in a hierarchical data structure that supports the identification of immunophenotypes relevant to clinical diagnosis. A robust template-based classification scheme is also developed, but our primary focus is on the discovery of phenotypic signatures and inter-sample relationships in an FC data collection. This collective analysis approach is more efficient and robust, since templates describe phenotypic signatures common to cell populations in several samples while ignoring noise and small sample-specific variations. We have applied the template-based scheme to analyze several datasets, including one representing a healthy immune system and one of acute myeloid leukemia (AML) samples. The last task is challenging due to the phenotypic heterogeneity of the several subtypes of AML. However, we identified thirteen immunophenotypes corresponding to subtypes of AML and were able to distinguish acute promyelocytic leukemia (APL) samples with the markers provided. Clinically, this is helpful since APL has a different treatment regimen from other subtypes of AML. Core algorithms used in our data analysis are

Biopharmaceuticals hold great promise for the future of drug discovery. Nevertheless, rational drug design strategies are mainly focused on the discovery of small synthetic molecules. Herein we present matched peptides, an innovative analysis technique for biological data related to peptide and protein sequences. It represents an extension of matched molecular pair analysis toward macromolecular sequence data and allows quantitative predictions of the effect of single amino acid substitutions on the basis of statistical data on known transformations. We demonstrate the application of matched peptides to a data set of major histocompatibility complex class II peptide ligands and discuss the trends captured with respect to classical quantitative structure–activity relationship approaches as well as structural aspects of the investigated protein–peptide interface. We expect our novel readily interpretable tool at the interface of cheminformatics and bioinformatics to support the rational design of biopharmaceuticals and give directions for further development of the presented methodology. PMID:26501781

The polymerase chain reaction (PCR) is sensitive to mismatches between primer and template, and mismatches can lead to inefficient amplification of targeted regions of DNA template. In PCRs in which a degenerate primer pool is employed, each primer can behave differently. Therefore, inefficiencies due to different primer melting temperatures within a degenerate primer pool, in addition to mismatches between primer binding sites and primers, can lead to a distortion of the true relative abundance of targets in the original DNA pool. A theoretical analysis indicated that a combination of primer-template and primer-amplicon interactions during PCR cycles 3–12 is potentially responsible for this distortion. To test this hypothesis, we developed a novel amplification strategy, entitled “Polymerase-exonuclease (PEX) PCR”, in which primer-template interactions and primer-amplicon interactions are separated. The PEX PCR method substantially and significantly improved the evenness of recovery of sequences from a mock community of known composition, and allowed for amplification of templates with introduced mismatches near the 3’ end of the primer annealing sites. When the PEX PCR method was applied to genomic DNA extracted from complex environmental samples, a significant shift in the observed microbial community was detected. Furthermore, the PEX PCR method provides a mechanism to identify which primers in a primer pool are annealing to target gDNA. Primer utilization patterns revealed that at high annealing temperatures in the PEX PCR method, perfect match annealing predominates, while at lower annealing temperatures, primers with up to four mismatches with templates can contribute substantially to amplification. The PEX PCR method is simple to perform, is limited to PCR mixes and a single exonuclease step which can be performed without reaction cleanup, and is recommended for reactions in which degenerate primer pools are used or when mismatches between primers

Models which describe road traffic patterns can be helpful in detection and/or prevention of uncommon and dangerous situations. Such models can be built by the use of motion detection algorithms applied to video data. Block matching is a standard technique for encoding motion in video compression algorithms. We explored the capabilities of the block matching algorithm when applied for object tracking. The goal of our experiments is two-fold: (1) to explore the abilities of the block matching algorithm on low resolution and low frame rate video and (2) to improve the motion detection performance by the use of different search techniques during the process of block matching. Our experiments showed that the block matching algorithm yields good object tracking results and can be used with high success on low resolution and low frame rate video data. We observed that different searching methods have small effect on the final results. In addition, we proposed a technique based on frame history, which successfully overcame false motion caused by small camera movements.
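Block matching of the kind described, exhaustive search within a fixed window minimizing the sum of absolute differences (SAD), can be sketched as follows; the function name, block size, and search range are illustrative choices, not the paper's settings:

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Exhaustive-search block matching: for each block in `prev`,
    find the displacement within +/-`search` pixels that minimizes
    the sum of absolute differences (SAD) against `curr`.
    Returns {(block_y, block_x): (dy, dx)} motion vectors."""
    h, w = prev.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate window falls outside the frame
                    cand = curr[y:y + block, x:x + block]
                    sad = np.abs(ref.astype(int) - cand.astype(int)).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best
    return vectors
```

The faster search strategies compared in the paper (e.g. coarse-to-fine or diamond-pattern searches) replace the inner exhaustive loop while keeping the same SAD criterion, which is consistent with the observation that the search method had only a small effect on the final results.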

The prefrontal cortex exerts top-down influences on several aspects of higher-order cognition by functioning as a filtering mechanism that biases bottom-up sensory information toward a response that is optimal in context. However, research also indicates that not all aspects of complex cognition benefit from prefrontal regulation. Here we review and synthesize this research with an emphasis on the domains of learning and creative cognition, and outline how the appropriate level of cognitive control in a given situation can vary depending on the organism's goals and the characteristics of the given task. We offer a Matched Filter Hypothesis for cognitive control, which proposes that the optimal level of cognitive control is task-dependent: high levels of cognitive control are best suited to tasks that are explicit, rule-based, verbal or abstract, and can be accomplished given the capacity limits of working memory, while low levels of cognitive control are best suited to tasks that are implicit, reward-based, non-verbal or intuitive, and can be accomplished irrespective of working memory limitations. Our approach promotes a view of cognitive control as a tool adapted to a subset of common challenges, rather than an all-purpose optimization system suited to every problem the organism might encounter. PMID:24200920

In this dissertation, we demonstrate the fabrication of high-fidelity 3D photonic crystals through polymer template fabrication, backfilling, and template removal to obtain high-index inverse inorganic photonic crystals (PCs). Along the way, we study the photoresist chemistry to minimize shrinkage, backfilling strategies for complete infiltration, and template removal at high and low temperatures to minimize crack formation. Using multibeam interference lithography (MBIL), we fabricate diamond-like photonic structures from the commercially available photoresist SU-8, epoxy-functionalized polyhedral oligomeric silsesquioxane (POSS), and narrowly distributed poly(glycidyl methacrylate)s (PGMA). The 3D structure from PGMA shows the lowest shrinkage in the [111] direction, 18%, compared to those fabricated from the SU-8 (41%) and POSS (48%) materials under the same conditions. Fabricating a photonic crystal with a large, complete photonic bandgap often requires backfilling a high-index inorganic material into a 3D polymer template. We have studied different backfilling methods to create three different types of high-index, inorganic 3D photonic crystals. Using SU-8 structures as templates, we systematically study the electrodeposition technique to create inverse 3D titania crystals. We find that the 3D SU-8 template is completely infiltrated with titania sol-gel through a two-stage process: a conformal coating of a thin layer of film occurs at the early electrodeposition stage (< 60 min), followed by bottom-up deposition. After calcination at 500°C to remove the polymer template, inverse 3D titania crystals are obtained. The optical properties of the 3D photonic crystals characterized at the various processing steps match the simulated photonic bandgaps (PBGs) and the SEM observations, further supporting complete filling by the wet chemistry. Since both PGMA and SU-8 decompose at temperatures above 400°C, leading to the formation of defects and cracks

The scope and limits of unconscious processing are a matter of ongoing debate. Lately, continuous flash suppression (CFS), a technique for suppressing visual stimuli, has been widely used to demonstrate surprisingly high-level processing of invisible stimuli. Yet, recent studies showed that CFS might actually allow low-level features of the stimulus to escape suppression and be consciously perceived. The influence of such low-level awareness on high-level processing might easily go unnoticed, as studies usually only probe the visibility of the feature of interest, and not that of lower-level features. For instance, face identity is held to be processed unconsciously since subjects who fail to judge the identity of suppressed faces still show identity priming effects. Here we challenge these results, showing that such high-level priming effects are indeed induced by faces whose identity is invisible, but critically, only when a lower-level feature, such as color or location, is visible. No evidence for identity processing was found when subjects had no conscious access to any feature of the suppressed face. These results suggest that high-level processing of an image might be enabled by-or co-occur with-conscious access to some of its low-level features, even when these features are not relevant to the processed dimension. Accordingly, they call for further investigation of lower-level awareness during CFS, and reevaluation of other unconscious high-level processing findings. PMID:26756173

Automatic introduction of OpenMP for sequential applications has attracted significant attention recently because of the proliferation of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high-level abstractions, such as STL containers and complex user-defined types, are largely ignored due to the lack of research compilers that are readily able to recognize high-level object-oriented abstractions and leverage their associated semantics. In this paper, we automatically parallelize C++ applications using ROSE, a multiple-language source-to-source compiler infrastructure which preserves the high-level abstractions and gives us access to their semantics. Several representative parallelization candidate kernels are used to explore semantic-aware parallelization strategies for high-level abstractions, combined with extended compiler analyses. Those kernels include an array-based computation loop, a loop with task-level parallelism, and a domain-specific tree traversal. Our work extends the applicability of automatic parallelization to modern applications using high-level abstractions and exposes more opportunities to take advantage of multicore processors.

Present determination of optical imaging system specifications is based on performance values and modulation transfer function results obtained with a 1D resolution template (such as the USAF resolution target or spoke templates). Such a template allows determining image quality, resolution limit, and contrast. Nevertheless, the conventional 1D template does not provide satisfactory results, since most optical imaging systems handle 2D objects, for which the imaging system response may differ at some not readily observable spatial frequencies. In this paper we derive and analyze contrast transfer function results obtained with 1D as well as 2D templates. PMID:22614498

Purpose: We investigate the feasibility of choosing from a small set of standardized templates of beam bouquets (i.e., entire beam configuration settings) for lung IMRT planning to improve planning efficiency and quality consistency, and also to facilitate automated planning. Methods: A set of beam bouquet templates is determined by learning from the beam angle settings in 60 clinical lung IMRT plans. A k-medoids cluster analysis method is used to classify the beam angle configurations into clusters. The value of the average silhouette width is used to determine the ideal number of clusters. The beam arrangement in each medoid of the resulting clusters is taken as the standardized beam bouquet for that cluster, with the corresponding case taken as the reference case. The resulting set of beam bouquet templates was used to re-plan 20 cases randomly selected from the database, and the dosimetric quality of the plans was evaluated against the corresponding clinical plans by a paired t-test. The template for each test case was manually selected by a planner based on the match between the test and reference cases. Results: The dosimetric parameters (mean±S.D. in percentage of prescription dose) of the plans using 6 beam bouquet templates and those of the clinical plans, respectively, and the p-values (in parentheses) are: lung Dmean: 18.8±7.0, 19.2±7.0 (0.28); esophagus Dmean: 32.0±16.3, 34.4±17.9 (0.01); heart Dmean: 19.2±16.5, 19.4±16.6 (0.74); spinal cord D2%: 47.7±18.8, 52.0±20.3 (0.01); PTV dose homogeneity (D2%-D99%): 17.1±15.4, 20.7±12.2 (0.03). The esophagus Dmean, cord D2% and PTV dose homogeneity are statistically better in the plans using the standardized templates, but the improvements (<5%) may not be clinically significant. The other dosimetric parameters are not statistically different. Conclusion: It is feasible to use a small number of standardized beam bouquet templates (e.g. 6) to generate plans with quality comparable to that of clinical plans
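The clustering step described above can be sketched in a few lines. This is a minimal sketch under stated assumptions: the distance between two beam bouquets (here, plain Euclidean distance on sorted gantry-angle vectors, reduced to a precomputed distance matrix) and all numeric values are illustrative placeholders, not the study's actual metric or data; the silhouette width is the standard formulation.

```python
import numpy as np

def kmedoids(D, k, n_iter=100, seed=0):
    """Basic alternating k-medoids on a precomputed distance matrix D (n x n)."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)  # assign each case to its nearest medoid
        new = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size:
                # new medoid = cluster member minimizing total distance to its cluster
                new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, np.argmin(D[:, medoids], axis=1)

def silhouette_width(D, labels):
    """Average silhouette width; larger values indicate better-separated clusters."""
    n = D.shape[0]
    s = np.zeros(n)
    for i in range(n):
        same = labels == labels[i]
        same[i] = False
        a = D[i, same].mean() if same.any() else 0.0  # mean intra-cluster distance
        b = min(D[i, labels == c].mean()              # nearest other cluster
                for c in set(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return s.mean()
```

In this scheme one would run `kmedoids` for a range of k and keep the k that maximizes `silhouette_width`; the medoid cases then serve as the template bouquets.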

This paper presents an approach to the local stereo matching problem using edge segments as features with several attributes. We have verified that the differences in attributes for the true matches cluster in a cloud around a center. The correspondence is established on the basis of the minimum distance criterion, computing the Mahalanobis distance between the difference of the attributes for a current pair of features and the cluster center (similarity constraint). We introduce a learning strategy based on Hebbian learning to get the best cluster center. A comparative analysis among methods without learning and with other learning strategies is illustrated. PMID:18252332
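The similarity constraint can be illustrated with a minimal sketch: given a learned cluster center and covariance of attribute differences for true matches (both placeholders below, not the paper's learned values, and the attribute vectors themselves are hypothetical), candidate right-image segments are ranked by Mahalanobis distance and the minimum is taken.

```python
import numpy as np

def mahalanobis(d, center, cov_inv):
    """Mahalanobis distance between an attribute-difference vector d and the cluster center."""
    v = d - center
    return float(np.sqrt(v @ cov_inv @ v))

def match_segment(left_attrs, right_candidates, center, cov):
    """Pick the candidate whose attribute difference lies closest (in the
    Mahalanobis sense) to the learned center of true-match differences."""
    cov_inv = np.linalg.inv(cov)
    dists = [mahalanobis(left_attrs - r, center, cov_inv) for r in right_candidates]
    return int(np.argmin(dists)), dists
```

With an identity covariance this reduces to ordinary Euclidean distance; the learned covariance lets attributes with tighter spread among true matches weigh more heavily.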

Pattern matching is a machine learning area that requires high-performance hardware. It has been hypothesized that massively parallel designs, which avoid von Neumann architecture, could provide a significant performance boost. Such designs can advantageously use memristive switches. This paper discusses a two-stage design that implements the induced ordered weighted average (IOWA) method for pattern matching. We outline the circuit structure and discuss how a functioning circuit can be achieved using metal oxide devices. We describe our simulations of memristive circuits and illustrate their performance on a vowel classification task.
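As a point of reference for what the circuit computes, the IOWA operator itself is simple to state in software: each argument carries an inducing value, the arguments are reordered by descending inducing value, and a fixed weight vector is applied to the reordered list. The sample inducing values and weights below are illustrative; in the memristive design the weights would be realized by device conductances.

```python
def iowa(pairs, weights):
    """Induced ordered weighted average.

    pairs   -- iterable of (inducing_value, argument) tuples
    weights -- weight vector applied after reordering by inducing value (descending)
    """
    ordered = [a for _, a in sorted(pairs, key=lambda p: p[0], reverse=True)]
    return sum(w * a for w, a in zip(weights, ordered))
```

Unlike a plain weighted mean, the weight an argument receives depends on its rank under the inducing variable, not on which input it arrived at.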

Porous polymeric media (polymer foams) are utilized in a wide range of applications, such as thermal and mechanical insulators, solid supports for catalysis, and medical devices. A process for the production of polymer foams has been developed. This process, which is applicable to a wide range of polymers, uses a hydrocarbon particulate phase as a template for the precipitation of the polymer phase and subsequent pore formation. The use of a hydrocarbon template allows for enhanced control over pore structure, porosity, and other structural and bulk characteristics of the polymer foam. Polymer foams with densities as low as 120 mg/cc, porosity as high as 87%, and high surface areas (20 m(2)/g) have been produced. Foams of poly(l-lactic acid), a biodegradable polymer, produced by this process have been used to engineer a variety of different structures, including tissues with complex geometries such as in the likeness of a human nose. PMID:10696111

Calculating well logs is a time-consuming process. This template uses input parameters consisting of well name, location county, state, formation name, starting depth, repeat interval, resistivity of shale, and irreducible bulk volume water, which provide heading information for printouts. Required information from basic well logs is porosity, conductivity (optional), formation resistivity, resistivity of the formation water for the zone being calculated, resistivity of the mud filtrate, the porosity cutoff for pay in the zone being calculated, and the saltwater saturation cutoff for the pay zone. These parameters are used to calculate apparent water resistivity, saltwater saturation, bulk volume water, ratio of apparent water resistivity to input water resistivity, irreducible saltwater saturation, resistivity volume of shale, permeability, and a derived porosity value. A printout of the results is available through the Lotus print function. Using this template allows maximum control of the input parameters and reduces hand calculation time.
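The template's exact equations are not reproduced here, but several of the listed quantities follow a standard Archie-style petrophysical workflow. The sketch below uses the common Archie relations with assumed exponents (cementation m = 2, saturation n = 2, tortuosity a = 1), which are illustrative defaults rather than the template's actual parameters.

```python
def rwa(phi, rt, m=2.0, a=1.0):
    """Apparent water resistivity: Rwa = (phi**m * Rt) / a (Archie with Sw = 1)."""
    return phi ** m * rt / a

def sw_archie(rw, phi, rt, m=2.0, n=2.0, a=1.0):
    """Archie water saturation: Sw = (a*Rw / (phi**m * Rt)) ** (1/n)."""
    return (a * rw / (phi ** m * rt)) ** (1.0 / n)

def bvw(phi, sw):
    """Bulk volume water: porosity times water saturation."""
    return phi * sw
```

For example, with porosity 0.25, true resistivity 16 ohm-m, and formation-water resistivity 0.05 ohm-m, Rwa comes out to 1.0 ohm-m and Sw to roughly 22%; a zone passes the pay cutoffs when its porosity exceeds the porosity cutoff and Sw falls below the saturation cutoff.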

High levels of hostility present a formidable challenge among homeless ex-offenders. This cross-sectional study assessed correlates of high levels of hostility using baseline data collected on recently-released male parolees (N=472; age 18-60) participating in a randomized trial focused on prevention of illicit drug use and recidivism. Predictors of high levels of hostility included greater depressive symptomatology, lower self-esteem, having a mother who was treated for alcohol/drugs, belonging to a gang, more tangible support, having used methamphetamine and having a history of cognitive difficulties. These findings highlight the need to understand predictors of hostility among recently released homeless men and how these predictors may relate to recidivism. Research implications are discussed as these findings will shape future nurse-led harm reduction and community-based interventions. PMID:25083121

The Replacement High-Level Waste Evaporator Project was conceived in 1985 to reduce the volume of high-level radioactive waste. Processing of the high-level waste has been accomplished up to this time using Bent Tube type evaporators, and therefore that type of evaporator was selected for this project. The Title I Design of the project was 70% completed in late 1990. The Department of Energy at that time hired an independent consulting firm to perform a complete review of the project. The DOE placed a STOP ORDER on purchasing the evaporator in January 1991. Essentially, no construction was to be done on this project until all findings and concerns dealing with the type and design of the evaporator were resolved. This report addresses two aspects of the DOE design review: (1) comparing the Bent Tube evaporator with the Forced Circulation evaporator, and (2) the design portion of the DOE project review, which concentrated on the mechanical design properties of the evaporator. 1 ref.

The Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS) in Aiken, SC, began immobilizing high-level radioactive waste in borosilicate glass in 1996. Currently, the radioactive glass is being produced as a "sludge-only" composition by combining washed high-level waste sludge with glass frit. The glass is poured into stainless steel canisters which will eventually be disposed of in a permanent geological repository. To date, DWPF has produced about 100 canisters of vitrified waste. Future processing operations will be based on a "coupled" feed of washed high-level waste sludge, precipitated cesium, and glass frit. This paper provides an update of the processing activities completed to date, operational/flowsheet problems encountered, and programs underway to increase production rates.

Near-field temperatures resulting from the storage of high-level waste canisters and spent unreprocessed fuel assembly canisters in geologic formations were determined. Preliminary design of the repository was modeled for a heat transfer computer code, HEATING5, which used the finite difference method to evaluate transient heat transfer. The heat transfer system was evaluated with several two- and three-dimensional models which transfer heat by a combination of conduction, natural convection, and radiation. Physical properties of the materials in the model were based upon experimental values for the various geologic formations. The effects of canister spacing, fuel age, and use of an overpack were studied for the analysis of the spent fuel canisters; salt, granite, and basalt were considered as the storage media. The effects of canister diameter and use of an overpack were studied for the analysis of the high-level waste canisters; salt was considered as the only storage medium for high-level waste canisters.
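HEATING5 solves multidimensional transient heat transfer (conduction with convection and radiation couplings) by finite differences; a minimal one-dimensional conduction analogue of a single explicit time step is sketched below. The grid, diffusivity, and fixed-temperature boundaries are illustrative assumptions, not the repository model.

```python
import numpy as np

def step_ftcs(T, alpha, dx, dt):
    """One explicit (forward-time, centered-space) step of 1D transient
    conduction, dT/dt = alpha * d2T/dx2, with boundary temperatures held fixed."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable: require alpha*dt/dx^2 <= 1/2"
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
    return Tn
```

Repeated application marches the temperature profile forward in time (e.g. a hot canister wall at one end and the far-field formation temperature at the other); the explicit scheme requires the stability limit noted in the assertion, which is why production codes often use implicit formulations instead.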

Nickel ferrocyanide compounds (Na{sub 2-x}Cs{sub x}NiFe (CN){sub 6}) were produced in a scavenging process to remove {sup 137}Cs from Hanford Site single-shell tank waste supernates. Methods for determining total cyanide in Hanford Site high-level wastes are needed for the evaluation of potential exothermic reactions between cyanide and oxidizers such as nitrate and for safe storage, processing, and management of the wastes in compliance with regulatory requirements. Hanford Site laboratory experience in determining cyanide in high-level wastes is summarized. Modifications were made to standard cyanide methods to permit improved handling of high-level waste samples and to eliminate interferences found in Hanford Site waste matrices. Interferences, and the associated procedure modifications, caused by high nitrate/nitrite concentrations, insoluble nickel ferrocyanides, and organic complexants are described.

Staphylococcus aureus is an important cause of both hospital- and community-associated methicillin-resistant S. aureus (MRSA) infections worldwide. β-Lactam antibiotics are the drugs of choice to treat S. aureus infections, but resistance to these and other antibiotics makes treatment problematic. High-level β-lactam resistance of S. aureus has always been attributed to the horizontally acquired penicillin binding protein 2a (PBP 2a) encoded by the mecA gene. Here, we show that S. aureus can also express high-level resistance to β-lactams, including new-generation broad-spectrum cephalosporins that are active against methicillin-resistant strains, through a noncanonical core genome-encoded penicillin binding protein, PBP 4, a nonessential enzyme previously considered not to be important for staphylococcal β-lactam resistance. Our results show that PBP 4 can mediate high-level resistance to β-lactams. PMID:27067335