Sample records for hierarchical volume representation

Kinship verification has a number of applications, such as organizing large collections of images and recognizing resemblances among humans. In this paper, first, a human study is conducted to understand the capabilities of the human mind and to identify the discriminatory areas of a face that provide kinship cues. The visual stimuli presented to the participants determine their ability to recognize kin relationships using the whole face as well as specific facial regions. The effect of participant gender and age and of the kin-relation pair of the stimulus is analyzed using quantitative measures such as accuracy, discriminability index d', and perceptual information entropy. Utilizing the information obtained from the human study, a hierarchical kinship verification via representation learning (KVRL) framework is utilized to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and a contractive regularization penalty. A compact representation of facial images of kin is extracted as an output from the learned model, and a multi-layer neural network is utilized to verify kin relations accurately. A new WVU kinship database is created, which consists of multiple images per subject to facilitate kinship verification. The results show that the proposed deep learning framework (KVRL-fcDBN) yields state-of-the-art kinship verification accuracy on the WVU kinship database and on four existing benchmark data sets. Furthermore, kinship information is used as a soft biometric modality to boost the performance of face verification via product-of-likelihood-ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification.

Abstract: Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically do not account for this property. In this talk, I will discuss a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space -- or more precisely into an n-dimensional Poincaré ball. Due to the underlying hyperbolic geometry, this allows us to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. We introduce an efficient algorithm to learn the embeddings based on Riemannian optimization and show experimentally that Poincaré embeddings outperform Euclidean embeddings significantly on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability. …
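
As a concrete illustration of the geometry involved, here is a minimal sketch (our own code, not from the talk) of the Poincaré-ball distance and the rescaling that turns a Euclidean gradient into the Riemannian gradient used in the optimization:

```python
import math

def poincare_distance(u, v):
    """Hyperbolic distance between two points inside the unit Poincare ball:
    d(u, v) = arccosh(1 + 2*||u-v||^2 / ((1-||u||^2) * (1-||v||^2)))."""
    diff = sum((a - b) ** 2 for a, b in zip(u, v))
    nu = sum(a * a for a in u)
    nv = sum(b * b for b in v)
    return math.acosh(1.0 + 2.0 * diff / ((1.0 - nu) * (1.0 - nv)))

def riemannian_grad(theta, euclidean_grad):
    """Rescale a Euclidean gradient to the Riemannian gradient on the ball:
    grad_R = ((1 - ||theta||^2)^2 / 4) * grad_E."""
    scale = (1.0 - sum(t * t for t in theta)) ** 2 / 4.0
    return [scale * g for g in euclidean_grad]

# The same Euclidean offset costs more hyperbolic distance near the rim of
# the ball than near the origin -- this is what lets leaves of a hierarchy
# crowd the boundary while the root sits near the center.
origin_pair = poincare_distance((0.0, 0.0), (0.1, 0.0))
rim_pair = poincare_distance((0.85, 0.0), (0.95, 0.0))
```

The gradient rescaling also shows why steps shrink automatically near the boundary, keeping iterates inside the ball.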

A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject to the discretization parameters. Moreover, it enables a fully adaptive a posteriori control with automatic problem-dependent adjustments of the employed discretizations. The method is discussed in the context of modern hierarchical tensor representations, which are used for the evaluation of a random PDE (the forward model) and the subsequent high-dimensional quadrature of the log-likelihood, alleviating the ‘curse of dimensionality’. Numerical experiments demonstrate the performance and confirm the theoretical results.

Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, with a Gaussian weight function serving as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.

AFRL-RY-WP-TR-2016-0123: Mathematics of Sensing, Exploitation, and Execution (MSEE), Hierarchical Representations for the Evaluation of Sensed Data. Final Report, December 2015.

A method to represent and utilize plant constitution knowledge is described. A plant system is divided into many subsystems and hierarchically represented using frames. The frames include the slots of an upper-system, lower-systems and components' connections. Connections are divided into subsystems' external connections and internal connections. This knowledge representation allows top-down analysis of the plant constitution and components' connectivities. The data are edited by drawing plant diagrams on a CRT and converting them into frames. The frame data are verified by checking upper-lower relationships and components' connectivities. As an example of knowledge utilization, a method to find a components' connection route is described. This method prevents the combinatorial explosion of components' connections by finding rough routes in advance of detailed route analysis.
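
The coarse-to-fine route search described above can be sketched as follows; the plant data and names (`pump`, `v1`, …) are hypothetical, and the frames are reduced to plain dictionaries for brevity:

```python
from collections import deque

# Hypothetical plant constitution: subsystem-level connections, the
# components belonging to each subsystem, and component-level connections.
subsystem_links = {"pump": ["piping"], "piping": ["pump", "tank"], "tank": ["piping"]}
components = {"pump": ["p1"], "piping": ["v1", "v2"], "tank": ["t1"]}
component_links = {"p1": ["v1"], "v1": ["p1", "v2"], "v2": ["v1", "t1"], "t1": ["v2"]}

def bfs(start, goal, neighbors, allowed=None):
    """Breadth-first path search; `allowed` optionally restricts the node set."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for n in neighbors.get(path[-1], []):
            if n not in seen and (allowed is None or n in allowed):
                seen.add(n)
                queue.append(path + [n])
    return None

def find_route(comp_a, comp_b):
    # 1) Rough route at the subsystem level (cheap, small graph).
    sub_of = {c: s for s, cs in components.items() for c in cs}
    rough = bfs(sub_of[comp_a], sub_of[comp_b], subsystem_links)
    if rough is None:
        return None
    # 2) Detailed route, restricted to components of the rough route only,
    #    which is what prevents the combinatorial explosion.
    allowed = {c for s in rough for c in components[s]}
    return bfs(comp_a, comp_b, component_links, allowed)
```

The detailed search never visits components outside the subsystems on the rough route, so its cost scales with the route, not the whole plant.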

For a number of years, robotics researchers have exploited hierarchical representations of geometrical objects and scenes in motion-planning, collision-avoidance, and simulation. However, few general techniques exist for automatically constructing them. We present a generic, bottom-up algorithm that uses a heuristic clustering technique to produce balanced, coherent hierarchies. Its worst-case running time is O(N² log N), but for non-pathological cases it is O(N log N), where N is the number of input primitives. We have completed a preliminary C++ implementation for input collections of 3D convex polygons and 3D convex polyhedra and conducted simple experiments with scenes of up to 12,000 polygons, which take only a few minutes to process. We present examples using spheres and convex hulls as hierarchy primitives.

A general representation approach is described which employs a hierarchy of holes and notches. A matching procedure is also described which allows non-ideal image hierarchies to be matched to class representations. The representation and matching methods are demonstrated on a set of handgun photographs. Examples of handguns which are different in detail are shown to exhibit the same class characteristics, while other similarly shaped objects are correctly distinguished from the handgun class. 6 refs., 8 figs.

We present a system for detecting the pose of rigid objects using texture and contour information. From a stereo image view of a scene, a sparse hierarchical scene representation is reconstructed using an early cognitive vision system. We define an object model in terms of a simple context...

PDEs with stochastic data usually lead to very high-dimensional algebraic problems which easily become unfeasible for numerical computations because of the dense coupling structure of the discretised stochastic operator. Recently, an adaptive stochastic Galerkin FEM based on a residual a posteriori error estimator was presented and the convergence of the adaptive algorithm was shown. While this approach leads to a drastic reduction of the complexity of the problem due to the iterative discovery of the sparsity of the solution, the problem size and structure are still rather limited. To allow for larger and more general problems, we exploit the tensor structure of the parametric problem by representing operator and solution iterates in the tensor train (TT) format. The (successive) compression carried out with these representations can be seen as a generalisation of some other model reduction techniques, e.g. the reduced basis method. We show that this approach facilitates the efficient computation of different error indicators related to the computational mesh, the active polynomial chaos index set, and the TT rank. In particular, the curse of dimensionality is avoided.
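
To make the storage argument concrete, the following sketch (ours, not the paper's code) evaluates a single entry of a tensor held in TT format by chaining the core matrices. For an order-d tensor with mode size n and TT ranks at most r, the cores take O(d·n·r²) numbers instead of the n^d of the full tensor:

```python
def tt_entry(cores, index):
    """Evaluate one entry of a tensor stored in TT format.
    cores[k] is a nested list of shape (r_{k-1}, n_k, r_k); the entry
    T[i_1, ..., i_d] is the chained product of matrices cores[k][:, i_k, :],
    with boundary ranks r_0 = r_d = 1."""
    row = cores[0][0][index[0]]               # 1 x r_1 row picked by i_1
    for core, i in zip(cores[1:], index[1:]):
        row = [
            sum(row[a] * core[a][i][b] for a in range(len(row)))
            for b in range(len(core[0][i]))
        ]
    return row[0]                             # r_d == 1, a scalar remains

# A rank-1 example: the separable tensor T[i, j, k] = x[i] * y[j] * z[k]
# is exactly representable with all TT ranks equal to 1.
x, y, z = [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]
cores = [[[[v] for v in x]], [[[v] for v in y]], [[[v] for v in z]]]
```

Iterative solvers in TT format apply the operator core by core in the same chained fashion, which is what keeps the cost polynomial in d.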

Visual structures in the environment are effortlessly segmented into image regions, which are combined into a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. At this stage, highly articulated changes in shape boundary as well as very subtle curvature changes contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes a hierarchical distributed representation of shape features to encode boundary features over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback from representations generated at higher stages. In so doing, global configurational as well as local information is available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. This combines separate findings about the generation of cortical shape representation using hierarchical representations with figure-ground segregation mechanisms. Our model is probed with a selection of artificial and real world images to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) a simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate system, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of them from a spatial perspective (e.g., where is the sensory information coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. When controlling various machine...
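
A minimal sketch of converting between two levels of such a spatial hierarchy, reduced to 2-D rotation-plus-translation for brevity (the frame names and numbers are illustrative, not from the system described):

```python
import math

def to_parent(point, angle, offset):
    """Convert a point from a child frame (e.g. head-centred) to its parent
    frame (e.g. body-centred): rotate by the child's orientation, then
    translate by the child's origin expressed in the parent frame."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y + offset[0], s * x + c * y + offset[1])

def to_child(point, angle, offset):
    """Inverse transform: parent frame -> child frame."""
    x, y = point[0] - offset[0], point[1] - offset[1]
    c, s = math.cos(-angle), math.sin(-angle)
    return (c * x - s * y, s * x + c * y)

# A target seen 1 unit straight ahead of a head turned 90 degrees, with the
# head mounted 0.5 units from the body origin:
head_angle, head_offset = math.pi / 2, (0.0, 0.5)
body_xy = to_parent((1.0, 0.0), head_angle, head_offset)
```

Chaining `to_parent` calls up the hierarchy (hand → head → body → world) associates the same sensed location with every level, which is what the working memory stores and recalls.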

Visual structures in the environment are segmented into image regions, which are combined into a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable of making accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

Influential slot and resource models of visual working memory make the assumption that items are stored in memory as independent units, and that there are no interactions between them. Consequently, these models predict that the number of items to be remembered (the set size) is the primary determinant of working memory performance, and therefore these models quantify memory capacity in terms of the number and quality of individual items that can be stored. Here we demonstrate that there is substantial variance in display difficulty within a single set size, suggesting that limits based on the number of individual items alone cannot explain working memory storage. We asked hundreds of participants to remember the same sets of displays, and discovered that participants were highly consistent in terms of which items and displays were hardest or easiest to remember. Although a simple grouping or chunking strategy could not explain this individual-display variability, a model with multiple, interacting levels of representation could explain some of the display-by-display differences. Specifically, a model that includes a hierarchical representation of items plus the mean and variance of the sets of colors on the display successfully accounts for some of the variability across displays. We conclude that working memory representations are composed only in part of individual, independent object representations, and that a major factor in how many items are remembered on a particular display is inter-item representations such as perceptual grouping, ensemble, and texture representations.

Dreaming is generally thought to be generated by spontaneous brain activity during sleep with patterns common to waking experience. This view is supported by a recent study demonstrating that dreamed objects can be predicted from brain activity during sleep using statistical decoders trained with stimulus-induced brain activity. However, it remains unclear whether and how visual image features associated with dreamed objects are represented in the brain. In this study, we used a deep neural network (DNN) model for object recognition as a proxy for hierarchical visual feature representation, and DNN features for dreamed objects were analyzed with brain decoding of fMRI data collected during dreaming. The decoders were first trained with stimulus-induced brain activity labeled with the feature values of the stimulus image from multiple DNN layers. The decoders were then used to decode DNN features from the dream fMRI data, and the decoded features were compared with the averaged features of each object category calculated from a large-scale image database. We found that the feature values decoded from the dream fMRI data positively correlated with those associated with dreamed object categories at mid- to high-level DNN layers. Using the decoded features, the dreamed object category could be identified at above-chance levels by matching them to the averaged features for candidate categories. The results suggest that dreaming recruits hierarchical visual feature representations associated with objects, which may support phenomenal aspects of dream experience.
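
The category-identification step can be sketched as follows, with hypothetical feature values standing in for the DNN-layer features and category averages used in the study:

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def identify(decoded, category_means):
    """Pick the category whose average feature vector correlates best with
    the features decoded from the dream fMRI data."""
    return max(category_means, key=lambda c: pearson(decoded, category_means[c]))

# Toy category-average DNN features (hypothetical numbers, 3 features):
means = {"dog": [0.9, 0.1, 0.4], "car": [0.1, 0.8, 0.2], "tree": [0.3, 0.2, 0.9]}
decoded = [0.7, 0.2, 0.35]   # a noisy decode of a "dog" dream
```

With many candidate categories, identification accuracy above 1/|categories| is the above-chance criterion the abstract refers to.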

A reduced dynamics representation is introduced which is tailored to a hierarchical, Mori-chain type representation of a bath of harmonic oscillators which are linearly coupled to a subsystem. We consider a spin-boson system where a single effective mode is constructed so as to absorb all system-environment interactions, while the residual bath modes are coupled bilinearly to the primary mode and among each other. Using a cumulant expansion of the memory kernel, correlation functions for the primary mode are obtained, which can be suitably approximated by truncated chains representing the primary-residual mode interactions. A series of reduced-dimensional bath correlation functions is thus obtained, which can be expressed as Fourier-Laplace transforms of spectral densities that are given in truncated continued-fraction form. For a master equation which is second order in the system-bath coupling, the memory kernel is re-expressed in terms of local-in-time equations involving auxiliary densities and auxiliary operators.

In the traditional theories of irreversible thermodynamics and fluid mechanics, the specific volume and molar volume have been interchangeably used for pure fluids, but in this work we show that they should be distinguished from each other and given distinctive statistical mechanical representations. In this paper, we present a general formula for the statistical mechanical representation of molecular domain (volume or space) by using the Voronoi volume and its mean value, which may be regarded as molar domain (volume), and also the statistical mechanical representation of volume flux. By using their statistical mechanical formulas, the evolution equations of volume transport are derived from the generalized Boltzmann equation of fluids. Approximate solutions of the evolution equations of volume transport provide kinetic theory formulas for the molecular domain, the constitutive equations for molar domain (volume) and volume flux, and the dissipation of energy associated with volume transport. Together with the constitutive equation for the mean velocity of the fluid obtained in a previous paper, the evolution equations for volume transport not only shed fresh light on, and insight into, irreversible phenomena in fluids but also can be applied to study fluid flow problems in a manner hitherto unavailable in fluid dynamics and irreversible thermodynamics. Their roles in the generalized hydrodynamics will be considered in the sequel.

An efficient higher-order method of moments (MoM) solution of volume integral equations is presented. The higher-order MoM solution is based on higher-order hierarchical Legendre basis functions and higher-order geometry modeling. An unstructured mesh composed of 8-node trilinear and/or curved 27… of magnitude in comparison to existing higher-order hierarchical basis functions. Consequently, an iterative solver can be applied even for high expansion orders. Numerical results demonstrate excellent agreement with the analytical Mie series solution for a dielectric sphere as well as with results obtained…

Simulation and reconstruction of events in high-energy experiments require the knowledge of the value of the magnetic field at any point within the detector. The way this information is extracted from the actual map of the magnetic field and served to simulation and reconstruction applications has a large impact on accuracy and performance in terms of speed. As an example, the CMS high level trigger performs on-line tracking of muons within the magnet yoke, where the field is discontinuous and largely inhomogeneous. In this case the high level trigger execution time is dominated by the time needed to access the magnetic field map. For this reason, an optimized approach for the access to the CMS field was developed, based on a dedicated representation of the detector geometry. The detector is modeled in terms of volumes, constructed in such a way that their boundaries correspond to the field discontinuities due to changes in the magnetic permeability of the materials. The field within each volume is therefore c...

The problem of electromagnetic scattering by composite metallic and dielectric objects is solved using the coupled volume-surface integral equation (VSIE). The method of moments (MoM) based on higher-order hierarchical Legendre basis functions and higher-order curvilinear geometrical elements… with the analytical Mie series solution. Scattering by more complex metal-dielectric objects are also considered to compare the presented technique with other numerical methods…

The standard representations known as component trees, used in morphological connected attribute filtering and multi-scale analysis, are unsuitable for cases in which either the image itself or the tree does not fit in the memory of a single compute node. Recently, a new structure has been developed

In geographic information systems, the reliability of querying, analysing, or reasoning results depends on the data quality. One central criterion of data quality is consistency, and identifying inconsistencies is crucial for maintaining the integrity of spatial data from multiple sources or at multiple resolutions. In traditional methods of consistency assessment, vector data are used as the primary experimental data. In this manuscript, we describe the use of a new type of raster data, tile maps, to assess the consistency of information from multiscale representations of the water bodies that make up drainage systems. We describe a hierarchical methodology to determine the spatial consistency of tile-map datasets that display water areas in a raster format. Three characteristic indices, the degree of global feature consistency, the degree of local feature consistency, and the degree of overlap, are proposed to measure the consistency of multiscale representations of water areas. The perceptual hash algorithm and the scale-invariant feature transform (SIFT) descriptor are applied to extract and measure the global and local features of water areas. By performing combined calculations using these three characteristic indices, the degrees of consistency of multiscale representations of water areas can be divided into five grades: exactly consistent, highly consistent, moderately consistent, less consistent, and inconsistent. For evaluation purposes, the proposed method is applied to several test areas from the Tiandi map of China. In addition, we identify key technologies that are related to the process of extracting water areas from a tile map. The accuracy of the consistency assessment method is evaluated, and our experimental results confirm that the proposed methodology is efficient and accurate.
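
A minimal sketch of the global-feature step, using a simple average hash as a stand-in for the perceptual hash algorithm in the paper (the tile values and the normalised consistency formula here are illustrative):

```python
def average_hash(pixels):
    """Perceptual hash of a small grayscale tile: one bit per pixel,
    set when the pixel is above the tile's mean intensity."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes of equal length."""
    return sum(a != b for a, b in zip(h1, h2))

def global_consistency(tile_a, tile_b):
    """Degree of global feature consistency between two tiles,
    taken here as 1 - normalised Hamming distance of their hashes."""
    ha, hb = average_hash(tile_a), average_hash(tile_b)
    return 1.0 - hamming(ha, hb) / len(ha)
```

Thresholding this score (together with the local SIFT-based index and the overlap index) is what yields the five consistency grades.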

A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning

Attention selects objects or groups as the most fundamental unit, and this may be achieved through a process in which attention automatically spreads throughout their entire region. Previously, we found that a lateralized potential relative to an attended hemifield at occipito-temporal electrode sites reflects attention-spreading in response to connected bilateral stimuli [Kasai, T., & Kondo, M. Electrophysiological correlates of attention-spreading in visual grouping. NeuroReport, 18, 93-98, 2007]. The present study examined the nature of object representations by manipulating the extent of grouping through connectedness, while controlling the symmetrical structure of bilateral stimuli. The electrophysiological results of two experiments consistently indicated that attention was guided twice in association with perceptual grouping in the early phase (N1, 150-200 msec poststimulus) and with the unity of an object in the later phase (N2pc, 310/330-390 msec). This suggests that there are two processes in object-based spatial selection, and these are discussed with regard to their cognitive mechanisms and object representations.

Imaging large volumes such as entire cells or small model organisms at nanoscale resolution seemed an unrealistic, rather tedious task so far. Now, technical advances have led to several electron microscopy (EM) large volume imaging techniques. One is array tomography, where ribbons of ultrathin serial sections are deposited on solid substrates like silicon wafers or glass coverslips. To ensure reliable retrieval of multiple ribbons from the boat of a diamond knife we introduce a substrate holder with 7 axes of translation or rotation specifically designed for that purpose. With this device we are able to deposit hundreds of sections in an ordered way in an area of 22 × 22 mm, the size of a coverslip. Imaging such arrays in a standard wide field fluorescence microscope produces reconstructions with 200 nm lateral resolution and 100 nm (the section thickness) resolution in z. By hierarchical imaging cascades in the scanning electron microscope (SEM), using a new software platform, we can address volumes from single cells to complete organs. In our first example, a cell population isolated from zebrafish spleen, we characterize different cell types according to their organelle inventory by segmenting 3D reconstructions of complete cells imaged with nanoscale resolution. In addition, by screening large numbers of cells at decreased resolution we can define the percentage at which different cell types are present in our preparation. With the second example, the root tip of cress, we illustrate how combining information from intermediate resolution data with high resolution data from selected regions of interest can drastically reduce the amount of data that has to be recorded. By imaging only the interesting parts of a sample considerably less data need to be stored, handled and eventually analysed. Our custom-designed substrate holder allows reproducible generation of section libraries, which can then be imaged in a hierarchical way. We demonstrate that EM

The resolutions of acquired image and volume data are ever increasing. However, the resolutions of commodity display devices remain limited. This leads to an increasing gap between data and display resolutions. To bridge this gap, the standard approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i.e., the output, and not the full size of the input. Multi-resolution representations, such as image mipmaps and volume octrees, are crucial in providing these operations direct access to any subset of the data at any resolution corresponding to the output. Despite its widespread use, this standard approach has some shortcomings in three important application areas, namely non-linear image operations, multi-resolution volume rendering, and large-scale image exploration. This dissertation presents new multi-resolution representations for large-scale images and volumes that address these shortcomings. Standard multi-resolution representations require low-pass pre-filtering for anti-aliasing. However, linear pre-filters do not commute with non-linear operations. This becomes problematic when applying non-linear operations directly to any coarse resolution levels in standard representations. Particularly, this leads to inaccurate output when applying non-linear image operations, e.g., color mapping and detail-aware filters, to multi-resolution images. Similarly, in multi-resolution volume rendering, this leads to inconsistency artifacts which manifest as erroneous differences in rendering outputs across resolution levels. To address these issues, we introduce the sparse pdf maps and sparse pdf volumes representations for large-scale images and volumes, respectively. These representations sparsely encode continuous probability density functions (pdfs) of multi-resolution pixel
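The non-commutation problem described above can be seen in a few lines. The sketch below (illustrative only, not the dissertation's method) builds one mipmap level with a 2x2 box filter, then applies a non-linear operation (thresholding) before and after downsampling; the two orders disagree.

```python
# Sketch: a 2x2-average mipmap level, and why linear pre-filtering
# does not commute with a non-linear operation (here: thresholding).
# Pure-Python illustration; real systems use GPU mipmaps or octrees.

def downsample(img):
    """One mipmap level: average each 2x2 block (a linear pre-filter)."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x+1] + img[y+1][x] + img[y+1][x+1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def threshold(img, t=0.5):
    """A simple non-linear image operation."""
    return [[1.0 if v > t else 0.0 for v in row] for row in img]

img = [[0.0, 1.0],
       [1.0, 0.0]]

a = threshold(downsample(img))   # filter first, then threshold
b = downsample(threshold(img))   # threshold at full res, then filter

print(a, b)  # a == [[0.0]], b == [[0.5]] -> the two orders disagree
```

Encoding per-pixel pdfs, as the sparse pdf representations do, preserves the value distribution so the non-linear operation can be applied consistently at any level.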

The Voronoi volume of simple fluids was previously made use of in connection with volume transport phenomena in nonequilibrium simple fluids. To investigate volume transport phenomena, it is important to develop a method to compute the Voronoi volume of fluids in nonequilibrium. In this work, as a first step to this goal, we investigate the equilibrium limit of the nonequilibrium Voronoi volume together with the related molar (molal) and specific volumes. It is proved that the equilibrium Voronoi volume is equivalent to the molar (molal) volume. The latter, in turn, is proved equivalent to the specific volume. This chain of equivalences provides an alternative procedure for computing the equilibrium Voronoi volume from the molar volume/specific volume. We also show approximate methods of computing the Voronoi and molar volumes from the information on the pair correlation function. These methods may be employed for their quick estimation, but also provide some aspects of the fluid structure and its relation to the Voronoi volume. The Voronoi volume obtained from computer simulations is fitted to a function of temperature and pressure in the region above the triple point but below the critical point. Since the fitting function is given in terms of reduced variables for the Lennard-Jones (LJ) model and the kindred volumes (i.e., specific and molar volumes) are in essence equivalent to the equation of state, the formula obtained is a reduced equation of state for simple fluids obeying the LJ model potential in the range of temperature and pressure examined and hence can be used for other simple fluids.
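The chain of equivalences can be sketched numerically: in equilibrium the average Voronoi volume per particle equals the molar volume divided by Avogadro's number, which equals the specific volume times the mass per particle. The density value below (liquid argon near its triple point) is an illustrative assumption, not a number from the paper.

```python
# Equilibrium equivalence chain: Voronoi volume per particle
# == molar volume / N_A == specific volume * (mass per particle).
N_A = 6.02214076e23        # Avogadro constant, 1/mol
M   = 39.948e-3            # molar mass of argon, kg/mol
rho = 1416.0               # mass density, kg/m^3 (assumed value)

v_molar    = M / rho               # molar volume, m^3/mol
v_particle = v_molar / N_A         # per-particle = equilibrium Voronoi volume
v_specific = 1.0 / rho             # specific volume, m^3/kg

# Consistency of the chain: v_particle == v_specific * mass_per_particle
mass_per_particle = M / N_A
print(v_particle, v_specific * mass_per_particle)
```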

...). The reason this is so is due to hierarchies that we take for granted. By hierarchies I mean that there is a layer of representation of us as individuals, as military professional, as members of a military unit and as citizens of an entire nation...

Full Text Available Measured bidirectional reflectance distribution function (BRDF) data have been used to represent complex interaction between lights and surface materials for photorealistic rendering. However, their massive size makes it hard to adopt them in practical rendering applications. In this paper, we propose an adaptive method for B-spline volume representation of measured BRDF data. It basically performs approximate B-spline volume lofting, which decomposes the problem into three sub-problems of multiple B-spline curve fitting along u-, v-, and w-parametric directions. Especially, it makes efficient use of knots in the multiple B-spline curve fitting and thereby accomplishes adaptive knot placement along each parametric direction of a resulting B-spline volume. The proposed method is quite useful to realize efficient data reduction while smoothing out the noises and keeping the overall features of BRDF data well. By applying the B-spline volume models of real materials for rendering, we show that the B-spline volume models are effective in preserving the features of material appearance and are suitable for representing BRDF data.
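One of the three curve-fitting sub-problems can be sketched with SciPy's smoothing splines, where the smoothing factor drives how many knots are placed (this is a stand-in for the paper's adaptive knot placement, and the sample signal is invented, not BRDF data):

```python
# Sketch of one parametric direction of the lofting: fit a smoothing
# B-spline with splrep; a larger smoothing factor s yields far fewer
# knots while keeping the overall shape (data reduction + smoothing).
import numpy as np
from scipy.interpolate import splrep, splev

u = np.linspace(0.0, 1.0, 200)
samples = np.sin(6 * u) + 0.01 * np.cos(40 * u)  # stand-in for one scanline

tck_tight = splrep(u, samples, s=0.0)    # interpolating: many knots
tck_loose = splrep(u, samples, s=0.05)   # smoothing: far fewer knots

print(len(tck_tight[0]), len(tck_loose[0]))   # knot counts: loose << tight
fit = splev(u, tck_loose)
print(float(np.max(np.abs(fit - samples))))   # small residual, fewer knots
```

Repeating such fits along the u-, v-, and w-directions, with knots merged per direction, is the lofting decomposition the abstract describes.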

User authentication has been widely used by biometric applications that work on unique bodily features, such as fingerprints, retina scan, and palm vessels recognition. This paper proposes a novel concept of biometric authentication by exploiting a user's medical history. Although medical history may not be absolutely unique to every individual person, the chances of having two persons who share an exactly identical trail of medical and prognosis history are slim. Therefore, in addition to common biometric identification methods, medical history can be used as an ingredient for generating Q&A challenges upon user authentication. This concept is motivated by a recent advancement in smart-card technology whereby future identity cards will be able to carry patients' medical history like a mobile database. Privacy, however, may be a concern when medical history is used for authentication. Therefore, in this paper, a new method is proposed for abstracting the medical data, by using attribute value taxonomies, into a hierarchical data tree (h-Data). Questions can be abstracted to various levels of resolution (hence sensitivity of private data) for use in the authentication process. The method is described and a case study is given in this paper.
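The abstraction idea can be sketched as a value taxonomy in which each concrete medical value has a path of increasingly specific ancestors; a challenge question generated at a shallow level reveals little private detail, while a deep level is more discriminating. The taxonomy content below is invented for illustration and is not the paper's h-Data schema.

```python
# Minimal sketch of abstracting a medical-history attribute through a
# value taxonomy: deeper levels are more sensitive and more specific.
taxonomy = {
    "amoxicillin": ["medication", "antibiotic", "penicillin", "amoxicillin"],
    "ibuprofen":   ["medication", "analgesic", "NSAID", "ibuprofen"],
}

def abstract(value, level):
    """Return the ancestor of `value` at the requested taxonomy level,
    clamped to the most specific term available."""
    path = taxonomy[value]
    return path[min(level, len(path) - 1)]

# A low-resolution challenge leaks little; a deep one is specific.
print(abstract("amoxicillin", 1))  # -> "antibiotic"
print(abstract("amoxicillin", 9))  # -> "amoxicillin"
```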

Throughout development, working memory is subject to capacity limits that severely constrain short-term storage. However, adults can massively expand the total amount of remembered information by grouping items into "chunks". Although infants also have been shown to chunk objects in memory, little is known regarding the limits of this…

The Hanford Environmental Dosimetry Upgrade Project was undertaken to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) in updated versions of the environmental pathway analysis models used at Hanford. The resulting second generation of Hanford environmental dosimetry computer codes is compiled in the Hanford Environmental Dosimetry System (Generation II, or GENII). The purpose of this coupled system of computer codes is to analyze environmental contamination resulting from acute or chronic releases to, or initial contamination of, air, water, or soil. This is accomplished by calculating radiation doses to individuals or populations. GENII is described in three volumes of documentation. The first volume describes the theoretical considerations of the system. The second volume is a Users' Manual, providing code structure, users' instructions, required system configurations, and QA-related topics. The third volume is a Code Maintenance Manual for the user who requires knowledge of code detail. It includes code logic diagrams, global dictionary, worksheets, example hand calculations, and listings of the code and its associated data libraries. 72 refs., 15 figs., 34 tabs

We construct, for any finite dimension $n$, a new hidden measurement model for quantum mechanics based on representing quantum transition probabilities by the volume of regions in projective Hilbert space. For $n=2$ our model is equivalent to the Aerts sphere model and serves as a generalization of it for dimensions $n \geq 3$. We also show how to construct a hidden variables scheme based on hidden measurements and we discuss how joint distributions arise in our hidden variables scheme and th...

Integrating 2 theoretical perspectives on predictor-criterion relationships, the present study developed and tested a hierarchical framework in which each five-factor model (FFM) personality trait comprises 2 DeYoung, Quilty, and Peterson (2007) facets, which in turn comprise 6 Costa and McCrae (1992) NEO facets. Both theoretical perspectives-the bandwidth-fidelity dilemma and construct correspondence-suggest that lower order traits would better predict facets of job performance (task performance and contextual performance). They differ, however, as to the relative merits of broad and narrow traits in predicting a broad criterion (overall job performance). We first meta-analyzed the relationship of the 30 NEO facets to overall job performance and its facets. Overall, 1,176 correlations from 410 independent samples (combined N = 406,029) were coded and meta-analyzed. We then formed the 10 DeYoung et al. facets from the NEO facets, and 5 broad traits from those facets. Overall, results provided support for the 6-2-1 framework in general and the importance of the NEO facets in particular. (c) 2013 APA, all rights reserved.

Full Text Available In this paper I will present different types of representation of hierarchical information inside a relational database. I will also compare them to find the best organization for specific scenarios.
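One of the classic representations such comparisons cover is the adjacency list (each row stores its parent), traversed with a recursive common table expression. The sketch below uses SQLite in-memory with an invented table; it is one candidate organization, not the paper's recommendation.

```python
# Adjacency-list representation of a hierarchy in a relational database,
# queried with a recursive CTE. Table and data are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE node (id INTEGER PRIMARY KEY, parent INTEGER, name TEXT);
    INSERT INTO node VALUES (1, NULL, 'root'),
                            (2, 1, 'child-a'),
                            (3, 1, 'child-b'),
                            (4, 2, 'grandchild');
""")

# Fetch the whole tree with each node's depth.
rows = db.execute("""
    WITH RECURSIVE tree(id, name, depth) AS (
        SELECT id, name, 0 FROM node WHERE parent IS NULL
        UNION ALL
        SELECT n.id, n.name, t.depth + 1
        FROM node n JOIN tree t ON n.parent = t.id
    )
    SELECT name, depth FROM tree ORDER BY depth, name
""").fetchall()
print(rows)  # [('root', 0), ('child-a', 1), ('child-b', 1), ('grandchild', 2)]
```

Alternatives such as nested sets or materialized paths trade cheaper subtree reads for costlier updates, which is exactly the kind of scenario-dependent choice the paper compares.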

Photography not only represents space. Space is produced photographically. Since its inception in the 19th century, photography has brought to light a vast array of represented subjects. Always situated in some spatial order, photographic representations have been operatively underpinned by social ... to the enterprises of the medium. This is the subject of Representational Machines: How photography enlists the workings of institutional technologies in search of establishing new iconic and social spaces. Together, the contributions to this edited volume span historical epochs, social environments, technological possibilities, and genre distinctions. Presenting several distinct ways of producing space photographically, this book opens a new and important field of inquiry for photography research.

This third volume can be roughly divided into two parts. The first part is devoted to the investigation of various properties of projective characters. Special attention is drawn to spin representations and their character tables and to various correspondences for projective characters. Among other topics, projective Schur index and projective representations of abelian groups are covered. The last topic is investigated by introducing a symplectic geometry on finite abelian groups. The second part is devoted to Clifford theory for graded algebras and its application to the corresponding theory

Full Text Available Public security incidents are becoming increasingly challenging with regard to their new features, including multi-scale mobility, multistage dynamic evolution, and spatiotemporal concurrency and uncertainty in the complex urban environment. However, the existing video models, which were designed for independent archiving or local analysis of surveillance video, seriously inhibit emergency response to these urgent requirements. Aiming at the explicit representation of change mechanisms in video, this paper proposes a novel hierarchical geovideo semantic model using UML. The model is characterized by the hierarchical representation of both data structure and semantics based on the three change-oriented domains (feature domain, process domain and event domain) instead of an overall semantic description of the video stream; it combines geographical semantics with video content semantics in support of global semantic association between multiple geovideo data. Public security incidents captured by video surveillance are examined as an example to illustrate the validity of this model.

Hierarchical matrix approximations are a promising tool for approximating low-rank matrices given the compactness of their representation and the economy of the operations between them. Integral and differential operators have been the major

In this paper the mathematical logic behind a hierarchical planning procedure is discussed. The planning procedure is used to derive production volumes of consumer products. The essence of the planning procedure is that first a commitment is made concerning the production volume for a

This volume goes beyond the understanding of symmetries and exploits them in the study of the behavior of both classical and quantum physical systems. Thus it is important to study the symmetries described by continuous (Lie) groups of transformations. We then discuss how we get operators that form a Lie algebra. Of particular interest to physics is the representation of the elements of the algebra and the group in terms of matrices and, in particular, the irreducible representations. These representations can be identified with physical observables. This leads to the study of the classical Lie algebras, associated with unitary, unimodular, orthogonal and symplectic transformations. We also discuss some special algebras in some detail. The discussion proceeds along the lines of the Cartan-Weyl theory via the root vectors and root diagrams and, in particular, the Dynkin representation of the roots. Thus the representations are expressed in terms of weights, which are generated by the application of the elemen...

Coordinated views have proven critical to the development of effective visualization environments. This results from the fact that a single view or representation of the data cannot show all of the intricacies of a given data set. Additionally, users will often need to correlate more data parameters than can effectively be integrated into a single visual display. Typically, development of multiple-linked views results in an ad hoc configuration of views and associated interactions. The hierarchical model we are proposing is geared towards more effective organization of such environments and the views they encompass. At the same time, this model can effectively integrate much of the prior work on interactive and visual frameworks. Additionally, we expand the concept of views to incorporate perceptual views. This is related to the fact that visual displays can have information encoded at various levels of focus. Thus, a global view of the display provides overall trends of the data while focusing in on individual elements provides detailed specifics. By integrating interaction and perception into a single model, we show how one impacts the other. Typically, interaction and perception are considered separately; however, when interaction is considered at a fundamental level and allowed to direct or modify the visualization directly, we must consider them simultaneously and examine how they impact one another.

The first half of this book contains the text of the first edition of LNM volume 830, Polynomial Representations of GLn. This classic account of matrix representations, the Schur algebra, the modular representations of GLn, and connections with symmetric groups, has been the basis of much research in representation theory. The second half is an Appendix, and can be read independently of the first. It is an account of the Littelmann path model for the case gln. In this case, Littelmann's 'paths' become 'words', and so the Appendix works with the combinatorics on words. This leads to the representation theory of the 'Littelmann algebra', which is a close analogue of the Schur algebra. The treatment is self-contained; in particular complete proofs are given of classical theorems of Schensted and Knuth.

The emergent mathematical philosophy of categorification is reshaping our view of modern mathematics by uncovering a hidden layer of structure in mathematics, revealing richer and more robust structures capable of describing more complex phenomena. Categorified representation theory, or higher representation theory, aims to understand a new level of structure present in representation theory. Rather than studying actions of algebras on vector spaces where algebra elements act by linear endomorphisms of the vector space, higher representation theory describes the structure present when algebras act on categories, with algebra elements acting by functors. The new level of structure in higher representation theory arises by studying the natural transformations between functors. This enhanced perspective brings into play a powerful new set of tools that deepens our understanding of traditional representation theory. This volume exhibits some of the current trends in higher representation theory and the diverse te...

This paper presents work on using hierarchical long term memory to reduce the memory requirements of nearest sequence memory (NSM) learning, a previously published, instance-based reinforcement learning algorithm. A hierarchical memory representation reduces the memory requirements by allowing traces to share common sub-sequences. We present moderated mechanisms for estimating discounted future rewards and for dealing with hidden state using hierarchical memory. We also present an experimental analysis of how the sub-sequence length affects the memory compression achieved and show that the reduced memory requirements do not affect the speed of learning. Finally, we analyse and discuss the persistence of the sub-sequences independent of specific trace instances.
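The memory saving from shared sub-sequences can be sketched with a prefix tree: identical prefixes across traces are stored once. This illustrates only the compression idea, not the NSM algorithm itself, and the traces below are invented.

```python
# Sketch of sub-sequence sharing: store traces in a trie so that
# common prefixes occupy a single chain of nodes.

def trie_size(traces):
    """Number of stored elements when traces share common prefixes."""
    root = {}
    count = 0
    for trace in traces:
        node = root
        for step in trace:
            if step not in node:
                node[step] = {}
                count += 1
            node = node[step]
    return count

traces = [
    ("obs1", "act1", "obs2", "act2"),
    ("obs1", "act1", "obs2", "act3"),   # shares a 3-step prefix with the first
    ("obs1", "act1", "obs3", "act1"),   # shares a 2-step prefix
]

flat = sum(len(t) for t in traces)   # elements stored independently
shared = trie_size(traces)           # fewer elements thanks to sharing
print(flat, shared)
```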

, and dialogue, of situated participants. The article includes a lengthy example of a poetic representation of one participant’s story, and the author comments on the potentials of ‘doing’ poetic representations as an example of writing in ways that challenges what sometimes goes unasked in participative social...

This paper presents a hierarchical model based on discrete event networks for robotic systems. Following the hierarchical approach, a Petri net is analysed as a network spanning from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for modelling and control of complex robotic systems. Such a system is structured, controlled and analysed in this paper using the Visual Object Net ++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented on computers and analysed using specialized programs. Implementation of the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local Petri model of a subsystem of the global robotic system. Since Petri models are simple enough to run on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets; discrete event systems are a pragmatic tool for modelling industrial systems. To highlight auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. The proposed simulation of the robotic system using timed Petri nets offers the opportunity to observe robot timing: by measuring transport and transmission times at given spots, graphs are obtained showing the average time for each transport activity, using the parameter sets of finished products.
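The basic place/transition mechanics underlying such models fit in a few lines: a transition is enabled when each of its input places holds a token, and firing it moves tokens from inputs to outputs. The robot cycle below is an invented toy example, not the paper's Visual Object Net ++ model.

```python
# Minimal place/transition Petri net sketch for a local-control level.

def enabled(marking, transition):
    return all(marking.get(p, 0) >= 1 for p in transition["in"])

def fire(marking, transition):
    m = dict(marking)
    for p in transition["in"]:
        m[p] -= 1                       # consume input tokens
    for p in transition["out"]:
        m[p] = m.get(p, 0) + 1          # produce output tokens
    return m

# Tiny robot cycle: idle --pick--> holding --place--> idle
t_pick  = {"in": ["idle", "part_ready"], "out": ["holding"]}
t_place = {"in": ["holding"], "out": ["idle", "part_done"]}

m = {"idle": 1, "part_ready": 1}
assert enabled(m, t_pick)
m = fire(m, t_pick)            # robot picks a part
m = fire(m, t_place)           # robot places it
print(m)  # {'idle': 1, 'part_ready': 0, 'holding': 0, 'part_done': 1}
```

Timed Petri nets additionally attach delays to transitions, which is how the transport times discussed above enter the simulation.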

This paper deals with the design of a supervision system using a hierarchy of models formed by graphs, in which the variables are the nodes and the causal relations between the variables are the arcs. To obtain a representation of the variables' evolutions which contains only the relevant features of their real evolutions, the causal relations are completed with qualitative transfer functions (QTFs) which roughly reproduce the behaviour of classical transfer functions. Major improvements have been made in the building of the hierarchical organization. First, the basic variables of the uppermost level and the causal relations between them are chosen. The next graph is built by adding intermediary variables to the upper graph. When the undermost graph has been built, the transfer function parameters corresponding to its causal relations are identified. The second task consists in the upwelling of the information from the undermost graph to the uppermost one. A fusion procedure for the causal relations has been designed to compute the QTFs relevant for each level. This procedure aims to reduce the number of parameters needed to represent an evolution at a high level of abstraction. These techniques have been applied to the hierarchical modelling of a nuclear process. (authors). 8 refs., 12 figs

The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.
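Reachability testing, the step the AHSM method accelerates, reduces on a flattened state machine to breadth-first search over the transition relation; the AHSM representation avoids building this flat machine in the first place. The sketch below is the flat baseline with an invented machine, not the authors' algorithm.

```python
# Baseline reachability on a flat state machine: breadth-first search.
from collections import deque

def reachable(transitions, start, goal):
    """transitions: dict mapping a state to its successor states."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        if s == goal:
            return True
        for nxt in transitions.get(s, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

machine = {"s0": ["s1"], "s1": ["s2", "s0"], "s2": [], "s3": ["s2"]}
print(reachable(machine, "s0", "s2"), reachable(machine, "s0", "s3"))  # True False
```

The cost here grows with the flat state space, which is exponential in the number of concurrent components; exploiting the asynchronous, hierarchical structure is what keeps the AHSM test efficient.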

Hierarchical (or mesoporous) zeolites have attracted significant attention during the first decade of the 21st century, and so far this interest continues to increase. There have already been several reviews giving detailed accounts of the developments emphasizing different aspects of this research topic. Until now, the main reason for developing hierarchical zeolites has been to achieve heterogeneous catalysts with improved performance but this particular facet has not yet been reviewed in detail. Thus, the present paper summarises and categorizes the catalytic studies utilizing hierarchical zeolites that have been reported hitherto. Prototypical examples from some of the different categories of catalytic reactions that have been studied using hierarchical zeolite catalysts are highlighted. This clearly illustrates the different ways that improved performance can be achieved with this family...

Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping with changing and increasing demands. Two-layer networks consist of one backbone network, which interconnects cluster networks. The clusters consist of nodes and links, which connect the nodes. One node in each cluster is a hub node, and the backbone interconnects the hub nodes of each cluster and thus the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e. selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks...

A short overview of micromechanical models of hierarchical materials (hybrid composites, biomaterials, fractal materials, etc.) is given. Several examples of the modeling of strength and damage in hierarchical materials are summarized, among them, a 3D FE model of hybrid composites with nanoengineered matrix, a fiber bundle model of UD composites with hierarchically clustered fibers and a 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them, the investigations of the effects of load redistribution between reinforcing elements at different scale levels, of the possibilities to control different material properties and to ensure synergy of strengthening effects at different scale levels and using the nanoreinforcement effects. The main future directions...

This report describes the hierarchical maps used as a central data structure in the Corundum framework. We describe its most prominent features, argue for its usefulness and briefly describe some of the software prototypes implemented using the technology.

Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.
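The storage reduction rests on replacing smooth off-diagonal blocks by low-rank factors: an n-by-n block stored as an n-by-k and a k-by-n factor costs 2nk numbers instead of n². The sketch below compresses one well-separated block of an assumed example kernel with a truncated SVD; real hierarchical-matrix codes use the full block tree and cheaper factorizations such as ACA.

```python
# Low-rank compression of one admissible (well-separated) kernel block.
import numpy as np

n, k = 200, 4
x = np.linspace(1.0, 2.0, n)
y = np.linspace(5.0, 6.0, n)
block = 1.0 / np.abs(x[:, None] - y[None, :])   # smooth far-field block

U, s, Vt = np.linalg.svd(block, full_matrices=False)
U_k = U[:, :k] * s[:k]                          # rank-k factors
V_k = Vt[:k, :]

err = np.linalg.norm(block - U_k @ V_k) / np.linalg.norm(block)
dense_cost, lowrank_cost = n * n, 2 * n * k
print(err, dense_cost, lowrank_cost)            # small error, 40000 vs 1600
```

Applying this blockwise over a hierarchical partition of the matrix is what brings storage and matrix-vector cost down to almost linear in n.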

This book is intended to serve as a textbook for a course in Representation Theory of Algebras at the beginning graduate level. The text has two parts. In Part I, the theory is studied in an elementary way using quivers and their representations. This is a very hands-on approach and requires only basic knowledge of linear algebra. The main tool for describing the representation theory of a finite-dimensional algebra is its Auslander-Reiten quiver, and the text introduces these quivers as early as possible. Part II then uses the language of algebras and modules to build on the material developed before. The equivalence of the two approaches is proved in the text. The last chapter gives a proof of Gabriel’s Theorem. The language of category theory is developed along the way as needed.

Stereotypic presumptions about gender affect the design process, both in relation to how users are understood and how products are designed. As a way to decrease the influence of stereotypic presumptions in the design process, we propose not to disregard the aspect of gender in the design process, as the perspective brings valuable insights on different approaches to technology, but instead to view gender through a value lens. Contributing to this perspective, we have developed Value Representations as a design-oriented instrument for staging a reflective dialogue with users. Value Representations...

Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts from three levels: the image level, the semantic level, and the prior level. At the image level, we introduce two types of contextual features, appearance context features and interaction context features, to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on the deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts: scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at different levels. Through the hierarchical context model, contexts at different levels jointly contribute to event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts at each level improves event recognition performance, and jointly integrating the three levels of contexts through our hierarchical model achieves the best performance.

While Big Data technologies are transforming our ability to analyze ever larger volumes of Earth science data, practical constraints continue to limit our ability to compare data across datasets from different sources in an efficient and robust manner. Within a single data collection, invariants such as file format, grid type, and spatial resolution greatly simplify many types of analysis (often implicitly). However, when analysis combines data across multiple data collections, researchers are generally required to implement data transformations (i.e., "data preparation") to provide appropriate invariants. These transformations include changing file formats, ingesting into a database, and/or regridding to a common spatial representation, and they can either be performed once, statically, or each time the data is accessed. At the very least, this process is inefficient from the perspective of the community, as each team selects its own representation and privately implements the appropriate transformations. No doubt there are disadvantages to any "universal" representation, but we posit that major benefits would be obtained if a suitably flexible spatial representation could be standardized along with tools for transforming to/from that representation. We regard this as part of the historic trend in data publishing. Early datasets used ad hoc formats and lacked metadata. As better tools evolved, published data began to use standardized formats (e.g., HDF and netCDF) with attached metadata. We propose that the modern need to perform analysis across data sets should drive a new generation of tools that support a standardized spatial representation. More specifically, we propose the hierarchical triangular mesh (HTM) as a suitable "generic" representation that permits standard transformations to/from native representations in use today, as well as tools to convert/regrid existing datasets onto that representation.

A hierarchical spatial framework for large-scale, long-term forest landscape planning is presented along with example policy analyses for a 560,000 ha area of the Oregon Coast Range. The modeling framework suggests utilizing the detail provided by satellite imagery to track forest vegetation condition and to represent fine-scale features, such as riparian...

In this paper we address the problem of human activity modelling and recognition by means of a hierarchical representation of mined dense spatiotemporal features. At each level of the hierarchy, the proposed method selects feature constellations that are increasingly discriminative and...

This volume is important because despite various external representations, such as analogies, metaphors, and visualizations being commonly used by physics teachers, educators and researchers, the notion of using the pedagogical functions of multiple representations to support teaching and learning is still a gap in physics education. The research presented in the three sections of the book is introduced by descriptions of various psychological theories that are applied in different ways for designing physics teaching and learning in classroom settings. The following chapters of the book illustrate teaching and learning with respect to applying specific physics multiple representations in different levels of the education system and in different physics topics using analogies and models, different modes, and in reasoning and representational competence. When multiple representations are used in physics for teaching, the expectation is that they should be successful. To ensure this is the case, the implementati...

Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. Traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods have achieved better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection and ignore texture details. In this paper, we propose a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), which exploits a hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which we leverage hierarchical convolutional features to construct an image pyramid representation. Second, our proposed deep network can directly exploit convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.
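
The quantization-error idea described above can be sketched in a few lines; the array sizes, the sign() binarization, and the 0.1 weighting are hypothetical choices for illustration, not the paper's actual HRNH loss:

```python
import numpy as np

# Hypothetical sketch of the quantization/balance idea: continuous embeddings
# h are binarized with sign(), and the loss penalizes (1) the gap between h
# and the discrete codes and (2) imbalance of the bits across the batch.
rng = np.random.default_rng(0)
h = rng.normal(size=(8, 16))      # batch of 8 continuous 16-bit embeddings

codes = np.sign(h)                # discrete binary codes in {-1, +1}
quantization_error = np.mean((h - codes) ** 2)
balance_penalty = np.mean(np.mean(h, axis=0) ** 2)  # each bit ~ half +1, half -1

loss = quantization_error + 0.1 * balance_penalty
print(codes.shape, float(loss))
```

A semantic-similarity term (omitted here) would be added to this loss in any real deep hashing objective.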

In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.

Interval neutrosophic set (INS) is a generalization of interval valued intuitionistic fuzzy set (IVIFS), whose membership and non-membership values of elements consist of fuzzy ranges, while single valued neutrosophic set (SVNS) is regarded as an extension of intuitionistic fuzzy set (IFS). In this paper, we extend the hierarchical clustering techniques proposed for IFSs and IVIFSs to SVNSs and INSs, respectively. Based on the traditional hierarchical clustering procedure, the single valued neutrosophic aggregation operator, and the basic distance measures between SVNSs, we define a single valued neutrosophic hierarchical clustering algorithm for clustering SVNSs. Then we extend the algorithm to classify interval neutrosophic data. Finally, we present some numerical examples in order to show the effectiveness and availability of the developed clustering algorithms.
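
One agglomerative step of such a procedure can be sketched as follows; the (T, I, F) values, the normalized Hamming distance, and centroid linkage are illustrative assumptions, not the paper's exact operators:

```python
import numpy as np

# Toy sketch: each sample is a single valued neutrosophic set given by
# (truth, indeterminacy, falsity) triples over 3 attributes; distance is the
# normalized Hamming distance; clustering merges the closest pair of clusters.
def svns_distance(a, b):
    # normalized Hamming distance between two SVNSs (arrays of (T, I, F) rows)
    return float(np.mean(np.abs(a - b)))

samples = [
    np.array([[0.9, 0.1, 0.1], [0.8, 0.2, 0.1], [0.7, 0.1, 0.2]]),
    np.array([[0.8, 0.2, 0.2], [0.9, 0.1, 0.1], [0.8, 0.1, 0.1]]),
    np.array([[0.1, 0.2, 0.9], [0.2, 0.1, 0.8], [0.1, 0.3, 0.9]]),
]

def centroid(cluster):
    # aggregate a cluster by the arithmetic mean of its member SVNSs
    return np.mean([samples[i] for i in cluster], axis=0)

clusters = [[i] for i in range(len(samples))]
# one agglomerative step: merge the closest pair of clusters
pairs = [(i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))]
i, j = min(pairs, key=lambda p: svns_distance(centroid(clusters[p[0]]),
                                              centroid(clusters[p[1]])))
clusters[i] += clusters[j]
del clusters[j]
print(clusters)  # samples 0 and 1 are close, so they merge first
```

Repeating the merge step until one cluster remains yields the full hierarchy (dendrogram).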

This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined...

Contemporary communicational and informational processes contribute to the shaping of our physical environment by having a powerful influence on the process of design. Applications of virtual reality (VR) are transforming the way architecture is conceived and produced by introducing dynamic... elements into the process of design. Through its immersive properties, virtual reality allows access to a spatial experience of a computer model very different to both screen based simulations as well as traditional forms of architectural representation. The dissertation focuses on processes of the current... representation? How is virtual reality used in public participation and how do virtual environments affect participatory decision making? How does VR thus affect the physical world of built environment? Given the practical collaborative possibilities of immersive technology, how can they best be implemented...

Despite the presence of shared characteristics across the different domains modulating Broca's area activity (e.g., structural analogies, as between language and music, or representational homologies, as between action execution and action observation), the question of what exactly the common denominator of such diverse brain functions is, with respect to the function of Broca's area, remains largely a debated issue. Here, we suggest that an important computational role of Broca's area may be to process hierarchical structures in a wide range of functional domains.

We introduce a dual-arm manipulator system with four-fingered hands that processes a large volume of information at high speed. It is under research and development for many types of work in harsh conditions. Specifically, the instruction units have been organized hierarchically, with the motion control system as a real-time processing unit and the task planning unit as a non-real-time processing unit, and the interface with the operator is provided through the task planning unit. In addition, high-speed processing of large volumes of information has been realized by decentralizing the motion control unit by function, hierarchizing the high-speed processing units, and developing high-speed transmission ICs that do not depend on the computer OS, so as to avoid transmission delays. (author)

We study the hierarchical wave functions on a sphere and on a torus. We simplify some wave functions on a sphere or a torus using the analytic properties of wave functions. The open question of constructing the wave function for quasi-electron excitation on a torus is also solved in this paper. (author)

Materials design is often at the forefront of technological innovation. While there has always been a push to generate increasingly low-density materials, such as aerogels or hydrogels, more recently the idea of bicontinuous structures has come into play. This review covers some of the methods and applications for generating both porous and hierarchically porous structures.

This paper is focused on the hierarchical perspective, one of the methods for representing space that was used before the discovery of the Renaissance linear perspective. The hierarchical perspective has a more or less pronounced scientific character and its study offers us a clear image of the way the representatives of the cultures that developed it used to perceive the sensitive reality. This type of perspective is an original method of representing three-dimensional space on a flat surface, which characterises the art of Ancient Egypt and much of the art of the Middle Ages, being identified in the Eastern European Byzantine art, as well as in the Western European Pre-Romanesque and Romanesque art. At the same time, the hierarchical perspective is also present in naive painting and infantile drawing. Reminiscences of this method can be recognised also in the works of some precursors of the Italian Renaissance. The hierarchical perspective can be viewed as a subjective ranking criterion, according to which the elements are visually represented by taking into account their relevance within the image while perception is ignored. This paper aims to show how the main objective of the artists of those times was not to faithfully represent the objective reality, but rather to emphasize the essence of the world and its perennial aspects. This may represent a possible explanation for the refusal of perspective in the Egyptian, Romanesque and Byzantine painting, characterised by a marked two-dimensionality.

Four experiments investigated the classic issue in semantic memory of whether people organize categorical information in hierarchies and use inference to retrieve information from them, as proposed by Collins and Quillian (1969). Past evidence has focused on RT to confirm sentences such as "All birds are animals" or "Canaries breathe." However,…

PDEs with stochastic data usually lead to very high-dimensional algebraic problems which easily become infeasible for numerical computations because of the dense coupling structure of the discretised stochastic operator. Recently, an adaptive...

Human activity recognition is an essential task for robots to effectively and efficiently interact with the end users. Many machine learning approaches for activity recognition systems have been proposed recently. Most of these methods are built upon a strong assumption that the labels in the...

A filter for water purification that is very thin, with small interstices and high surface area per unit mass, can be made with nanofibers. The mechanical strength of a very thin sheet of nanofibers is not great enough to withstand the pressure drop of the fluid flowing through. If the sheet of nanofibers is made thicker, the strength will increase, but the flow will be reduced to an impractical level. An optimized filter can be made with nanometer scale structures supported on micron scale structures, which are in turn supported on millimeter scale structures. This leads to a durable hierarchical structure to optimize the filtration efficiency with a minimum amount of material. Buckling coils [Tao Han, Darrell H. Reneker, Alexander L. Yarin, Polymer, Volume 48, Issue 20 (September 21, 2007), pp. 6064-6076], electrical bending coils [Darrell H. Reneker and Alexander L. Yarin, Polymer, Volume 49, Issue 10 (2008), pp. 2387-2425, DOI:10.1016/j.polymer.2008.02.002, feature article], and pendulum coils [T. Han, D.H. Reneker, A.L. Yarin, Polymer, Volume 49 (2008), pp. 2160-2169] spanning dimensions from a few microns to a few centimeters can be collected from a single jet by controlling the position and motion of a collector. Attractive routes to the design and construction of hierarchical structures for filtration are based on nanofibers supported on small coils that are in turn supported on larger coils, which are supported on even larger overlapping coils. Such "top-down" hierarchical structures are easy to make by electrospinning. In one example, a thin hierarchical structure was made, with a high surface area and small interstices, having an open area of over 50%, with the thinnest fibers supported at least every 15 microns.

The sheer volume of biomedical research threatens to overwhelm the capacity of individuals to effectively process this information. Adding to this challenge is the multiscale nature of both biological systems and the research community as a whole. Given this volume and rate of generation of biomedical information, the research community must develop methods for robust representation of knowledge in order for individuals, and the community as a whole, to "know what they know." Despite increasing emphasis on "data-driven" research, the fact remains that researchers guide their research using intuitively constructed conceptual models derived from knowledge extracted from publications, knowledge that is generally qualitatively expressed using natural language. Agent-based modeling (ABM) is a computational modeling method that is suited to translating the knowledge expressed in biomedical texts into dynamic representations of the conceptual models generated by researchers. The hierarchical object-class orientation of ABM maps well to biomedical ontological structures, facilitating the translation of ontologies into instantiated models. Furthermore, ABM is suited to producing the nonintuitive behaviors that often "break" conceptual models. Verification in this context is focused at determining the plausibility of a particular conceptual model, and qualitative knowledge representation is often sufficient for this goal. Thus, utilized in this fashion, ABM can provide a powerful adjunct to other computational methods within the research process, as well as providing a metamodeling framework to enhance the evolution of biomedical ontologies.

Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.
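
The point-process connection noted above can be illustrated with a short simulation; the covariates, coefficients, and cell area are hypothetical, and this is only the textbook Poisson link, not any of the paper's full hierarchical models:

```python
import numpy as np

# Sketch: with a log-linear intensity lambda = exp(x @ beta), cell counts are
# Poisson(lambda * area), and presence-absence is the same process observed
# only as "count > 0", so P(presence) = 1 - exp(-lambda * area).
rng = np.random.default_rng(1)
beta = np.array([0.5, -1.0])          # hypothetical regression coefficients
x = np.array([1.0, 0.3])              # covariates of one grid cell
area = 2.0
lam = np.exp(x @ beta)                # point-process intensity in the cell

counts = rng.poisson(lam * area, size=100_000)  # repeated count observations
presence = counts > 0                           # the same data, thresholded

empirical = presence.mean()
theoretical = 1.0 - np.exp(-lam * area)
print(round(float(empirical), 3), round(float(theoretical), 3))
```

The empirical presence frequency matches the closed-form probability, showing how count and presence-absence likelihoods derive from one intensity surface.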

In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.

We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest level key-frames are recursively clustered using a novel pairwise K-means clustering approach with temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. We also propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
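
The temporal-consecutiveness idea can be sketched as follows; the toy one-dimensional color features and the merge-only-adjacent rule are illustrative assumptions, not the paper's exact pairwise K-means algorithm:

```python
import numpy as np

# Sketch: cluster frame color features under a temporal-consecutiveness
# constraint by only ever merging *adjacent* segments, so every cluster
# remains a contiguous run of frames (a shot).
features = np.array([[0.1], [0.12], [0.11], [0.8], [0.82], [0.4]])  # per-frame feature

segments = [[i] for i in range(len(features))]
target_clusters = 3
while len(segments) > target_clusters:
    means = [np.mean(features[s], axis=0) for s in segments]
    # distances between temporally adjacent segments only
    d = [float(np.linalg.norm(means[k] - means[k + 1]))
         for k in range(len(segments) - 1)]
    k = int(np.argmin(d))
    segments[k] = segments[k] + segments[k + 1]  # merge the closest neighbors
    del segments[k + 1]

print(segments)
```

Every resulting cluster is a contiguous frame range, which is exactly what makes the summary browsable as a shot hierarchy.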

Traditional electrospun nanofibers have a myriad of applications ranging from scaffolds for tissue engineering to components of biosensors and energy harvesting devices. The generally smooth one-dimensional structure of the fibers has stood as a limitation to several interesting novel applications. Control of fiber diameter, porosity and collector geometry will be briefly discussed, as will more traditional methods for controlling fiber morphology and fiber mat architecture. The remainder of the review will focus on new techniques to prepare hierarchically structured fibers. Fibers with hierarchical primary structures—including helical, buckled, and beads-on-a-string fibers, as well as fibers with secondary structures, such as nanopores, nanopillars, nanorods, and internally structured fibers and their applications—will be discussed. These new materials with helical/buckled morphology are expected to possess unique optical and mechanical properties with possible applications for negative refractive index materials, highly stretchable/high-tensile-strength materials, and components in microelectromechanical devices. Core-shell type fibers enable a much wider variety of materials to be electrospun and are expected to be widely applied in the sensing, drug delivery/controlled release fields, and in the encapsulation of live cells for biological applications. Materials with a hierarchical secondary structure are expected to provide new superhydrophobic and self-cleaning materials.

Representational momentum, the tendency for memory to be distorted in the direction of an implied transformation, suggests that dynamics are an intrinsic part of perceptual representations. We examined the effect of attention on dynamic representation by testing for representational momentum under conditions of distraction. Forward memory shifts increase when attention is divided. Attention may be involved in halting but not in maintaining dynamic representations.

This squib studies the order in which elements are added to the shared context of interlocutors in a conversation. It focuses on context updates within one hierarchical structure and argues that structurally higher elements are entered into the context before lower elements, even if the structurally higher elements are pronounced after the lower elements. The crucial data are drawn from a comparison of relative clauses in two head-initial languages, English and Icelandic, and two head-final languages, Korean and Japanese. The findings have consequences for any theory of a dynamic semantics.

Many real-world networks exhibit hierarchical organization. Previous models of hierarchies within relational data have focused on binary trees; however, for many networks it is unknown whether there is hierarchical structure, and if there is, a binary tree might not account well for it. We propose a generative Bayesian model that is able to infer whether hierarchies are present or not from a hypothesis space encompassing all types of hierarchical tree structures. For efficient inference we propose a collapsed Gibbs sampling procedure that jointly infers a partition and its hierarchical structure. On synthetic and real data we demonstrate that our model can detect hierarchical structure, leading to better link prediction than competing models. Our model can be used to detect if a network exhibits hierarchical structure, thereby leading to a better comprehension and statistical account of the network.

We introduce the Hierarchically Interacting Particle Neural Network (HIP-NN) to model molecular properties from datasets of quantum calculations. Inspired by a many-body expansion, HIP-NN decomposes properties, such as energy, as a sum over hierarchical terms. These terms are generated from a neural network—a composition of many nonlinear transformations—acting on a representation of the molecule. HIP-NN achieves the state-of-the-art performance on a dataset of 131k ground state organic molecules and predicts energies with 0.26 kcal/mol mean absolute error. With minimal tuning, our model is also competitive on a dataset of molecular dynamics trajectories. In addition to enabling accurate energy predictions, the hierarchical structure of HIP-NN helps to identify regions of model uncertainty.
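
The hierarchical-readout idea can be sketched with a tiny network; the weights, sizes, and tanh nonlinearity are hypothetical stand-ins, not HIP-NN's actual architecture:

```python
import numpy as np

# Minimal sketch of hierarchical energy terms: each layer of a small network
# contributes its own readout term, and the predicted energy is the sum over
# these terms (deeper terms capture higher-order corrections).
rng = np.random.default_rng(0)
x = rng.normal(size=4)                            # toy molecular descriptor

W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(8, 8))
r1, r2 = rng.normal(size=8), rng.normal(size=8)   # per-layer readout weights

h1 = np.tanh(W1 @ x)        # first interaction block
h2 = np.tanh(W2 @ h1)       # second interaction block

terms = [r1 @ h1, r2 @ h2]  # hierarchical energy contributions
energy = sum(terms)
print(len(terms), float(energy))
```

Because the total is an explicit sum, the relative magnitude of the deeper terms can serve as the kind of built-in uncertainty signal the abstract mentions.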

This paper aims to develop a hierarchical risk assessment model using the newly-developed evidential reasoning (ER) rule, which constitutes a generic conjunctive probabilistic reasoning process. In this paper, we first provide a brief introduction to the basics of the ER rule and emphasize its strengths in representing and aggregating uncertain information from multiple experts and sources. Further, we discuss the key steps of developing the hierarchical risk assessment framework systematically, including (1) formulation of the risk assessment hierarchy; (2) representation of both qualitative and quantitative information; (3) elicitation of attribute weights and information reliabilities; (4) aggregation of assessment information using the ER rule; and (5) quantification and ranking of risks using utility-based transformation. The proposed hierarchical risk assessment framework can potentially be implemented in various complex and uncertain systems. A case study on the fire/explosion risk assessment of marine vessels demonstrates the applicability of the proposed risk assessment model.
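
The conjunctive aggregation step can be illustrated in heavily simplified form; this is a plain Dempster-style product combination with made-up belief degrees, ignoring the ER rule's weight and reliability refinements:

```python
import numpy as np

# Simplified sketch of conjunctive evidence combination: two experts assign
# belief degrees to risk grades (low, medium, high); the combined beliefs
# are the normalized element-wise products.
expert_a = np.array([0.6, 0.3, 0.1])  # beliefs over (low, medium, high)
expert_b = np.array([0.5, 0.4, 0.1])

joint = expert_a * expert_b           # conjunctive (product) combination
combined = joint / joint.sum()        # renormalize away the conflict mass
print(np.round(combined, 3))
```

Agreement between the experts sharpens the combined distribution toward the grade they both favor, which is the qualitative behavior the ER rule also exhibits.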

I define a set of conditions that the most general hierarchical Yukawa mass matrices have to satisfy so that the leading rotations in the diagonalization matrix are a pair of (2,3) and (1,2) rotations. In addition to Fritzsch structures, examples of such hierarchical structures include also matrices with (1,3) elements of the same order or even much larger than the (1,2) elements. Such matrices can be obtained in the framework of a flavor theory. To leading order, the values of the angle in the (2,3) plane (s_23) and the angle in the (1,2) plane (s_12) do not depend on the order in which they are taken when diagonalizing. We find that any of the Cabibbo-Kobayashi-Maskawa matrix parametrizations that consist of at least one (1,2) and one (2,3) rotation may be suitable. In the particular case when the s_13 diagonalization angles are sufficiently small compared to the product s_12 s_23, two special CKM parametrizations emerge: the R_12 R_23 R_12 parametrization follows with s_23 taken before the s_12 rotation, and vice versa for the R_23 R_12 R_23 parametrization. (author)
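
The leading-rotation behavior can be checked numerically on a toy hierarchical symmetric matrix; the entries are hypothetical, and this is a plain Jacobi-rotation sketch rather than the paper's flavor-theory construction:

```python
import numpy as np

def jacobi_rotation(M, p, q):
    # Givens rotation in the (p, q) plane that zeroes M[p, q] of symmetric M
    theta = 0.5 * np.arctan2(2 * M[p, q], M[q, q] - M[p, p])
    G = np.eye(3)
    c, s = np.cos(theta), np.sin(theta)
    G[p, p], G[q, q], G[p, q], G[q, p] = c, c, s, -s
    return G

# Hierarchical symmetric matrix: m33 >> m22 >> m11, small off-diagonals.
M = np.array([[1e-4, 1e-3, 1e-3],
              [1e-3, 1e-2, 5e-2],
              [1e-3, 5e-2, 1.0]])

G23 = jacobi_rotation(M, 1, 2)
M1 = G23.T @ M @ G23           # leading (2,3) rotation first
G12 = jacobi_rotation(M1, 0, 1)
M2 = G12.T @ M1 @ G12          # then the (1,2) rotation

off = lambda A: np.max(np.abs(A - np.diag(np.diag(A))))
print(off(M), off(M2))
```

Two rotations already shrink the off-diagonal part by more than an order of magnitude, consistent with the (2,3) and (1,2) rotations being the leading ones for hierarchical structures.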

Assessing the potential impact on environmental and human health from the production and use of chemicals or from polluted sites involves a multi-criteria evaluation scheme. A priori, several parameters must be addressed, e.g., production tonnage, specific release scenarios, and geographical and site-specific factors, in addition to various substance-dependent parameters. Further socio-economic factors may be taken into consideration. The number of parameters to be included may well appear to be prohibitive for developing a sensible model. The study introduces hierarchical partial order ranking (HPOR), which remedies this problem. In HPOR, the original parameters are initially grouped based on their mutual connection and a set of meta-descriptors is derived representing the ranking corresponding to the single groups of descriptors, respectively. A second partial order ranking is carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritisation of polluted sites is given. - Hierarchical partial order ranking of polluted sites has been developed for prioritization based on a large number of parameters.
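
The average-rank idea for a partial order can be shown on a toy example; the site names and descriptor values are invented, and brute-force enumeration of linear extensions stands in for the efficient estimators used in practice:

```python
from itertools import permutations

# Toy sketch of partial order ranking: sites are compared by dominance on all
# descriptors (higher = worse), and each site's score is its average rank over
# all linear extensions of the resulting partial order.
sites = {
    "A": (3, 4),
    "B": (2, 2),
    "C": (1, 3),
    "D": (1, 1),
}
names = list(sites)

def dominates(a, b):
    # a dominates b if a is at least as bad on every descriptor (and not equal)
    return all(x >= y for x, y in zip(sites[a], sites[b])) and sites[a] != sites[b]

linear_extensions = [
    p for p in permutations(names)
    if all(p.index(a) < p.index(b) for a in names for b in names if dominates(a, b))
]
avg_rank = {n: sum(p.index(n) + 1 for p in linear_extensions) / len(linear_extensions)
            for n in names}
print(avg_rank)
```

Incomparable sites (here B and C) receive tied average ranks, which is exactly how average ranks turn a partial order into a usable total prioritization.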

The class of Archimax copulas is generalized to nested and hierarchical Archimax copulas in several ways. First, nested extreme-value copulas or nested stable tail dependence functions are introduced to construct nested Archimax copulas based on a single frailty variable. Second, a hierarchical construction of d-norm generators is presented to construct hierarchical stable tail dependence functions and thus hierarchical extreme-value copulas. Moreover, one can, by itself or additionally, introduce nested frailties to extend Archimax copulas to nested Archimax copulas in a similar way as nested Archimedean copulas extend Archimedean copulas. Further results include a general formula for the density of Archimax copulas.

It is important to monitor feedback related to the intended result of an action while executing that action. This monitoring process occurs hierarchically; that is, sensorimotor processing occurs at a lower level, and conceptual representation of action goals occurs at a higher level. Although the hierarchical nature of self-monitoring may derive…

The paper deals with the idea of a research method for hierarchical multilayer routing systems. The method represents a composition of methods from graph theory, reliability theory, probability theory, etc. These methods are applied to the solution of different analysis and optimization subtasks, and are systemically connected and coordinated with each other through a uniform set-theoretic representation of the object of research. Hierarchical multilayer routing systems are considered as infrastructure facilities (gas and oil pipelines, automobile and railway networks, systems of power supply and communication) that distribute material resources, energy or information using hierarchically nested routing functions. For illustration, the theoretical constructions are applied to determining the probability of the up state of a specific infocommunication system. The author shows the possibility of constructively combining the graph representation of the structure of the object of research with a logical-probabilistic method for analyzing its reliability indices, through a uniform set-theoretic representation of its elements and of the processes proceeding in them.

This paper describes a system developed to create computer-based jazz improvisation solos. The generation of the improvisation material uses interactive evolution, based on a dual genetic representation: a basic melody line representation with energy constraints ("rubber band") and a hierarchic… developed for this specific type of music. This is the first published part of an ongoing research project in generative jazz, based on probabilistic and evolutionary strategies.

In this review, the dielectric permittivity of dielectric mixtures is discussed in view of the spectral density representation method. A distinct representation is derived for predicting the dielectric properties (permittivities ε) of mixtures. The presentation of the dielectric properties is based on a scaled permittivity approach, ξ = (εe − εm)/(εi − εm), where the subscripts e, m and i denote the dielectric permittivities of the effective, matrix and inclusion media, respectively [Tuncer, E. J. Phys.: Condens. Matter 2005, 17, L125]. This novel representation transforms the spectral density formalism into a form similar to the distribution-of-relaxation-times method of dielectric relaxation. Consequently, I propose that any dielectric relaxation formula, e.g., the Havriliak-Negami empirical dielectric relaxation expression, can be adopted as a scaled permittivity. The presented scaled permittivity representation has the potential to be improved and implemented into existing data-analysis routines for dielectric relaxation; however, the information to extract would be the topological/morphological description of the mixtures. To arrive at this description, one needs to know the dielectric properties of the constituents and of the composite prior to the spectral analysis. To illustrate the strength of the representation and confirm the proposed hypothesis, the Landau-Lifshitz/Looyenga (LLL) expression [Looyenga, H. Physica 1965, 31, 401] is selected. The structural information of a mixture obeying LLL is extracted for different volume fractions of phases. Both an in-house computational tool based on the Monte Carlo method to solve inverse integral transforms and the proposed empirical scaled permittivity expression are employed to estimate the spectral density function of the LLL expression. The estimated spectral functions for mixtures with different inclusion concentrations show similarities; they are composed of a couple of bell…

The development of large-scale ecological models depends implicitly on a concept known as hierarchy theory, which views biological systems as a series of hierarchical levels (i.e., organism, population, trophic level, ecosystem). The theory states that an explanation of a biological phenomenon is provided when it is shown to be the consequence of the activities of the system's components, which are themselves systems at the next lower level of the hierarchy. Thus, the behavior of a population is explained by the behavior of the organisms in the population. The initial step in any modeling project is, therefore, to identify the system components and the interactions between them. A series of examples of transmutations in aquatic and terrestrial ecosystems is presented to show how and why changes occur. The types of changes are summarized, and possible implications of transmutation for hierarchy theory, for the modeler, and for the ecological theoretician are discussed.

The "raison d'être" of hierarchical clustering theory stems from one basic phenomenon: the notorious non-transitivity of similarity relations. In spite of the fact that very often two objects may be quite similar to a third without being that similar to each other, one still wants to classify objects according to their similarity. This should be achieved by grouping them into a hierarchy of non-overlapping clusters such that any two objects in one cluster appear to be more related to each other than they are to objects outside this cluster. In everyday life, as well as in essentially every field of scientific investigation, there is an urge to reduce complexity by recognizing and establishing reasonable classification schemes. Unfortunately, this is counterbalanced by the experience of seemingly unavoidable deadlocks caused by the existence of sequences of objects, each comparatively similar to the next, but the last rather different from the first.

Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
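The group-wise idea can be made concrete with a toy sketch (an illustrative stand-in, not the authors' method or any special-purpose hardware): a greedy bit-flip search is first run inside small groups of variables, then inside progressively larger merged groups. The group size, iteration count and seed below are arbitrary choices for the demonstration.

```python
import random

def local_search(x, cost, indices, iters=200, seed=0):
    """Greedy bit-flip search restricted to the variables in `indices`."""
    rng = random.Random(seed)
    best = cost(x)
    for _ in range(iters):
        i = rng.choice(indices)
        x[i] ^= 1              # try flipping one binary variable
        c = cost(x)
        if c <= best:
            best = c           # keep improving (or equal-cost) moves
        else:
            x[i] ^= 1          # revert worsening moves
    return best

def hierarchical_optimise(n, cost, group_size=4):
    """Optimise small groups first, then doubled groups, up to all n variables."""
    x = [0] * n
    size = group_size
    while size <= n:
        for start in range(0, n, size):
            local_search(x, cost, list(range(start, min(start + size, n))))
        size *= 2              # merge groups and repeat at the next level
    return x, cost(x)
```

On a separable cost the low-level passes already find the optimum; the value of the hierarchy shows up when groups of variables interact, which is the situation the abstract targets.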

A software package is presented that can be employed for any 3D imaging modality: X-ray tomography, emission tomography, or magnetic resonance imaging. This system uses a hierarchical data structure, named Octree, that naturally allows a multi-resolution approach. The well-known problems of such an indeterministic representation, especially neighbor finding, have been solved. Several volume processing algorithms have been developed, using these techniques and an optimal data storage for the Octree. A parallel implementation was chosen that is compatible with the constraints of the Octree base and the various algorithms. (authors) 4 refs., 3 figs., 1 tab
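As a minimal illustration of the octree idea (a generic sketch, not the package described above), the following represents a cubic volume by recursively splitting it into eight octants and storing a single value per uniform leaf. The coordinate convention and the `min_size` parameter are assumptions of the example.

```python
class OctreeNode:
    """Node in an octree over a cubic volume; leaves store a single value."""
    def __init__(self, value=None):
        self.value = value      # uniform voxel value (meaningful for leaves)
        self.children = None    # list of 8 children, or None for a leaf

    def is_leaf(self):
        return self.children is None

def insert(node, x, y, z, size, value, min_size=1):
    """Set the voxel at (x, y, z) inside a cube of edge `size` to `value`."""
    if size <= min_size:
        node.value = value
        return
    if node.is_leaf():
        # split a uniform leaf into 8 children that inherit its value
        node.children = [OctreeNode(node.value) for _ in range(8)]
    half = size // 2
    index = (x >= half) * 4 + (y >= half) * 2 + (z >= half)
    insert(node.children[index], x % half, y % half, z % half, half, value, min_size)

def query(node, x, y, z, size, min_size=1):
    """Return the voxel value at (x, y, z), descending to leaf resolution."""
    if node.is_leaf() or size <= min_size:
        return node.value
    half = size // 2
    index = (x >= half) * 4 + (y >= half) * 2 + (z >= half)
    return query(node.children[index], x % half, y % half, z % half, half, min_size)
```

Uniform regions stay as single leaves, which is where the multi-resolution saving of the octree representation comes from.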

The unsupervised detection of hierarchical structures is a major topic in unsupervised learning and one of the key questions in data analysis and representation. We propose a novel algorithm for the problem of learning decision trees for data clustering and related problems. In contrast to many other methods based on successive tree growing and pruning, we propose an objective function for tree evaluation and derive a non-greedy technique for tree growing. Applying the principles of maximum entropy and minimum cross entropy, a deterministic annealing algorithm is derived in a mean-field approximation. This technique allows us to canonically superimpose tree structures and to fit parameters to averaged or 'fuzzified' trees.

The hierarchical nature of the bounding volume structure complicates an efficient implementation on massively parallel architectures such as modern graphics cards, and we therefore propose a hybrid method where only box and triangle overlap tests and transformations are offloaded to the graphics card. When…

It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science. PMID:22977157

A Hilbert space in M dimensions is shown explicitly to accommodate representations that reflect the decomposition of M into prime numbers. Representations that exhibit the factorization of M into two relatively prime numbers, namely the kq representation (Zak J 1970 Phys. Today 23 51) and the related q1q2 representations (together with their conjugates), are analysed, as well as a representation that exhibits the complete factorization of M. In this latter representation each quantum number varies in a subspace that is associated with one of the prime numbers that make up M.

The Internet of Things (IoT) generates large amounts of high-dimensional sensor data. The processing of high-dimensional data (e.g., data visualization and data classification) is very difficult, so it requires excellent subspace learning algorithms that learn a latent subspace preserving the intrinsic structure of the high-dimensional data and abandon the least useful information for the subsequent processing. In this context, many subspace learning algorithms have been presented. However, in the process of transforming high-dimensional data into a low-dimensional space, the huge difference between the sum of inter-class distances and the sum of intra-class distances for distinct data may cause a bias problem, meaning that the impact of the intra-class distance is overwhelmed. To address this problem, we propose a novel algorithm called Hierarchical Discriminant Analysis (HDA). It minimizes the sum of intra-class distances first, and then maximizes the sum of inter-class distances. This balances the bias from the inter-class and intra-class distances to achieve better performance. Extensive experiments are conducted on several benchmark face datasets. The results reveal that HDA obtains better performance than other dimensionality reduction algorithms.
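The two-stage idea (suppress intra-class scatter first, then maximise inter-class scatter) can be sketched with a whitening-then-projection construction. This is an illustrative reading of such two-step discriminant analyses, not the authors' exact HDA algorithm, and the `eps` regulariser is an added assumption.

```python
import numpy as np

def two_stage_projection(X, y, n_components=1, eps=1e-8):
    """Stage 1: whiten w.r.t. the within-class scatter S_w (minimises
    intra-class spread). Stage 2: maximise the between-class scatter S_b
    in the whitened space."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # stage 1: whitening transform that suppresses intra-class scatter
    evals, evecs = np.linalg.eigh(Sw)
    W1 = evecs @ np.diag(1.0 / np.sqrt(evals + eps))
    # stage 2: directions of maximal inter-class scatter after whitening
    evals2, evecs2 = np.linalg.eigh(W1.T @ Sb @ W1)
    W2 = evecs2[:, ::-1][:, :n_components]  # largest eigenvalues first
    return W1 @ W2
```

For two classes separated along one axis with noise along another, the learned direction aligns with the discriminative axis rather than the noisy one.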

Elucidation of metabolic networks for an increasing number of organisms reveals that even small networks can contain thousands of reactions and chemical species. The intimate connectivity between components complicates their decomposition into biologically meaningful sub-networks. Moreover, the traditional higher-order representation of metabolic networks as metabolic pathways suffers from the lack of a rigorous definition, yielding pathways of disparate content and size. We introduce a hierarchical representation that emphasizes the gross organization of metabolic networks into largely independent pathways and sub-systems at several levels of independence. The approach highlights the coupling of different pathways and the shared compounds responsible for those couplings. By assessing our results on Escherichia coli (E. coli) metabolic reactions (Genetic Circuits Research Group, University of California, San Diego, http://gcrg.ucsd.edu/organisms/ecoli.html, 'model v 1.01. reactions') against accepted biochemical annotations, we provide the first systematic synopsis of an organism's metabolism. Comparison with operons of E. coli shows that low-level clusters are reflected in genome organization and gene regulation. Source code, data sets and supplementary information are available at http://www.mas.ecp.fr/labo/equipe/gagneur/hierarchy/hierarchy.html

The present invention provides hierarchical assemblies of a block copolymer, a bifunctional linking compound and a nanoparticle. The block copolymers form one micro-domain and the nanoparticles another micro-domain.

Hierarchical design draws inspiration from analysis of biological materials and has opened new possibilities for enhancing performance and enabling new functionalities and extraordinary properties. With the development of nanotechnology, the necessary technological requirements for the manufactur...

In biological networks of molecular interactions in a cell, network motifs that are biologically relevant are also functionally coherent, or form functional modules. These functionally coherent modules combine in a hierarchical manner into larger, less cohesive subsystems, thus revealing one of the essential design principles of system-level cellular organization and function: hierarchical modularity. Arguably, hierarchical modularity has not been explicitly taken into consideration by most, if not all, functional annotation systems. As a result, the existing methods would often fail to assign a statistically significant functional coherence score to biologically relevant molecular machines. We developed a methodology for hierarchical functional annotation. Given the hierarchical taxonomy of functional concepts (e.g., Gene Ontology) and the association of individual genes or proteins with these concepts (e.g., GO terms), our method will assign a Hierarchical Modularity Score (HMS) to each node in the hierarchy of functional modules; the HMS score and its p-value measure the functional coherence of each module in the hierarchy. While existing methods annotate each module with a set of "enriched" functional terms in a bag of genes, our complementary method provides the hierarchical functional annotation of the modules and their hierarchically organized components. A hierarchical organization of functional modules often comes as a by-product of cluster analysis of gene expression data or protein interaction data. Otherwise, our method will automatically build such a hierarchy by directly incorporating the functional taxonomy information into the hierarchy search process and by allowing multi-functional genes to be part of more than one component in the hierarchy. In addition, its underlying HMS scoring metric ensures that the functional specificity of the terms across different levels of the hierarchical taxonomy is properly treated. We have evaluated our…

This paper addresses the problem of perception and representation of space for a mobile agent. A probabilistic hierarchical framework is suggested as a solution to this problem. The method proposed is a combination of probabilistic belief with "Object Graph Models" (OGM). The world is viewed from a topological perspective, in terms of objects and relationships between them. The hierarchical representation that we propose permits an efficient and reliable modeling of the information that the mobile agent would perceive from its environment. The integration of both navigational and interactional capabilities through efficient representation is also addressed. Experiments on a set of images taken from the real world that validate the approach are reported. This framework draws on the general understanding of human cognition and perception and contributes towards the overall efforts to build cognitive robot companions.

Nature eloquently utilizes hierarchical structures to form the world around us. Applying the hierarchical architecture paradigm to smart materials can provide a basis for a new genre of actuators which produce complex actuation motions. One promising example of cellular architecture—active knits—provides complex three-dimensional distributed actuation motions with expanded operational performance through a hierarchically organized structure. The hierarchical structure arranges a single fiber of active material, such as shape memory alloys (SMAs), into a cellular network of interlacing adjacent loops according to a knitting grid. This paper defines a four-level hierarchical classification of knit structures: the basic knit loop, knit patterns, grid patterns, and restructured grids. Each level of the hierarchy provides increased architectural complexity, resulting in expanded kinematic actuation motions of active knits. The range of kinematic actuation motions are displayed through experimental examples of different SMA active knits. The results from this paper illustrate and classify the ways in which each level of the hierarchical knit architecture leverages the performance of the base smart material to generate unique actuation motions, providing necessary insight to best exploit this new actuation paradigm. (paper)

Background: In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Results: For purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR) complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or, more generally, to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness, we provide an explanation of the entire HNauty algorithm. Conclusions: Hierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for…

An extensive program of research in the past 2 decades has focused on the role of modal sensory, motor, and affective brain systems in storing and retrieving concept knowledge. This focus has led in some circles to an underestimation of the need for more abstract, supramodal conceptual representations in semantic cognition. Evidence for supramodal processing comes from neuroimaging work documenting a large, well-defined cortical network that responds to meaningful stimuli regardless of modal content. The nodes in this network correspond to high-level "convergence zones" that receive broadly crossmodal input and presumably process crossmodal conjunctions. It is proposed that highly conjunctive representations are needed for several critical functions, including capturing conceptual similarity structure, enabling thematic associative relationships independent of conceptual similarity, and providing efficient "chunking" of concept representations for a range of higher order tasks that require concepts to be configured as situations. These hypothesized functions account for a wide range of neuroimaging results showing modulation of the supramodal convergence zone network by associative strength, lexicality, familiarity, imageability, frequency, and semantic compositionality. The evidence supports a hierarchical model of knowledge representation in which modal systems provide a mechanism for concept acquisition and serve to ground individual concepts in external reality, whereas broadly conjunctive, supramodal representations play an equally important role in concept association and situation knowledge.

The paper presents a new compositional hierarchical model for robust music transcription. Its main features are unsupervised learning of a hierarchical representation of input data; transparency, which enables insights into the learned representation; and robustness and speed, which make it suitable for real-world and real-time use. The model consists of multiple layers, each composed of a number of parts. The hierarchical nature of the model corresponds well to hierarchical structures in music. The parts in lower layers correspond to low-level concepts (e.g. tone partials), while the parts in higher layers combine lower-level representations into more complex concepts (tones, chords). The layers are learned in an unsupervised manner from music signals. Parts in each layer are compositions of parts from previous layers, with statistical co-occurrences as the driving force of the learning process. In the paper, we present the model's structure and compare it to other hierarchical approaches in the field of music information retrieval. We evaluate the model's performance on multiple fundamental frequency estimation. Finally, we elaborate on extensions of the model towards other music information retrieval tasks.

Previous neuroimaging studies have demonstrated a hierarchical functional structure of the frontal cortices of the human brain, but the temporal course and the electrophysiological signature of this hierarchical representation remain unaddressed. In the present study, twenty-one volunteers were asked to perform a nested cue-target task while their scalp potentials were recorded. The results showed that: (1) in comparison with the lower-level hierarchical targets, the higher-level targets elicited a larger N2 component (220–350 ms) at the frontal sites, and a smaller P3 component (350–500 ms) across the frontal and parietal sites; (2) conflict-related negativity (non-target minus target) was greater for the lower-level hierarchy than for the higher-level one, reflecting a more intensive process of conflict monitoring at the final step of target detection. These results imply that decision making, context updating, and conflict monitoring differ among different hierarchical levels of abstraction. PMID:27561989

This paper reviews work on the representation of knowledge from within psychology and artificial intelligence. The work covers the nature of representation, the distinction between the represented world and the representing world, and significant issues concerned with propositional, analogical, and superpositional representations. Specific topics…

This paper proposes a novel XML-based system for retrieval of presentation slides to address the growing data mining needs in presentation archives for educational and scholarly settings. In particular, contextual information, such as structural and formatting features, is extracted from the open-format XML representation of presentation slides. In response to a textual user query, each extracted feature is used to compute a fuzzy relevance score for each slide in the database. The fuzzy scores from the various features are then combined through a hierarchical scheme to generate a single relevance score per slide. Various fusion operators and their properties are examined with respect to their effect on retrieval performance. Experimental results indicate a significant increase in retrieval performance measured in terms of precision-recall. The improvements are attributed to both the incorporation of the contextual features and the hierarchical feature combination scheme.
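The hierarchical combination of per-feature fuzzy scores can be sketched as a two-level fusion (the operators below are standard illustrative choices, not necessarily the paper's exact operators): scores are first fused within each feature group, then the group scores are fused into one relevance score per slide.

```python
def fuse(scores, operator="mean"):
    """Combine fuzzy relevance scores in [0, 1] into a single score."""
    if operator == "mean":
        return sum(scores) / len(scores)
    if operator == "min":   # pessimistic, t-norm-style fusion
        return min(scores)
    if operator == "max":   # optimistic, t-conorm-style fusion
        return max(scores)
    raise ValueError(operator)

def hierarchical_fusion(grouped_scores, inner="min", outer="mean"):
    """Two-level scheme: fuse within each feature group, then across groups."""
    return fuse([fuse(g, inner) for g in grouped_scores], outer)
```

The choice of inner and outer operators controls how strict the retrieval is: a `min` inner operator requires all features in a group to agree, while `mean` at the outer level softens disagreement between groups.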

Hierarchical matrix approximations are a promising tool for approximating low-rank matrices, given the compactness of their representation and the economy of the operations between them. Integral and differential operators have been the major applications of this technology, but it can be applied in other areas where low-rank properties exist. Such is the case of the Block Cyclic Reduction algorithm, which is used as a direct solver for the constant-coefficient Poisson equation. We explore the variable-coefficient case, also using Block Cyclic Reduction, with the addition of hierarchical matrices to represent matrix blocks, hence improving the otherwise O(N²) algorithm into an efficient O(N) algorithm.
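The building block here, classical (scalar) cyclic reduction for a tridiagonal system, can be sketched as follows. This illustrates the reduction step itself, not the hierarchical-matrix variant; the zero Dirichlet boundary convention and the size restriction are assumptions of the example.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by cyclic reduction.
    Row i reads: a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i],
    with a[0] = c[-1] = 0 and len(b) == 2**k - 1."""
    n = len(b)
    if n == 1:
        return np.array([d[0] / b[0]])
    odd = np.arange(1, n, 2)
    alpha = a[odd] / b[odd - 1]
    gamma = c[odd] / b[odd + 1]
    # eliminate the even-indexed unknowns: a half-sized system on odd indices
    x_odd = cyclic_reduction(
        -alpha * a[odd - 1],
        b[odd] - alpha * c[odd - 1] - gamma * a[odd + 1],
        -gamma * c[odd + 1],
        d[odd] - alpha * d[odd - 1] - gamma * d[odd + 1])
    x = np.zeros(n)
    x[odd] = x_odd
    # back-substitute the even-indexed unknowns (zero boundaries assumed)
    xp = np.concatenate(([0.0], x, [0.0]))
    even = np.arange(0, n, 2)
    x[even] = (d[even] - a[even] * xp[even] - c[even] * xp[even + 2]) / b[even]
    return x
```

In the block version each scalar entry becomes a matrix block, and the divisions become block inverses; representing those blocks as hierarchical matrices is what keeps the variable-coefficient solver efficient.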

The Volume Averaging Technique (VAT) is employed in order to model the heat exchanger cross-flow as a porous media flow. As the averaging of the transport equations leads to a closure problem, separate relations are introduced to model interphase momentum and heat transfer between the fluid flow and the solid structure. Hierarchic modeling is used to calculate the local drag coefficient C_d as a function of the Reynolds number Re_h. For that purpose a separate model of the REV is built and DNS of the flow through the REV is performed. The local values of the heat transfer coefficient h are obtained from the available literature. The geometry of the simulation domain and the boundary conditions follow the geometry of the experimental test section used at U.C.L.A. The calculated temperature fields reveal that the geometry with the denser pin-fin arrangement (HX1) heats the fluid flow faster. The temperature field in HX2 exhibits the formation of a thermal boundary layer between pin-fins, which has a significant role in the overall thermal performance of the heat exchanger. Although the presented discrepancies of the whole-section drag coefficient C_d are large, we believe that hierarchic modeling is an appropriate strategy for the calculation of complex transport phenomena in heat exchanger geometries. (author)

The 'social brain hypothesis' for the evolution of large brains in primates has led to evidence for the coevolution of neocortical size and social group sizes, suggesting that there is a cognitive constraint on group size that depends, in some way, on the volume of neural material available for processing and synthesizing information on social relationships. More recently, work on both human and non-human primates has suggested that social groups are often hierarchically structured. We combine data on human grouping patterns in a comprehensive and systematic study. Using fractal analysis, we identify, with high statistical confidence, a discrete hierarchy of group sizes with a preferred scaling ratio close to three: rather than a single or a continuous spectrum of group sizes, humans spontaneously form groups of preferred sizes organized in a geometrical series approximating 3-5, 9-15, 30-45, etc. Such discrete scale invariance could be related to that identified in signatures of herding behaviour in financial markets and might reflect a hierarchical processing of social nearness by human brains.
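The reported discrete scale invariance amounts to a geometric series of preferred group sizes; a toy sketch (base size and ratio are assumed round numbers for illustration, not the fitted values):

```python
def group_size_hierarchy(base=5, ratio=3, levels=4):
    """Geometric series of preferred group sizes with scaling ratio ~3
    (e.g. ~5, ~15, ~45, ... as in the reported hierarchy)."""
    return [round(base * ratio ** k) for k in range(levels)]

def scaling_ratios(sizes):
    """Successive ratios between adjacent hierarchical levels; a discrete
    hierarchy shows up as ratios clustering around a preferred value."""
    return [b / a for a, b in zip(sizes, sizes[1:])]
```

Given observed grouping data, checking whether `scaling_ratios` clusters around three is the simplest diagnostic of the kind of discrete hierarchy the study identifies via fractal analysis.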

Materials in biology span all the scales from Angstroms to meters and typically consist of complex hierarchical assemblies of simple building blocks. Here we describe an application of category theory to describe structural and resulting functional properties of biological protein materials by developing so-called ologs. An olog is like a "concept web" or "semantic network", except that it follows a rigorous mathematical formulation based on category theory. This key difference ensures that an olog is unambiguous, highly adaptable to evolution and change, and suitable for sharing concepts with other ologs. We consider simple cases of beta-helical and amyloid-like protein filaments subjected to axial extension and develop an olog representation of their structural and resulting mechanical properties. We also construct a representation of a social network in which people send text messages to their nearest neighbors and act as a team to perform a task. We show that the olog for the protein and the olog for the social network feature identical category-theoretic representations, and we proceed to precisely explicate the analogy, or isomorphism, between them. The examples presented here demonstrate that the intrinsic nature of a complex system, which in particular includes a precise relationship between structure and function at different hierarchical levels, can be effectively represented by an olog. This, in turn, allows for comparative studies between disparate materials or fields of application, and results in novel approaches to derive functionality in the design of de novo hierarchical systems. We discuss opportunities and challenges associated with the description of complex biological materials by using ologs as a powerful tool for analysis and design in the context of materiomics, and we present the potential impact of this approach for engineering, life sciences, and medicine.

Objective: To learn the social representations of ergonomic risk held by dental students. Methodology: This exploratory study, grounded in the Theory of Social Representations, was conducted with 64 dental students of an educational institution by means of interviews. The data were processed in Alceste 4.8, and lexical analysis was done by descending hierarchical classification. Results: Two categories emerged: knowledge about exposure to ergonomic risk, and the attitude of students towards preventing and treating injuries caused by repetitive motion. For the students, ergonomic risk is related to attitude in the dental office. Conclusion: Prevention of ergonomic risk has not been incorporated by dental students as a set of measures necessary for their own health and that of their patients, to prevent ergonomic hazards that can result in harm to the patient caused by work-related musculoskeletal disorders, which is reflected in a lower-quality practice.

…reveals that deliberate change is indeed achievable in a non-hierarchical collaborative OSS community context. However, it presupposes the presence and active involvement of informal change agents. The paper identifies and specifies four key drivers for change agents' influence. Originality/value: The findings contribute to organisational analysis by providing a deeper understanding of the importance of leadership in making deliberate change possible in non-hierarchical settings. It points to the importance of "change-by-conviction", essentially based on voluntary behaviour. This can open the door…

Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.

Reverse inference, or 'brain reading', is a recent paradigm for analyzing functional magnetic resonance imaging (fMRI) data, based on pattern recognition and statistical learning. By predicting some cognitive variables related to brain activation maps, this approach aims at decoding brain activity. Reverse inference takes into account the multivariate information between voxels and is currently the only way to assess how precisely some cognitive information is encoded by the activity of neural populations within the whole brain. However, it relies on a prediction function that is plagued by the curse of dimensionality, since there are far more features than samples, i.e., more voxels than fMRI volumes. To address this problem, different methods have been proposed, such as, among others, univariate feature selection, feature agglomeration and regularization techniques. In this paper, we consider a sparse hierarchical structured regularization. Specifically, the penalization we use is constructed from a tree that is obtained by spatially-constrained agglomerative clustering. This approach encodes the spatial structure of the data at different scales into the regularization, which makes the overall prediction procedure more robust to inter-subject variability. The regularization used induces the selection of spatially coherent predictive brain regions simultaneously at different scales. We test our algorithm on real data acquired to study the mental representation of objects, and we show that the proposed algorithm not only delineates meaningful brain regions but yields as well better prediction accuracy than reference methods. (authors)

Representing computer applications and their use is an important aspect of design. In various ways, designers need to externalize design proposals and present them to other designers, users, or managers. This article deals with understanding design representations and the work they do in design. The article is based on a series of theoretical concepts coming out of studies of scientific and other work practices and on practical experiences from design of computer applications. The article presents alternatives to the idea that design representations are mappings of present or future work situations and computer applications. It suggests that representations are primarily containers of ideas and that representation is situated at the same time as representations are crossing boundaries between various design and use activities. As such, representations should be carriers of their own contexts regarding...

Several networks occurring in real life have modular structures arranged in a hierarchical fashion. In this paper, we propose a model for such networks using a stochastic generation method. Using this model we show that the scaling relation between the clustering and the degree of the nodes is not a necessary...

Although there has been substantial research examining the effects of microaggressions in the public sphere, there has been little research that examines microaggressions in the workplace. This study explores the types of microaggressions that affect employees at universities. We coin the term "hierarchical microaggression" to represent…

Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched from face recognition to object recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.

The present invention is a structure, a method of making, and a method of use for novel macroscopic hierarchically structured, nitrogen-doped, nano-porous carbon membranes (HNDCMs) with an asymmetric and hierarchical pore architecture that can be produced...

A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

We introduce embedded data representations, the use of visual and physical representations of data that are deeply integrated with the physical spaces, objects, and entities to which the data refers. Technologies like lightweight wireless displays, mixed reality hardware, and autonomous vehicles...

Very roughly speaking, representation theory studies symmetry in linear spaces. It is a beautiful mathematical subject which has many applications, ranging from number theory and combinatorics to geometry, probability theory, quantum mechanics, and quantum field theory. The goal of this book is to give a "holistic" introduction to representation theory, presenting it as a unified subject which studies representations of associative algebras and treating the representation theories of groups, Lie algebras, and quivers as special cases. Using this approach, the book covers a number of standard topics in the representation theories of these structures. Theoretical material in the book is supplemented by many problems and exercises which touch upon a lot of additional topics; the more difficult exercises are provided with hints. The book is designed as a textbook for advanced undergraduate and beginning graduate students. It should be accessible to students with a strong background in linear algebra and a basic k...

Organizing images into semantic categories can be extremely useful for content-based image retrieval and image annotation. Grouping images into semantic classes is a difficult problem, however. Image classification attempts to solve this hard problem by using low-level image features. In this paper, we propose a method for hierarchical classification of images via supervised learning. This scheme relies on using a good low-level feature and subsequently performing feature-space reconfiguration using singular value decomposition to reduce noise and dimensionality. We use the training data to obtain a hierarchical classification tree that can be used to categorize new images. Our experimental results suggest that this scheme not only performs better than standard nearest-neighbor techniques, but also has both storage and computational advantages.
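A minimal sketch of the feature-space reconfiguration step: truncated SVD of the training features, followed by a nearest-centroid rule standing in for the full hierarchical classification tree. The data, dimensions, and rank below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy low-level features for 3 classes of "images", with additive noise.
n_per_class, dim = 30, 64
centers = rng.standard_normal((3, dim)) * 3.0
X = np.vstack([c + rng.standard_normal((n_per_class, dim)) for c in centers])
labels = np.repeat(np.arange(3), n_per_class)

# Feature-space reconfiguration: keep the leading singular directions of the
# centered training matrix, discarding noise and reducing dimensionality.
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 5

def project(A):
    # Map raw features into the k-dimensional reconfigured space.
    return (A - mean) @ Vt[:k].T

Xr = project(X)

def classify(query):
    # Nearest class centroid in the reduced space (flat stand-in for the tree).
    q = project(query[None, :])[0]
    cents = np.array([Xr[labels == c].mean(axis=0) for c in range(3)])
    return int(np.argmin(np.linalg.norm(cents - q, axis=1)))

pred = classify(centers[2])   # classify a noiseless class-2 prototype
```

Storing only the k = 5 reduced coordinates per image, rather than 64 raw features, is where the storage and computational advantages mentioned above come from.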

This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists ...
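The key observation behind hierarchical matrices, that off-diagonal blocks coupling well-separated clusters are numerically low-rank, can be checked directly on a small kernel block. The kernel, cluster geometry, and tolerance below are illustrative choices, not taken from the monograph.

```python
import numpy as np

# Kernel block for two well-separated 1-D point clusters. Off-diagonal blocks
# of smooth kernels like 1/|x - y| are numerically low-rank, which is exactly
# what hierarchical-matrix formats exploit.
x = np.linspace(0.0, 1.0, 200)       # source cluster
y = np.linspace(10.0, 11.0, 200)     # well-separated target cluster
B = 1.0 / np.abs(x[:, None] - y[None, :])   # 200 x 200 off-diagonal block

U, s, Vt = np.linalg.svd(B)
r = int(np.sum(s > 1e-10 * s[0]))    # numerical rank at relative tol 1e-10

# Store the block as two thin factors instead of a dense 200 x 200 array:
Ur = U[:, :r] * s[:r]
approx = Ur @ Vt[:r]
rel_err = np.linalg.norm(B - approx) / np.linalg.norm(B)
dense_cost, lowrank_cost = B.size, Ur.size + Vt[:r].size
```

The singular values decay geometrically with the separation-to-diameter ratio, so storage drops from O(n²) to O(nr) per block while the relative error stays at the chosen tolerance; applied recursively over a block tree, this yields the almost-linear cost the text describes.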

Extensions of the C*-algebra theory for covariant representations to nuclear *-algebras are considered. Irreducible covariant representations are essentially unique, an invariant state produces a covariant representation with stable vacuum, and the usual relation between ergodic states and covariant representations holds. There exist construction and decomposition theorems and a possible relation between derivations and covariant representations.

This book addresses a broad spectrum of areas in both hybrid materials and hierarchical composites, including recent development of processing technologies, structural designs, modern computer simulation techniques, and the relationships between the processing-structure-property-performance. Each topic is introduced at length with numerous and detailed examples and over 150 illustrations. In addition, the authors present a method of categorizing these materials, so that representative examples of all material classes are discussed.

The multi-level structure of urban space, the multitude of subjects transforming it while pursuing asymmetric interests, and the multilevel system of institutions regulating interaction in the "population – business – government – public organizations" system determine the use of a hierarchical approach to the analysis of urban space. The article offers theoretical justification for using this approach to study correlations and peculiarities of interaction in urban space as an intricately organized system...

Summary Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990
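The general shape of such a Monte Carlo test can be sketched as follows: compute a statistic of the top dendrogram split, then compare it against the same statistic on homogeneous Gaussian data matched to the sample mean and covariance. The statistic and null model here are simplified stand-ins for the paper's sequential, family-wise-error-controlled procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(7)

# Two genuine clusters in 5 dimensions, well separated:
X = np.vstack([rng.normal(0.0, 1.0, (40, 5)),
               rng.normal(8.0, 1.0, (40, 5))])

def top_split_ratio(data):
    # Ratio of the last two merge heights in a Ward dendrogram; a large
    # ratio suggests the top-level split reflects genuine structure.
    Z = linkage(data, method="ward")
    return Z[-1, 2] / Z[-2, 2]

observed = top_split_ratio(X)

# Monte Carlo null: homogeneous Gaussian data matched to the sample mean
# and covariance, i.e. data with no cluster structure.
n_sim = 200
null = np.array([
    top_split_ratio(rng.multivariate_normal(X.mean(axis=0), np.cov(X.T),
                                            size=len(X)))
    for _ in range(n_sim)
])
p_value = (1 + np.sum(null >= observed)) / (1 + n_sim)
```

Matching the null to the sample covariance is important: an elongated but unimodal Gaussian also produces a large top merge, so an unmatched null would be anti-conservative, which is one of the issues the sequential procedure in the paper is designed to handle across all layers of the tree.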

Provides the most complete presentation of boundary representation solid modelling yet published. Offers basic reference information for software developers, application developers, and users. Includes a historical perspective as well as giving a background for modern research.

We present a concept for using procedural techniques to represent media. Procedural methods allow us to represent digital media (2D images, 3D environments, etc.) with very little information and to render it photorealistically. Since not all kinds of content can be created procedurally, traditional media representations (bitmaps, polygons, etc.) must be used as well. We have adopted an object-based media representation where an object can be represented either with a procedure or with its traditional...

A new set of scalar and vector tetrahedral finite elements is presented. The elements are hierarchical, allowing mixing of polynomial orders; scalar orders up to 3 and vector orders up to 2 are defined. The vector elements impose tangential continuity on the field but not normal continuity, making them suitable for representing the vector electric or magnetic field. Further, the scalar and vector elements are such that they can easily be used in the same mesh, a requirement of many quasi-static formulations. Results are presented for two 50 Hz problems: the Bath Cube and TEAM Problem 7.

Routine actions are commonly assumed to be controlled by hierarchically organized processes and representations. In the domain of typing theories, word-level information is assumed to activate the constituent keystrokes required to type each letter in a word. We tested this assumption directly using a novel single-letter probe technique. Subjects…

The use of computers is less and less restricted to numerical and data processing. On the other hand, current software mostly contains algorithms on universes with complete information. The paper discusses a different family of programs: expert systems are designed as aids in human reasoning in various specific areas. Symbolic knowledge manipulation, uncertain and incomplete deduction capabilities, natural communication with humans in non-procedural ways are their essential features. This part is mainly a reflection and a debate about the various modes of acquisition and representation of human knowledge. 32 references.

Nature provides us with many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture. Although a number of methods have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated and natural graphs extracted from digitized images of dicotyledonous leaves and animal vasculature. We calculate various metrics on the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.

River corridors exhibit landforms nested within landforms repeatedly down spatial scales. In this study we developed, tested, and implemented a new way to create river classifications by mapping domains of fluvial processes with respect to the hierarchical organization of topographic complexity that drives fluvial dynamism. We tested this approach on flow convergence routing, a morphodynamic mechanism with different states depending on the structure of nondimensional topographic variability. Five nondimensional landform types with unique functionality (nozzle, wide bar, normal channel, constricted pool, and oversized) represent this process at any flow. When this typology is nested at base flow, bankfull, and floodprone scales it creates a system with up to 125 functional types. This shows how a single mechanism produces complex dynamism via nesting. Given the classification, we answered nine specific scientific questions to investigate the abundance, sequencing, and hierarchical nesting of these new landform types using a 35-km gravel/cobble river segment of the Yuba River in California. The nested structure of flow convergence routing landforms found in this study revealed that bankfull landforms are nested within specific floodprone valley landform types, and these types control bankfull morphodynamics during moderate to large floods. As a result, this study calls into question the prevailing theory that the bankfull channel of a gravel/cobble river is controlled by in-channel, bankfull, and/or small flood flows. Such flows are too small to initiate widespread sediment transport in a gravel/cobble river with topographic complexity.

The structure of interactions in most animal and human societies can be best represented by complex hierarchical networks. In order to maintain close-to-optimal function both stability and adaptability are necessary. Here we investigate the stability of hierarchical networks that emerge from the simulations of an organization type with an efficiency function reminiscent of the Hamiltonian of spin glasses. Using this quantitative approach we find a number of expected (from everyday observations) and highly non-trivial results for the obtained locally optimal networks, including, for example: (i) stability increases with growing efficiency and level of hierarchy; (ii) the same perturbation results in a larger change for more efficient states; (iii) networks with a lower level of hierarchy become more efficient after perturbation; (iv) due to the huge number of possible optimal states only a small fraction of them exhibit resilience and, finally, (v) ‘attacks’ targeting the nodes selectively (regarding their position in the hierarchy) can result in paradoxical outcomes.

Intelligent (or smart) materials are increasingly becoming key materials for use in actuators and sensors. If an intelligent material is used as a sensor, it can be embedded in a variety of structures, functioning as a health monitoring system that extends their life with high reliability. If an intelligent material is used as an active material in an actuator, it plays a key role in producing dynamic movement of the actuator under a set of stimuli. This talk covers two different active materials in actuators: (1) piezoelectric laminate with FGM microstructure, and (2) ferromagnetic shape memory alloy (FSMA). The advantage of the FGM piezo laminate is enhanced fatigue life while maintaining large bending displacement, while that of the FSMA is fast actuation with a large force and stroke capability. Hierarchical modeling of these active materials is a key design step in optimizing their microstructure to enhance performance. I will discuss briefly hierarchical modeling of the two active materials. For the FGM piezo laminate, we use both a micromechanical model and laminate theory, while for the FSMA, modeling that interfaces nano-structure, microstructure, and macro-behavior is discussed. (author)

This volume explains how the recent advances in wavelet analysis provide new means for multiresolution analysis and describes its wide array of powerful tools. The book covers variations of the windowed Fourier transform, constructions of special waveforms suitable for specific tasks, the use of redundant representations in reconstruction and enhancement, applications of efficient numerical compression as a tool for fast numerical analysis, and approximation properties of various waveforms in different contexts.

An essential requirement for the representation of functional patterns in complex neural networks, such as the mammalian cerebral cortex, is the existence of stable regimes of network activation, typically arising from a limited parameter range. In this range of limited sustained activity (LSA), the activity of neural populations in the network persists between the extremes of either quickly dying out or activating the whole network. Hierarchical modular networks were previously found to show...

The purpose of this paper is to consider representations of frames {fk}k∈I in a Hilbert space ℋ of the form {fk}k∈I = {T^k f0}k∈I for a linear operator T; here the index set I is either ℤ or ℕ0. While a representation of this form is available under weak conditions on the frame, the analysis of the properties of the operator T requires more work. For example, it is a delicate issue to obtain a representation with a bounded operator, and the availability of such a representation not only depends on the frame considered as a set, but also on the chosen indexing. Using results from operator theory we show that by embedding the Hilbert space ℋ into a larger Hilbert space, we can always represent a frame via iterations of a bounded operator, composed with the orthogonal projection onto ℋ. The paper closes with a discussion of an open problem concerning representations of Gabor frames via iterations of a bounded operator.

School is one of the key settings for health education (HE). The objectives of this study are to assess primary school teachers' self-reported teaching practices in HE and to describe their representation concerning their role in HE. A quantitative study was conducted on a sample of primary school teachers (n = 626) in two French regions in order to analyze their practices and representations in HE. A hierarchical clustering dendrogram was computed on questions exploring representations of HE. Multiple linear regression analysis helped explain the motivation and self-perceived competency score. Three quarters of the teachers declare they work in HE. Only one third of them declare they work in a comprehensive HE perspective. The HE approach is often considered in terms of a specific unique curriculum intervention. Two thirds of the teachers say they work alone in HE; the other third associate other partners, mainly school health services. Parents are rarely (12%) involved in HE initiatives. It is essentially the practice of HE, teacher training and teachers' representation of HE that condition their motivation to develop HE. Teachers can take different approaches to HE. Teachers' representation of HE plays an important role in the development of HE activities: some teachers consider that HE is the mission of health professionals and parents. Our expectations of teacher involvement should be realistic, should take into account the representations of their role and the difficulties they encounter, and should be sustained by specific training.

This paper aims to add a reference for revealing spatial thinking. There are several definitions of spatial thinking, but it is not easy to define. We can start by discussing the concept and its basis, the forming of representations. Initially, the five senses capture natural phenomena and forward them to memory for processing. Abstraction plays a role in processing information into a concept. There are two types of representation, namely internal representation and external representation. The internal representation is also known as mental representation; this representation resides in the human mind. The external representation may include images, auditory and kinesthetic forms, which can be used to describe, explain and communicate the structure, operation, and function of an object as well as relationships. There are two main elements: representation properties and object relationships. These elements play a role in forming a representation.

As the centerpiece of the eighth T2M yearbook, the following interview about representations of mobility signals a new and exciting focus area for Mobility in History. In future issues we hope to include reviews that grapple more with how mobilities have been imagined and represented in the arts, literature, and film. Moreover, we hope the authors of future reviews will reflect on the ways they approached those representations. Such commentaries would provide valuable methodological insights, and we hope to begin that effort with this interview. We have asked four prominent mobility scholars to consider how they and their peers are currently confronting representations of mobility. This is particularly timely given the growing academic focus on practices, material mediation, and nonrepresentational theories, as well as on bodily reactions, emotions, and feelings that, according to those theories...

This article discusses the physiological genesis of representation, then illustrates its developments, especially in an evolutionary perspective, and shows how these are mainly a result of accidental circumstances rather than of a deliberate intention of improvement. In particular, it is argued that representation has behaved like a meme that achieved its own progressive evolution by coming into symbiosis with the different cultures in which it has spread, "unconsciously" using human work in this activity. Finally, it is shown how in this action geometry is a key element, linked to representation both to construct images using graphic operations and to erect buildings using concrete operations.

Over the past decade there has been a move amongst critical cartographers to rethink maps from a post-representational perspective – that is, a vantage point that does not privilege representational modes of thinking (wherein maps are assumed to be mirrors of the world) and does not automatically presume the ontological security of a map as a map, but rather rethinks and destabilises such notions. This new theorisation extends beyond the earlier critiques of Brian Harley (1989), which argued maps were social constructions. For Harley a map still conveyed the truth of a landscape, albeit its message was bound within the ideological frame of its creator. He thus advocated a strategy of identifying the politics of representation within maps in order to circumnavigate them (to reveal the truth lurking underneath), with the ontology of cartographic practice remaining unquestioned.

Introduction to Computer Data Representation introduces readers to the representation of data within computers. Starting from basic principles of number representation in computers, the book covers the representation of both integer and floating point numbers, and characters or text. It comprehensively explains the main techniques of computer arithmetic and logical manipulation. The book also features chapters covering the less usual topics of basic checksums and 'universal' or variable length representations for integers, with additional coverage of Gray codes, BCD codes and logarithmic representations.
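Among the topics listed, Gray codes are compact enough to illustrate directly: the binary-reflected Gray code maps each integer to a codeword such that consecutive integers differ in exactly one bit. The following sketch shows the standard encode/decode pair.

```python
def to_gray(n: int) -> int:
    # Binary-reflected Gray code: XOR the value with itself shifted right.
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    # Invert by accumulating XORs of successively shifted values.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [to_gray(i) for i in range(8)]   # → [0, 1, 3, 2, 6, 7, 5, 4]

# Each consecutive pair of codewords differs in exactly one bit:
one_bit_steps = all(bin(a ^ b).count("1") == 1
                    for a, b in zip(codes, codes[1:]))
```

The single-bit-change property is why Gray codes appear in rotary encoders and error-minimizing counters: a misread during a transition is off by at most one step.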

Representations are at the heart of artificial intelligence (AI). This book is devoted to the problem of representation discovery: how can an intelligent system construct representations from its experience? Representation discovery re-parameterizes the state space - prior to the application of information retrieval, machine learning, or optimization techniques - facilitating later inference processes by constructing new task-specific bases adapted to the state space geometry. This book presents a general approach to representation discovery using the framework of harmonic analysis, in particu

This work proposes a new system for the visual representation of projects, called the Visual Scheduling and Management System (VSMS), that displays the quantities of work, resources and cost. The system has a built-in hierarchical structure to provide...

Additive and Polynomial Representations deals with major representation theorems in which the qualitative structure is reflected as some polynomial function of one or more numerical functions defined on the basic entities. Examples are additive expressions of a single measure (such as the probability of disjoint events being the sum of their probabilities), and additive expressions of two measures (such as the logarithm of momentum being the sum of log mass and log velocity terms). The book describes the three basic procedures of fundamental measurement as the mathematical pivot, as the utiliz
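The logarithm example mentioned above can be checked in a few lines: the multiplicative law p = m·v becomes additive in the log representation, which is exactly the sense in which log momentum is "the sum of log mass and log velocity terms". The numeric values are arbitrary.

```python
import math

mass, velocity = 2.5, 4.0
momentum = mass * velocity   # multiplicative structure: p = m * v

# Additive representation: log momentum is the sum of log mass and log velocity.
log_momentum = math.log(mass) + math.log(velocity)
```

The same pattern underlies the probability example: for disjoint events the measure itself is already additive, so no transformation is needed.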

A systematic study of the spinor representation by means of the fermionic physical space is accomplished and implemented. The spinor representation space is shown to be constrained by the Fierz-Pauli-Kofink identities among the spinor bilinear covariants. A robust geometric and topological structure can be manifested from the spinor space, wherein the first and second homotopy groups play prominent roles on the underlying physical properties, associated to fermionic fields. The mapping that changes spinor fields classes is then exemplified, in an Einstein-Dirac system that provides the spacetime generated by a fermion. (orig.)

Lithium‐ion batteries (LIBs) have been widely used in the field of portable electric devices because of their high energy density and long cycling life. To further improve the performance of LIBs, it is of great importance to develop new electrode materials. Various transition metal oxides (TMOs) have been extensively investigated as electrode materials for LIBs. According to the reaction mechanism, there are mainly two kinds of TMOs, one is based on conversion reaction and the other is based on intercalation/deintercalation reaction. Recently, hierarchically nanostructured TMOs have become a hot research area in the field of LIBs. Hierarchical architecture can provide numerous accessible electroactive sites for redox reactions, shorten the diffusion distance of Li‐ion during the reaction, and accommodate volume expansion during cycling. With rapid research progress in this field, a timely account of this advanced technology is highly necessary. Here, the research progress on the synthesis methods, morphological characteristics, and electrochemical performances of hierarchically nanostructured TMOs for LIBs is summarized and discussed. Some relevant prospects are also proposed.

Understanding the characteristics of vessel traffic flow is crucial in maintaining navigation safety, efficiency, and overall waterway transportation management. Factors influencing vessel traffic flow possess diverse features such as hierarchy, uncertainty, nonlinearity, complexity, and interdependency. To reveal the impact mechanism of the factors influencing vessel traffic flow, a hierarchical model and a coupling model are proposed in this study based on the interpretative structural modeling method. The hierarchical model explains the hierarchies and relationships of the factors using a graph. The coupling model provides a quantitative method that explores the interaction effects of factors using a coupling coefficient. The coupling coefficient is obtained by determining the quantitative indicators of the factors and their weights. Thereafter, data obtained from the Port of Tianjin is used to verify the proposed coupling model. The results show that the hierarchical model of the factors influencing vessel traffic flow can explain the level, structure, and interaction effect of the factors; the coupling model is efficient in analyzing factors influencing traffic volumes. The proposed method can be used for analyzing increases in vessel traffic flow in a waterway transportation system.
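The coupling coefficient above, obtained from quantitative indicators and their weights, can be sketched as a weighted combination. The indicator values, weights, and the simple weighted-sum rule below are illustrative assumptions, not the paper's exact formulation:

```python
def coupling_coefficient(indicators, weights):
    """Weighted sum of normalised factor indicators, each in [0, 1]."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to one"
    return sum(x * w for x, w in zip(indicators, weights))

# e.g. normalised traffic density, weather severity, channel complexity
c = coupling_coefficient([0.6, 0.3, 0.8], [0.5, 0.2, 0.3])
```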

A solution to the problem of monitoring the radiation levels in and around a nuclear facility is presented in this paper. This is a special case of a large-scale, general-purpose data acquisition system with high reliability, availability and short maintenance time. The physical layout of the detectors in the plant and the strict control demands dictated a distributed and hierarchical system. The system is comprised of three levels, each of which contains modules. Level one contains the Control modules, which collect data from groups of detectors and execute emergency local control tasks. In level two are the Group controllers, which concentrate data from the Control modules and enable local display and communication. The system computer is in level three, enabling the plant operator to receive information from the detectors and execute control tasks. The described system was built and has been operating successfully for about two years.
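The three-level architecture can be sketched structurally. Class names, the alarm threshold, and the readings are illustrative; the sketch only mirrors the data flow described above (control modules poll detectors, group controllers concentrate, the system computer aggregates):

```python
ALARM_THRESHOLD = 100.0  # arbitrary units, illustrative

class ControlModule:
    """Level one: reads a group of detectors, flags local emergencies."""
    def __init__(self, detectors):
        self.detectors = detectors          # detector name -> reading

    def poll(self):
        readings = dict(self.detectors)
        alarms = [d for d, v in readings.items() if v > ALARM_THRESHOLD]
        return readings, alarms

class GroupController:
    """Level two: concentrates data from several control modules."""
    def __init__(self, modules):
        self.modules = modules

    def concentrate(self):
        merged, alarms = {}, []
        for m in self.modules:
            r, a = m.poll()
            merged.update(r)
            alarms.extend(a)
        return merged, alarms

def system_computer(groups):
    """Level three: aggregates all group controllers for the operator."""
    merged, alarms = {}, []
    for g in groups:
        r, a = g.concentrate()
        merged.update(r)
        alarms.extend(a)
    return merged, alarms
```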

This paper deals with hierarchical model predictive control (MPC) of smart grid systems. The design consists of a high-level MPC controller, a second level of so-called aggregators, which reduces the computational and communication-related load on the high-level control, and a lower level of autonomous consumers. The control system is tasked with balancing electric power production and consumption within the smart grid, and makes active use of the flexibility of a large number of power producing and/or power consuming units. The objective is to accommodate the load variation on the grid, arising on one hand from varying consumption, and on the other hand from natural variations in power production, e.g. from wind turbines. The high-level MPC problem is solved using quadratic optimisation, while the aggregator level can either involve quadratic optimisation or simple sorting-based min-max solutions…
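A minimal sketch of the two-level idea, under strongly simplifying assumptions: the high level solves a tiny equality-constrained quadratic programme (which has a closed-form KKT solution) to balance total production against the load, and an aggregator applies a simple min-max (clamping) rule per unit. Unit references, capacities, and numbers are invented for illustration:

```python
import numpy as np

def high_level_setpoint(reference, load):
    """Minimise sum((u - reference)**2) subject to sum(u) == load.
    This equality-constrained QP has the closed-form KKT solution below:
    every unit shares the total imbalance equally."""
    reference = np.asarray(reference, dtype=float)
    shift = (load - reference.sum()) / reference.size
    return reference + shift

def aggregator_dispatch(setpoints, lower, upper):
    """Lower level: clamp each unit to its capacity (min-max flavour)."""
    return np.clip(setpoints, lower, upper)

u = high_level_setpoint([10.0, 20.0, 30.0], load=66.0)
dispatched = aggregator_dispatch(u, lower=0.0, upper=25.0)
```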

We present a hierarchical transform that can be applied to Laplace-like differential equations such as Darcy's equation for single-phase flow in a porous medium. A finite-difference discretization scheme is used to set the equation in the form of an eigenvalue problem. Within the formalism suggested, the pressure field is decomposed into an average value and fluctuations of different kinds and at different scales. The application of the transform to the equation allows us to calculate the unknown pressure with a varying level of detail. A procedure is suggested to localize important features in the pressure field based only on the fine-scale permeability, and hence we develop a form of adaptive coarse graining. The formalism and method are described and demonstrated using two synthetic toy problems.
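The decomposition of a field into an average value plus fluctuations at successive scales can be illustrated with a Haar-style transform on a 1-D array; this is a generic multiscale sketch, not the paper's specific hierarchical transform for Darcy flow:

```python
import numpy as np

def haar_levels(field):
    """Split a 1-D field into a coarse average plus per-scale fluctuations.
    Each pass halves the resolution: pairwise averages carry on to the next
    level, pairwise differences are stored as that scale's fluctuations."""
    field = np.asarray(field, dtype=float)
    details = []
    while field.size > 1:
        avg = (field[0::2] + field[1::2]) / 2.0
        details.append((field[0::2] - field[1::2]) / 2.0)
        field = avg
    return field[0], details   # coarse average, fine-to-coarse fluctuations

coarse, details = haar_levels([1.0, 3.0, 2.0, 2.0])
```

Reconstructing with only the first few detail levels gives the "varying level of detail" view of the field mentioned above.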

The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene…

We propose an algorithm for learning hierarchical user interest models according to the Web pages users have browsed. In this algorithm, the interests of a user are represented as a tree, called a user interest tree, whose content and structure can change simultaneously to adapt to changes in the user's interests. This representation expresses a user's specific and general interests as a continuum. In some sense, specific interests correspond to short-term interests, while general interests correspond to long-term interests, so this representation more faithfully reflects the user's interests. The algorithm can automatically model a user's multiple interest domains, dynamically generate the interest models, and prune a user interest tree when the number of nodes in it exceeds a given threshold. Finally, we show experimental results from a Chinese Web site.
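A minimal sketch of such an interest tree, with pruning once the node count exceeds a budget. The weights and the drop-the-lightest-leaf pruning rule are illustrative assumptions, not the paper's algorithm:

```python
class InterestNode:
    """A node in a user interest tree: general interests near the root,
    specific interests at the leaves."""
    def __init__(self, topic, weight=1.0):
        self.topic, self.weight, self.children = topic, weight, []

    def add(self, topic, weight=1.0):
        child = InterestNode(topic, weight)
        self.children.append(child)
        return child

    def count(self):
        return 1 + sum(c.count() for c in self.children)

    def prune(self, max_nodes):
        """Drop the lightest leaves until the tree fits the budget."""
        while self.count() > max_nodes:
            parent, leaf = min(self._leaves(), key=lambda p: p[1].weight)
            parent.children.remove(leaf)

    def _leaves(self):
        for c in self.children:
            if c.children:
                yield from c._leaves()
            else:
                yield (self, c)
```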

In this chapter, we discuss the design of adaptive hierarchical organizations for multi-agent systems (MAS). Hierarchical organizations have a number of advantages such as their ability to handle complex problems and their scalability to large organizations. By introducing adaptivity in the

The development of modern theoretical cosmology is presented and some questionable assumptions of orthodox cosmology are pointed out. The author suggests that recent observations indicate that hierarchical clustering is a basic factor in cosmology. The implications of hierarchical models of the universe are considered. A bibliography is included.

…non-parametric generative model for hierarchical clustering of similarity based on multifurcating Gibbs fragmentation trees. This allows us to infer and display the posterior distribution of hierarchical structures that comply with the data. We demonstrate the utility of our method on synthetic data and data of functional…

…archical networks which are based on the classic scale-free hierarchical networks. Keywords: weighted hierarchical networks; weight-dependent walks; mean first passage time. The weighted networks can mimic some real-world natural and social systems…

Going beyond representational anthropology: Re-presenting bodily, emotional and virtual practices in everyday life. Separated youngsters and families in Greenland Greenland is a huge island, with a total of four high-schools. Many youngsters (age 16-18) move far away from home in order to get...

This article compares how Members of Parliament in the United Kingdom and Ireland reflect on constituency service as an aspect of political representation. It differs from existing research on the constituency role of MPs in two regards. First, it approaches the question from a sociological viewpoint…

In this article we present the results of a comparison study on social representations and causal attributions about cancer. We compared a group of breast cancer survivors and a control group without experience of cancer of their own. Although social representations about cancer differ between the groups, they are closely related to the concepts of suffering, dying and death. We found differences in the causal attribution of cancer. In both groups we found a category of risky behavior, which attributes responsibility for the disease to the individual. Besides these factors, stress and psychological influences predominated in the cancer survivors group. The control group, on the other hand, indicated factors outside one's control, e.g. heredity and environmental factors. Representations about a disease within a person's social space are important in co-shaping the individual process of coping with one's own disease. Since these representations are not always coherent with the knowledge of modern medicine, their recognition and appreciation in the course of treatment is of great value. We consider these findings of applied social psychology important as starting points for therapeutic work with patients.

A remarkable progress in women's participation in politics throughout the world was witnessed in the final decade of the 20th century. According to the Inter-Parliamentary Union report, there were only eight countries with no women in their legislatures in 1998. The number of women ministers at the cabinet level worldwide doubled in a decade, and the number of countries without any women ministers dropped from 93 to 48 during 1987-96. However, this progress is far from satisfactory. Political representation of women, minorities, and other social groups is still inadequate. This may be due to a complex combination of socioeconomic, cultural, and institutional factors. The view that women's political participation increases with social and economic development is supported by data from the Nordic countries, where there are higher proportions of women legislators than in less developed countries. While better levels of socioeconomic development, having a women-friendly political culture, and higher literacy are considered favorable factors for women's increased political representation, adopting one of the proportional representation systems (such as a party-list system, a single transferable vote system, or a mixed proportional system with multi-member constituencies) is the single factor most responsible for the higher representation of women.

This article presents a comprehensive overview of the hierarchical nanostructured materials with either geometry or composition complexity in environmental applications. The hierarchical nanostructures offer advantages of high surface area, synergistic interactions and multiple functionalities towards water remediation, environmental gas sensing and monitoring as well as catalytic gas treatment. Recent advances in synthetic strategies for various hierarchical morphologies such as hollow spheres and urchin-shaped architectures have been reviewed. In addition to the chemical synthesis, the physical mechanisms associated with the materials design and device fabrication have been discussed for each specific application. The development and application of hierarchical complex perovskite oxide nanostructures have also been introduced in photocatalytic water remediation, gas sensing and catalytic converter. Hierarchical nanostructures will open up many possibilities for materials design and device fabrication in environmental chemistry and technology.

Important information in scientific papers can be found in rhetorical sentences that are structured into certain categories. To extract this information, text categorization must be conducted. Previous work on this task has employed word frequency, semantic word similarity, hierarchical classification, and other techniques. This paper therefore presents rhetorical sentence categorization of scientific papers, employing TF-IDF and Word2Vec to capture word frequency and semantic word similarity, together with hierarchical classification. Every experiment is tested with two classifiers, namely Naïve Bayes and linear SVM. This paper shows that the hierarchical classifier is better than the flat classifier with either TF-IDF or Word2Vec, although the improvement is only about 2%, from 27.82% with the flat classifier to 29.61% with the hierarchical classifier. It also shows that a separate learning model for each child category can be built by the hierarchical classifier.
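The hierarchical-classification idea (a parent-level model routes a sentence to a coarse category, then a child model specialised for that parent makes the final call) can be sketched as follows. A nearest-centroid classifier over bag-of-words vectors stands in for the paper's Naïve Bayes / linear SVM, and the training sentences and category names are invented:

```python
import math
from collections import Counter

def vectorise(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class NearestCentroid:
    """Tiny stand-in classifier: one bag-of-words centroid per label."""
    def fit(self, texts, labels):
        self.centroids = {}
        for text, label in zip(texts, labels):
            self.centroids.setdefault(label, Counter()).update(vectorise(text))
        return self

    def predict(self, text):
        v = vectorise(text)
        return max(self.centroids, key=lambda lb: cosine(v, self.centroids[lb]))

# Parent model routes; a dedicated child model refines within each parent.
parent = NearestCentroid().fit(
    ["we propose a new method", "accuracy improved on the dataset"],
    ["method", "result"])
children = {
    "method": NearestCentroid().fit(
        ["we propose a new method", "the model architecture uses lstm"],
        ["approach", "architecture"]),
    "result": NearestCentroid().fit(
        ["accuracy improved on the dataset", "the baseline was outperformed"],
        ["metric", "comparison"]),
}

def classify(sentence):
    coarse = parent.predict(sentence)
    return coarse, children[coarse].predict(sentence)
```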

Hierarchical structure with nested nonlocal dependencies is a key feature of human language and can be identified theoretically in most pieces of tonal music. However, previous studies have argued against the perception of such structures in music. Here, we show processing of nonlocal dependencies in music. We presented chorales by J. S. Bach and modified versions in which the hierarchical structure was rendered irregular whereas the local structure was kept intact. Brain electric responses differed between regular and irregular hierarchical structures, in both musicians and nonmusicians. This finding indicates that, when listening to music, humans apply cognitive processes that are capable of dealing with long-distance dependencies resulting from hierarchically organized syntactic structures. Our results reveal that a brain mechanism fundamental for syntactic processing is engaged during the perception of music, indicating that processing of hierarchical structure with nested nonlocal dependencies is not just a key component of human language, but a multidomain capacity of human cognition.

Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
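The segregation of total variance into within-model and between-model parts at one node of a BMA tree follows the law of total variance; a minimal numeric sketch, with illustrative posterior weights, means and variances:

```python
import numpy as np

def bma_moments(weights, means, variances):
    """At one BMA-tree node: total variance = posterior-weighted average of
    within-model variances + between-model variance of the component means."""
    w = np.asarray(weights, float)
    m = np.asarray(means, float)
    v = np.asarray(variances, float)
    mean = np.sum(w * m)
    within = np.sum(w * v)                 # average within-model variance
    between = np.sum(w * (m - mean) ** 2)  # spread of the model means
    return mean, within, between, within + between
```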

Hierarchical (H-) matrices provide a general mathematical framework for highly compact representation and efficient numerical arithmetic. When applied in integral-equation- (IE-) based computational electromagnetics, H-matrices can be regarded as a fast algorithm; therefore, both the CPU time and the memory requirement are reduced significantly. Their kernel-independent feature also makes them suitable for any kind of integral equation. To solve an H-matrices system, Krylov iteration methods can be employed with appropriate preconditioners, and direct solvers based on the hierarchical structure of H-matrices are also available along with high efficiency and accuracy, which is a unique advantage compared to other fast algorithms. In this paper, a novel sparse approximate inverse (SAI) preconditioner in multilevel fashion is proposed to accelerate the convergence rate of Krylov iterations for solving H-matrices systems in electromagnetic applications, and a group of parallel fast direct solvers is developed for dealing with multiple right-hand-side cases. Finally, numerical experiments are given to demonstrate the advantages of the proposed multilevel preconditioner compared to conventional “single level” preconditioners and the practicability of the fast direct solvers for arbitrarily complex structures.
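The benefit of an approximate inverse to a Krylov solver can be illustrated with preconditioned conjugate gradients, using the simple diagonal (Jacobi) approximate inverse as a stand-in for the paper's multilevel SAI preconditioner; the matrix is a toy example:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients with M_inv approximating inv(A)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M_inv = np.diag(1.0 / np.diag(A))   # Jacobi approximate inverse
x = pcg(A, b, M_inv)
```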

Mobile security is an important issue on the Android platform. Most malware detection methods based on machine learning models rely heavily on expert knowledge for manual feature engineering, which still makes it difficult to describe malware fully. In this paper, we present an LSTM-based hierarchical denoise network (HDN), a novel static Android malware detection method which uses LSTM to learn directly from the raw opcode sequences extracted from decompiled Android files. However, most opcode sequences are too long for LSTM training due to the gradient vanishing problem. Hence, HDN uses a hierarchical structure, whose first-level LSTM computes in parallel over opcode subsequences (which we call method blocks) to learn dense representations; the second-level LSTM can then learn and detect malware through the method block sequences. Considering that malicious behavior appears only in partial sequence segments, HDN uses a method block denoise module (MBDM) for data denoising, with an adaptive gradient scaling strategy based on a loss cache. We evaluate and compare HDN with the latest mainstream research on three datasets. The results show that HDN outperforms these Android malware detection methods, and that it is able to capture longer sequence features and has better detection efficiency than N-gram-based malware detection, which is similar to our method.
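The two-level structure (method blocks summarised first, then the block sequence) can be sketched without the LSTM machinery; here mean-pooling of one-hot opcode embeddings stands in for the level-one and level-two LSTMs, and the opcode vocabulary is illustrative:

```python
import numpy as np

# Illustrative opcode vocabulary; a real system would use the Dalvik opcodes.
VOCAB = {"move": 0, "invoke": 1, "return": 2, "const": 3}

def embed(opcode):
    v = np.zeros(len(VOCAB))
    v[VOCAB[opcode]] = 1.0
    return v

def level_one(block):
    """Level one: summarise one method block of opcodes into a dense vector."""
    return np.mean([embed(op) for op in block], axis=0)

def level_two(blocks):
    """Level two: summarise the sequence of block vectors into one
    file-level representation a classifier could consume."""
    return np.mean([level_one(b) for b in blocks], axis=0)

rep = level_two([["const", "move", "invoke"], ["invoke", "return"]])
```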

As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", a disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML-based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in an SVG document and efficiently searched. Any feature extracted from the raw image, including color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high-level descriptions or classifications. Moreover, our representation can natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, being a World Wide Web Consortium (W3C) standard, SVG can be displayed by most web browsers, interacted with by ECMAScript (standardized scripting language, e.g. JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open-source technologies enables straightforward integration into existing systems. From our results, we show that the flexibility and extensibility of our abstraction facilitates effective storage and retrieval of medical images.
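Encoding a hierarchical abstraction layer as nested SVG groups might look like the following sketch; the attribute names (modality, label, data-texture) are invented for illustration, not the paper's schema:

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)   # serialise SVG as the default namespace

# Nested <g> elements form the hierarchical abstraction: an image-level layer
# contains segment-level groups, each carrying searchable metadata attributes.
svg = ET.Element(f"{{{SVG_NS}}}svg", {"width": "512", "height": "512"})
layer = ET.SubElement(svg, f"{{{SVG_NS}}}g",
                      {"class": "abstraction", "modality": "CT"})
segment = ET.SubElement(layer, f"{{{SVG_NS}}}g",
                        {"class": "segment", "label": "liver"})
ET.SubElement(segment, f"{{{SVG_NS}}}rect",
              {"x": "100", "y": "120", "width": "80", "height": "60",
               "data-texture": "coarse"})

doc = ET.tostring(svg, encoding="unicode")
```

Because the result is plain XML, it can be stored in an XML database and queried (e.g. find all segments labelled "liver") with XPath or XQuery.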

It is well known that there are close relations between classes of singularities and representation theory via the McKay correspondence and between representation theory and vector bundles on projective spaces via the Bernstein-Gelfand-Gelfand construction. These relations however cannot be considered to be either completely understood or fully exploited. These proceedings document recent developments in the area. The questions and methods of representation theory have applications to singularities and to vector bundles. Representation theory itself, which had primarily developed its methods for Artinian algebras, starts to investigate algebras of higher dimension partly because of these applications. Future research in representation theory may be spurred by the classification of singularities and the highly developed theory of moduli for vector bundles. The volume contains 3 survey articles on the 3 main topics mentioned, stressing their interrelationships, as well as original research papers.

We propose a possible embedding of axionic N-flation in type IIB string compactifications where most of the Kähler moduli are stabilised by perturbative effects, and so are hierarchically heavier than the corresponding N>> 1 axions whose collective dynamics drives inflation. This is achieved in the framework of the LARGE Volume Scenario for moduli stabilisation. Our set-up can be used to realise a model of either large field inflation or quintessence, just by varying the volume of the internal space which controls the scale of the axionic potential. Both cases predict a very high scale of supersymmetry breaking. A fully explicit stringy embedding of N-flation would require control over dangerous back-reaction effects due to a large number of species. A viable reheating of the Standard Model degrees of freedom can be achieved after the end of inflation due to the perturbative decay of the N light axions which drive inflation.

In this paper, we tackle a reinforcement learning problem for a 5-linked ring robot in real time, so that the real robot can withstand the trial and error. On this robot, incomplete perception problems arise from noisy sensors and cheap position-control motor systems. This incomplete perception also causes the optimum actions to vary as learning progresses. To cope with this problem, we adopt an actor-critic method, and we propose a new hierarchical policy representation scheme that consists of discrete action selection at the top level and continuous action selection at the low level of the hierarchy. The proposed hierarchical scheme accelerates learning in continuous action spaces, and it can track the optimum actions as they vary with the progress of learning on our robotics problem. This paper compares and discusses several learning algorithms through simulations, and demonstrates the proposed method in application to the real robot.
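The hierarchical policy representation (discrete choice on top, continuous command below) can be sketched as a softmax over action primitives followed by a per-primitive Gaussian; the primitives and parameters are invented for illustration and would be what the actor-critic updates during learning:

```python
import math
import random

random.seed(0)

# Per-primitive parameters of the low-level continuous policy (illustrative).
PRIMITIVES = {
    "swing": {"mean": 0.8, "std": 0.1},
    "hold":  {"mean": 0.0, "std": 0.02},
}

def top_level(preferences):
    """Discrete level: softmax choice over action primitives."""
    names = list(preferences)
    exps = [math.exp(preferences[n]) for n in names]
    total = sum(exps)
    r, acc = random.random() * total, 0.0
    for name, e in zip(names, exps):
        acc += e
        if r <= acc:
            return name
    return names[-1]

def low_level(primitive):
    """Continuous level: sample the motor command for the chosen primitive."""
    p = PRIMITIVES[primitive]
    return random.gauss(p["mean"], p["std"])

choice = top_level({"swing": 2.0, "hold": 0.0})
command = low_level(choice)
```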

The visual cortex’s hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger-scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, or mac) at each level. In localism, each represented feature/concept/event (hereinafter, item) is coded by a single unit. The model we describe, Sparsey, is hierarchical as well, but crucially, it uses sparse distributed coding (SDC) in every mac at all levels. In SDC, each represented item is coded by a small subset of the mac’s units. The SDCs of different items can overlap, and the size of overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model’s core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model’s storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to huge (Big Data) problems. A 2010 paper described a non-hierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (with arbitrary numbers of levels and macs per level), describing novel model principles such as progressive critical periods, dynamic modulation of principal cells’ activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, and a novel method of time-warp-invariant recognition, and we report results showing learning/recognition of…
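The core SDC property, that items are small subsets of a mac's units and overlap size encodes similarity, can be sketched in a few lines; the codes and items are illustrative:

```python
# Sparse distributed coding sketch: each item is a small subset of a mac's
# units; overlap between two codes represents similarity. Retrieval of the
# best-matching stored item is a fixed number of set intersections.

MAC_UNITS = 100          # units in one mac
CODE_SIZE = 5            # active units per stored item

stored = {
    "cat":   {3, 17, 42, 58, 91},
    "tiger": {3, 17, 42, 60, 77},   # overlaps "cat" in 3 units -> similar
    "car":   {8, 25, 49, 66, 83},
}

def similarity(code_a, code_b):
    return len(code_a & code_b) / CODE_SIZE

def best_match(probe):
    return max(stored, key=lambda item: similarity(probe, stored[item]))
```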

Knowledge representation is the core of artificial intelligence research. Knowledge representation methods include predicate logic, semantic networks, computer programming languages, databases, mathematical models, graphics languages, natural language, etc. To establish the intrinsic links between the various knowledge representation methods, a unified knowledge representation model is necessary. Drawing on ontology, system theory, and control theory, a standard model of knowledge representation that reflects the change of the objective world is proposed. The model is composed of input, processing, and output. This knowledge representation method does not contradict traditional knowledge representation methods. It can express knowledge in multivariate and multidimensional terms, it can express process knowledge, and at the same time it has a strong ability to solve problems. In addition, the standard model of knowledge representation provides a way to handle problems of imprecise and inconsistent knowledge.
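The input/processing/output unit can be sketched minimally; the class and field names are illustrative, and the example shows how process knowledge (a conversion procedure) is expressed directly as the processing component:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class KnowledgeUnit:
    """One unit of the standard model: input flows through a processing
    step to produce output; the processing step itself carries the
    process knowledge."""
    name: str
    process: Callable[[Any], Any]

    def apply(self, inputs):
        return self.process(inputs)

# Process knowledge example: converting Celsius to Fahrenheit.
c_to_f = KnowledgeUnit("celsius_to_fahrenheit", lambda c: c * 9 / 5 + 32)
```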

The key issue of the present paper is the clustering of narrow-domain short texts, such as scientific abstracts. The work is based on observations made while improving the performance of a key phrase extraction algorithm. An extended stop-words list, built automatically for the purposes of key phrase extraction, was used and considerably improved the quality of the phrases extracted from scientific publications. A description of the stop-words list creation procedure is given. The main objective is to investigate the possibilities of increasing the performance and/or speed of clustering by means of the above-mentioned stop-words list as well as information about lexeme parts of speech. In the latter case, a vocabulary is applied for the document representation which contains not all the words that occurred in the collection, but only nouns and adjectives or their sequences encountered in the documents. Two base clustering algorithms are applied: k-means and hierarchical clustering (the average agglomerative method). The results show that the use of an extended stop-words list and an adjective-noun document representation makes it possible to improve the performance and speed of k-means clustering. In the same setting, a decline in performance quality may be observed for the average agglomerative method. It is shown that the use of adjective-noun sequences for document representation lowers the clustering quality for both algorithms and can be justified only when a considerable reduction of feature-space dimensionality is necessary.
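The two document representations compared above (all words minus an extended stop-words list, versus nouns and adjectives only) can be sketched as follows; the tiny stop-words list and part-of-speech lexicon are invented stand-ins for the automatically built list and a real tagger:

```python
from collections import Counter

# Illustrative stand-ins: the paper builds the stop-words list automatically
# and would use a real POS tagger rather than a fixed lexicon.
EXTENDED_STOP_WORDS = {"the", "of", "we", "is", "in", "this", "paper"}
POS = {"clustering": "NOUN", "short": "ADJ", "texts": "NOUN",
       "algorithm": "NOUN", "propose": "VERB", "novel": "ADJ"}

def stopword_repr(doc):
    """Bag of words with the extended stop-words list removed."""
    return Counter(w for w in doc.lower().split()
                   if w not in EXTENDED_STOP_WORDS)

def adjective_noun_repr(doc):
    """Bag of words restricted to nouns and adjectives."""
    return Counter(w for w in doc.lower().split()
                   if POS.get(w) in {"NOUN", "ADJ"})

doc = "In this paper we propose the clustering of short texts"
```

Either representation then feeds k-means or agglomerative clustering; the restricted vocabulary trades some quality for a much smaller feature space.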

The accessibility of infovis authoring tools to a wide audience has been identified as a major research challenge. A key task in the authoring process is the development of visual mappings. While the infovis community has long been deeply interested in finding effective visual mappings, comparatively little attention has been placed on how people construct visual mappings. In this paper, we present the results of a study designed to shed light on how people transform data into visual representations. We asked people to create, update and explain their own information visualizations using only tangible building blocks. We learned that all participants, most of whom had little experience in visualization authoring, were readily able to create and talk about their own visualizations. Based on our observations, we discuss participants’ actions during the development of their visual representations…

This paper sets out a view about the explanatory role of representational content and advocates one approach to naturalising content – to giving a naturalistic account of what makes an entity a representation and in virtue of what it has the content it does. It argues for pluralism about the metaphysics of content and suggests that a good strategy is to ask the content question with respect to a variety of predictively successful information processing models in experimental psychology and cognitive neuroscience; and hence that data from psychology and cognitive neuroscience should play a greater role in theorising about the nature of content. Finally, the contours of the view are illustrated by drawing out and defending a surprising consequence: that individuation of vehicles of content is partly externalist. PMID:24563661

Knowledge representation and reasoning aims at designing computer systems that reason about a machine-interpretable representation of the world. Knowledge-based systems have a computational model of some domain of interest in which symbols serve as surrogates for real world domain artefacts, such as physical objects, events, relationships, etc. [1]. The domain of interest can cover any part of the real world or any hypothetical system about which one desires to represent knowledge for computational purposes. A knowledge-based system maintains a knowledge base, which stores the symbols of the computational model in the form of statements about the domain, and it performs reasoning by manipulating these symbols. Applications can base their decisions on answers to domain-relevant questions posed to a knowledge base.

Full Text Available Hierarchical organizations of information processing in the brain networks have been known to exist and widely studied. To find proper hierarchical structures in the macaque brain, the traditional methods need the entire set of pairwise hierarchical relationships between cortical areas. In this paper, we present a new method that discovers hierarchical structures of macaque brain networks using only partial information about pairwise hierarchical relationships. Our method uses graph-based manifold learning to exploit inherent relationships, and computes pseudo distances of hierarchical levels for every pair of cortical areas. Then, we compute hierarchy levels of all cortical areas by minimizing the sum of squared hierarchical distance errors given the hierarchical information of a few cortical areas. We evaluate our method on the macaque brain data sets whose true hierarchical levels are known as the FV91 model. The experimental results show that hierarchy levels computed by our method are similar to those of the FV91 model, and its errors are much smaller than the errors of hierarchical clustering approaches.
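The least-squares step described above can be sketched as follows. The pair list, the anchor choice, and the NumPy solver are illustrative assumptions; the paper's actual optimization over manifold-derived pseudo distances is more involved.

```python
# Hedged sketch: given estimated pairwise level differences d_ij ≈ h_i - h_j
# and known levels for a few anchor areas, solve for all hierarchy levels by
# minimizing the sum of squared errors (ordinary least squares).
import numpy as np

def solve_levels(n, pair_diffs, anchors):
    """pair_diffs: list of (i, j, d) with h[i] - h[j] ≈ d.
       anchors: dict {area_index: known_level}."""
    rows, rhs = [], []
    for i, j, d in pair_diffs:
        r = np.zeros(n); r[i] = 1.0; r[j] = -1.0
        rows.append(r); rhs.append(d)
    for a, lvl in anchors.items():        # pin the few areas with known levels
        r = np.zeros(n); r[a] = 1.0
        rows.append(r); rhs.append(lvl)
    h, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return h

# toy 4-area example with consistent pairwise differences and one anchor
levels = solve_levels(
    4,
    [(1, 0, 1.0), (2, 1, 1.0), (3, 2, 1.0), (2, 0, 2.0)],
    {0: 0.0},
)
```

Because the toy system is consistent, the least-squares solution recovers the exact levels 0, 1, 2, 3; with noisy pseudo distances the same solver returns the best squared-error compromise.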

This EuroBroadMap working paper presents an analysis of textbooks dealing with the representations of Europe and European Union. In most of these textbooks from secondary school, the teaching of the geography of Europe precedes the evocation of the EU. Europe is often depicted as a given object, reduced to a number of structural aspects (relief, climate, demography, traditional cultures, economic activities, etc.) whose only common point is their location within conventional boundaries. Such ...

Classification problems have a long history in the machine learning literature. One of the simplest, and yet most consistently well-performing, classifiers is the Naïve Bayes model. However, an inherent problem with these classifiers is the assumption that all attributes used to describe......, termed Hierarchical Naïve Bayes models. Hierarchical Naïve Bayes models extend the modeling flexibility of Naïve Bayes models by introducing latent variables to relax some of the independence statements in these models. We propose a simple algorithm for learning Hierarchical Naïve Bayes models...

Full Text Available Acceptable use policies (AUPs) are vital tools for organizations to protect themselves and their employees from misuse of the computer facilities provided. A well structured, thorough AUP is essential for any organization. It is impossible for an effective AUP to deal with every clause and remain readable. For this reason, some sections of an AUP carry more weight than others, denoting importance. The methodology used to develop the hierarchical analysis is a literature review, in which various sources were consulted. This hierarchical approach to AUP analysis attempts to highlight important sections and clauses dealt with in an AUP. The emphasis of the hierarchical analysis is to prioritize the objectives of an AUP.

Among the many uses of hierarchical modeling, their application to the statistical analysis of spatial and spatio-temporal data from areas such as epidemiology and environmental science has proven particularly fruitful. Yet to date, the few books that address the subject have been either too narrowly focused on specific aspects of spatial analysis, or written at a level often inaccessible to those lacking a strong background in mathematical statistics. Hierarchical Modeling and Analysis for Spatial Data is the first accessible, self-contained treatment of hierarchical methods, modeling, and data analysis...

The present invention is a structure, method of making and method of use for a novel macroscopic hierarchically structured, nitrogen-doped, nano-porous carbon membrane (HNDCM) with an asymmetric and hierarchical pore architecture that can be produced via a large-scale approach. The unique HNDCM holds great promise as a component in separation and advanced carbon devices because it could offer unconventional fluidic transport phenomena on the nanoscale. Overall, the invention set forth herein covers hierarchically structured, nitrogen-doped carbon membranes and methods of making and using such membranes.

The year 1897 was marked by two important mathematical events: the publication of the first paper on representations of finite groups by Ferdinand Georg Frobenius (1849-1917) and the appearance of the first treatise in English on the theory of finite groups by William Burnside (1852-1927). Burnside soon developed his own approach to representations of finite groups. In the next few years, working independently, Frobenius and Burnside explored the new subject and its applications to finite group theory. They were soon joined in this enterprise by Issai Schur (1875-1941) and some years later, by Richard Brauer (1901-1977). These mathematicians' pioneering research is the subject of this book. It presents an account of the early history of representation theory through an analysis of the published work of the principals and others with whom the principals' work was interwoven. Also included are biographical sketches and enough mathematics to enable readers to follow the development of the subject. An introductor...

This book is a comprehensive treatment of the representation theory of maximal Cohen-Macaulay (MCM) modules over local rings. This topic is at the intersection of commutative algebra, singularity theory, and representations of groups and algebras. Two introductory chapters treat the Krull-Remak-Schmidt Theorem on uniqueness of direct-sum decompositions and its failure for modules over local rings. Chapters 3-10 study the central problem of classifying the rings with only finitely many indecomposable MCM modules up to isomorphism, i.e., rings of finite CM type. The fundamental material--ADE/simple singularities, the double branched cover, Auslander-Reiten theory, and the Brauer-Thrall conjectures--is covered clearly and completely. Much of the content has never before appeared in book form. Examples include the representation theory of Artinian pairs and Burban-Drozd's related construction in dimension two, an introduction to the McKay correspondence from the point of view of maximal Cohen-Macaulay modules, Au...

Iris recognition as a reliable method for personal identification has been well-studied with the objective to assign the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT), and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection.

Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory features in STG, mixtures of articulatory and semantic features in STS, and semantic features in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to...
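The variance-partitioning logic above (the variance uniquely explained by one feature space is the drop in R² when that space is removed from the joint model) can be sketched with synthetic data. The feature names, dimensions, and plain least-squares fit are simplified assumptions; real voxelwise modeling uses regularized regression and a held-out validation set.

```python
# Hedged sketch of variance partitioning over two feature spaces:
# unique(A) = R2(A + B) - R2(B), and likewise for B.
import numpy as np

def r2(X, y):
    """In-sample fraction of variance explained by a linear fit (no intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 2))        # stand-in for one feature space (e.g. spectral)
B = rng.normal(size=(200, 2))        # stand-in for another (e.g. semantic)
# synthetic "response": mostly driven by A, weakly by B, plus noise
y = A @ [1.0, -0.5] + B @ [0.3, 0.0] + 0.1 * rng.normal(size=200)

r2_full = r2(np.hstack([A, B]), y)
unique_A = r2_full - r2(B, y)        # variance only A explains
unique_B = r2_full - r2(A, y)        # variance only B explains
```

On this synthetic response the partition correctly attributes most of the explained variance uniquely to A and only a small share uniquely to B.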

Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that described procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchical representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulative and executable representation of aircrew and procedures that is generally applicable to crew/procedure task-analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods

This work proposes a hierarchical Design Space Exploration (DSE) for the design of multi-processor platforms targeted to specific applications with strict timing and area constraints. In particular, it considers platforms integrating multiple Application Specific Instruction Set Processors (ASIPs...

An optical device includes an active region and packaging glass located on top of the active region. A top surface of the packaging glass includes hierarchical nanostructures comprised of honeycombed nanowalls (HNWs) and nanorod (NR) structures extending from the HNWs.

In this paper we try to define the difference between hierarchical organization and self-organization. Organization is defined as a structure with a function. So we can define the difference between hierarchical organization and self-organization both in terms of structure and in terms of function. In the next two chapters these two definitions are given. For the structure we will use some existing definitions in graph theory; for the function we will use existing theory on (self-)organization. In the t...
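A minimal graph-theoretic sketch of the structural side of this distinction: one common way to read "hierarchical organization" off a structure is to layer a directed graph by longest path from its roots. The function name, node names, and the DAG assumption are illustrative, not taken from the paper.

```python
def layers(edges, nodes):
    """Longest-path layering of a DAG: level 0 holds the roots, and each
    edge pushes its head at least one level below its tail.
    (Assumes no cycles; a cycle would make this loop forever.)"""
    level = {v: 0 for v in nodes}
    changed = True
    while changed:          # relax edges until the levels settle
        changed = False
        for u, v in edges:
            if level[v] < level[u] + 1:
                level[v] = level[u] + 1
                changed = True
    return level

# hypothetical three-node hierarchy with a "shortcut" edge
levels = layers([("root", "mid"), ("mid", "leaf"), ("root", "leaf")],
                ["root", "mid", "leaf"])
```

Under this reading, a structure is hierarchical to the extent that such a layering exists; a strongly self-organized structure with feedback cycles admits no consistent layering at all.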

In current practice, structures are often optimized individually without considering benefits of having a hierarchy of protection structures. It is argued here that the joint consideration of hierarchically integrated protection structures is beneficial. A hierarchical decision model is utilized to analyze...... and compare the benefit of large upstream protection structures and local downstream protection structures in regard to epistemic uncertainty parameters. Results suggest that epistemic uncertainty influences the outcome of the decision model and that, depending on the magnitude of epistemic uncertainty...

This project developed a robust, tunable, hierarchical nanoceramics materials platform for industrial process sensors in harsh-environments. Control of material structure at multiple length scales from nano to macro increased the sensing response of the materials to combustion gases. These materials operated at relatively high temperatures, enabling detection close to the source of combustion. It is anticipated that these materials can form the basis for a new class of sensors enabling widespread use of efficient combustion processes with closed loop feedback control in the energy-intensive industries. The first phase of the project focused on materials selection and process development, leading to hierarchical nanoceramics that were evaluated for sensing performance. The second phase focused on optimizing the materials processes and microstructures, followed by validation of performance of a prototype sensor in a laboratory combustion environment. The objectives of this project were achieved by: (1) synthesizing and optimizing hierarchical nanostructures; (2) synthesizing and optimizing sensing nanomaterials; (3) integrating sensing functionality into hierarchical nanostructures; (4) demonstrating material performance in a sensing element; and (5) validating material performance in a simulated service environment. The project developed hierarchical nanoceramic electrodes for mixed potential zirconia gas sensors with increased surface area and demonstrated tailored electrocatalytic activity operable at high temperatures enabling detection of products of combustion such as NOx close to the source of combustion. Methods were developed for synthesis of hierarchical nanostructures with high, stable surface area, integrated catalytic functionality within the structures for gas sensing, and demonstrated materials performance in harsh lab and combustion gas environments.

Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for "flat" descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.
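One way to make "compare hierarchical annotations holistically across multiple levels" concrete is triplet agreement over "meet" depths, in the spirit of (but not identical to) the metric derived in the paper. The annotation encoding and the toy examples are invented for illustration.

```python
# Hedged sketch: each annotation assigns every time frame a label per level
# (coarse first). The "meet" of two frames is the deepest level at which
# they share a segment label; two hierarchies agree on a triple (i, j, k)
# when they rank the relatedness of (i, j) versus (i, k) the same way.
from itertools import permutations

def meet(ann, i, j):
    """Deepest level at which frames i and j carry the same segment label."""
    depth = 0
    for level, labels in enumerate(ann, start=1):
        if labels[i] == labels[j]:
            depth = level
    return depth

def triplet_agreement(ann_a, ann_b):
    n = len(ann_a[0])
    agree = total = 0
    for i, j, k in permutations(range(n), 3):
        da = meet(ann_a, i, j) - meet(ann_a, i, k)
        db = meet(ann_b, i, j) - meet(ann_b, i, k)
        if da or db:                    # skip triples both hierarchies tie
            total += 1
            agree += (da > 0) == (db > 0) and (da < 0) == (db < 0)
    return agree / total if total else 1.0

# a two-level annotation of four frames vs. a single flat annotation
ann_two_level = [["A", "A", "B", "B"], ["a", "b", "c", "c"]]
ann_flat = [["A", "A", "A", "A"]]
```

A hierarchy agrees perfectly with itself, while collapsing all structure into one flat segment destroys every relatedness distinction and drives the score down.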

out of place in a novel belonging to the serious combat literature of the Catholic Revival, and the direct representation of the supernatural is also surprising because previous Catholic Revival novelists, such as Léon Bloy and Joris-Karl Huysmans, maintain a realistic, non-magical world and deal...... Satan episode in Under Satan’s Sun is neither a break with the seriousness nor with the realism of the Catholic novel. On the basis of Tzvetan Todorov’s definition of the traditional fantastic tale, the analysis shows that only the beginning of the fantastic episode follows Todorov’s definition...

Representations of Commonsense Knowledge provides a rich language for expressing commonsense knowledge and inference techniques for carrying out commonsense reasoning. This book provides a survey of the research on commonsense knowledge. Organized into 10 chapters, this book begins with an overview of the basic ideas on artificial intelligence commonsense reasoning. This text then examines the structure of logic, which is roughly analogous to that of a programming language. Other chapters describe how rules of universal validity can be applied to facts known with absolute certainty to deduce ot

This paper seeks to explore how prayer and praying practice are reflected in archaeological sources. Apart from objects directly involved in the personal act of praying, such as rosaries and praying books, churches and religious foundations played a major role in the medieval system of intercession....... At death, an individual’s corpse and burial primarily reflect the social act of representation during the funeral. The position of the arms, which has incorrectly been used as a chronological tool in Scandinavia, may indicate an evolution from a more collective act of prayer up to the eleventh century...

It is well known that matched filtering techniques cannot be applied when searching extensive parameter-space volumes for continuous gravitational wave signals. This is the reason why alternative strategies are being pursued. Hierarchical strategies are best at investigating a large parameter space under computational power constraints. Algorithms of this kind are being implemented by all the groups that are developing software for analyzing the data of the gravitational wave detectors that will come online in the coming years. In this talk I will report on the hierarchical Hough transform method that the GEO 600 data analysis team at the Albert Einstein Institute is developing. The three-step hierarchical algorithm has been described elsewhere [8]. In this talk I will focus on some of the implementational aspects we are currently concerned with. (author)

Full Text Available The article stresses the relationship between explicit and implicit theories of intelligence. Following the line of common-sense epistemology and the theory of Social Representations, a study was carried out in order to analyze lay people's explanations of definitions of intelligence. Based on Mugny & Carugati's (1989) research, a self-administered questionnaire was designed and filled in by 286 subjects. Results are congruent with the main hypothesis postulated: a general overlap between explicit and implicit theories showed up. According to the results, intelligence appears both as a social attribute related to social adaptation and as a concept defined in relation to contextual variables, similar to experts' current discourses. Nevertheless, conceptions based on “gifted ideology” are still present, stressing the main axes of the intelligence debate: biological and sociological determinism. In the same sense, unfamiliarity and social identity are reaffirmed as organizing principles of social representation. The distance from the object -measured as the belief in intelligence differences as a solvable/non-solvable problem- and the level of implication with the topic -teachers/non-teachers- appear as discriminating elements when supporting specific dimensions.

In this work we present a method to constrain flow mobility input parameters for pyroclastic flow models using hierarchical Bayes modeling of standard mobility metrics such as H/L and flow volume. The advantage of hierarchical modeling is that it can leverage the information in a global dataset for a particular mobility metric in order to reduce the uncertainty in modeling of an individual volcano, which is especially important where individual volcanoes have only sparse datasets. We use compiled pyroclastic flow runout data from Colima, Merapi, Soufriere Hills, Unzen and Semeru volcanoes, presented in an open-source database FlowDat (https://vhub.org/groups/massflowdatabase). While the exact relationship between flow volume and friction varies somewhat between volcanoes, dome collapse flows originating from the same volcano exhibit similar mobility relationships. Instead of fitting separate regression models for each volcano dataset, we use a variation of the hierarchical linear model (Kass and Steffey, 1989). The model presents a hierarchical structure with two levels: all dome collapse flows, and dome collapse flows at specific volcanoes. The hierarchical model allows us to assume that the flows at specific volcanoes share a common distribution of regression slopes, then solves for that distribution. We present comparisons of the 95% confidence intervals on the individual regression lines for the data set from each volcano as well as those obtained from the hierarchical model. The results clearly demonstrate the advantage of considering global datasets using this technique. The technique developed is demonstrated here for mobility metrics, but can be applied to many other global datasets of volcanic parameters. In particular, such methods can provide a means to better constrain parameters for volcanoes for which we only have sparse data, a ubiquitous problem in volcanology.
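A simple empirical-Bayes stand-in for the hierarchical linear model described above, where per-volcano regression slopes are shrunk toward a global mean, can be sketched as follows. The toy data, the moment-based variance estimates, and the shrinkage rule are simplified assumptions, not the authors' actual model.

```python
# Hedged sketch of partial pooling: per-group OLS slopes are shrunk toward
# the global mean slope, with shrinkage weights set by the ratio of
# between-group to within-group (sampling) variance.
import numpy as np

def pooled_slopes(groups):
    """groups: list of (x, y) arrays, one mobility dataset per volcano."""
    slopes, ses = [], []
    for x, y in groups:
        xc = x - x.mean()
        b = (xc @ (y - y.mean())) / (xc @ xc)                   # per-volcano OLS slope
        resid = (y - y.mean()) - b * xc
        ses.append((resid @ resid) / (len(x) - 2) / (xc @ xc))  # slope sampling variance
        slopes.append(b)
    slopes, ses = np.array(slopes), np.array(ses)
    mu = slopes.mean()                                          # global mean slope
    tau2 = max(slopes.var() - ses.mean(), 1e-12)                # between-volcano variance
    w = tau2 / (tau2 + ses)                                     # noisier slopes shrink harder
    return w * slopes + (1 - w) * mu

# toy data: a well-sampled "volcano" (slope ~2) and a sparse one (slope ~2.5)
g1 = (np.arange(10.0), 2.0 * np.arange(10.0) + np.sin(np.arange(10.0)))
g2 = (np.arange(5.0), 3.0 * np.arange(5.0) + np.cos(np.arange(5.0)))
pooled = pooled_slopes([g1, g2])
```

Each pooled slope is a convex combination of the raw group slope and the global mean, so the sparse, noisy group is pulled further toward the pooled estimate, which is exactly the benefit claimed for volcanoes with little data.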

Silicon nanostructures have been cultivated as promising surface enhanced Raman scattering (SERS) substrates in terms of their low-loss optical resonance modes, facile functionalization, and compatibility with today’s state-of-the-art CMOS techniques. However, unlike their plasmonic counterparts, the electromagnetic field enhancements induced by silicon nanostructures are relatively small, which restricts their SERS sensing limit to around 10⁻⁷ M. To tackle this problem, we propose here a strategy for improving the SERS performance of silicon nanostructures by constructing silicon hierarchical nanostructures with a superhydrophobic surface. The hierarchical nanostructures are binary structures consisting of silicon nanowires (NWs) grown on micropyramids (MPs). After being modified with perfluorooctyltriethoxysilane (PFOT), the nanostructure surface shows a stable superhydrophobicity with a high contact angle of ~160°. The substrate can concentrate diluted analyte solutions into a specific area during the evaporation of the liquid droplet, whereby the analytes are aggregated into a small volume and can be easily detected by the silicon nanostructure SERS substrate. The analyte molecules (methylene blue: MB) enriched from an aqueous solution lower than 10⁻⁸ M can be readily detected. Such a detection limit is ~100-fold lower than that of conventional SERS substrates made of silicon nanostructures. Additionally, the detection limit can be further improved by functionalizing gold nanoparticles onto the silicon hierarchical nanostructures, whereby the superhydrophobic characteristics and plasmonic field enhancements can be combined synergistically to give a detection limit down to ~10⁻¹¹ M. A gold nanoparticle-functionalized superhydrophobic substrate was employed to detect spiked melamine in liquid milk. The results showed that the detection limit can be as low as 10⁻⁵ M, highlighting the potential of the proposed superhydrophobic SERS substrate in

Graphical abstract: Hierarchical BiOBr microspheres were prepared from a bromine-containing ionic liquid. The material was found effective for removing heavy metals, degrading organic pollutants and killing bacteria. Highlights: ► Ionothermal synthesis of BiOBr microspheres with hierarchical structure. ► Efficient mass transfer and excellent light-harvesting ability. ► Suitable for removing heavy metals and treatment of organic dyes. ► Remarkable photocatalytic bactericidal property. - Abstract: Bismuth oxybromide (BiOBr) microspheres with hierarchical morphologies have been fabricated via an ionothermal synthesis route. Ionic liquid acts as a unique soft material capable of promoting nucleation and in situ growth of 3D hierarchical BiOBr mesocrystals without the help of surfactants. The as-prepared BiOBr nanomaterials can effectively remove heavy metal ions and organic dyes from wastewater. They can also kill Micrococcus lylae, a Gram positive bacterium, in water under fluorescent light irradiation. Their high adaptability in water treatment may be ascribed to their hierarchical structure, allowing them high surface to volume ratio, facile species transportation and excellent light-harvesting ability.

We numerically examine slow and hierarchical relaxation dynamics of interacting bosons described by a tilted two-band Bose-Hubbard model. The system is found to exhibit signatures of quantum chaos within the spectrum and the validity of the eigenstate thermalization hypothesis for relevant physical observables is demonstrated for certain parameter regimes. Using the truncated Wigner representation in the semiclassical limit of the system, dynamics of relevant observables reveal hierarchical relaxation and the appearance of prethermalized states is studied from the perspective of statistics of the underlying mean-field trajectories. The observed prethermalization scenario can be attributed to different stages of glassy dynamics in the mode-time configuration space due to dynamical phase transition between ergodic and nonergodic trajectories.

In many computer vision applications, objects have to be learned and recognized in images or image sequences. This book presents new probabilistic hierarchical models that allow an efficient representation of multiple objects of different categories, scales, rotations, and views. The idea is to exploit similarities between objects and object parts in order to share calculations and avoid redundant information. Furthermore, inference approaches for fast and robust detection are presented. These new approaches combine the ideas of compositional and similarity hierarchies and overcome limitations of previous methods. Besides classical object recognition, the book shows the use of these models for the detection of human poses in a project for gait analysis. Activity detection is presented for the design of environments for ageing, to identify activities and behavior patterns in smart homes. In a project for parking spot detection using an intelligent vehicle, the proposed approaches are used to hierarchically model...

We propose a hierarchical Bayesian nonparametric mixture model for clustering when some of the covariates are assumed to be of varying relevance to the clustering problem. This can be thought of as an issue of variable selection for unsupervised learning. We demonstrate that, by defining a hierarchical, population-based nonparametric prior on the cluster locations scaled by the inverse covariance matrices of the likelihood, we arrive at a 'sparsity prior' representation which admits a conditionally conjugate prior. This allows us to perform full Gibbs sampling to obtain posterior distributions over parameters of interest, including an explicit measure of each covariate's relevance and a distribution over the number of potential clusters present in the data. This also allows for individual, cluster-specific variable selection. We demonstrate improved inference on a number of canonical problems.
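
The Dirichlet-process ingredient of such a model already induces a distribution over the number of clusters. As a toy illustration (not the paper's full Gibbs sampler), the sketch below draws partitions from the equivalent Chinese restaurant process and estimates that distribution empirically; all names are illustrative:

```python
import random

def crp_partition(n_points, alpha, rng):
    """Draw one partition of n_points items from a Chinese restaurant
    process with concentration alpha (the partition prior induced by a
    Dirichlet process)."""
    counts = []  # counts[k] = number of items assigned to cluster k
    for i in range(n_points):
        # New cluster with probability alpha/(i + alpha),
        # otherwise join an existing cluster with probability ∝ its size.
        if rng.random() < alpha / (i + alpha):
            counts.append(1)
        else:
            k = rng.choices(range(len(counts)), weights=counts)[0]
            counts[k] += 1
    return counts

rng = random.Random(0)
# Empirical prior over the number of clusters for 100 points, alpha = 1.
sizes = [len(crp_partition(100, 1.0, rng)) for _ in range(2000)]
print(sum(sizes) / len(sizes))  # mean ≈ sum_{i<100} 1/(i+1) ≈ 5.19
```

In the full model, the likelihood term reweights these prior partitions, so the posterior over the number of clusters concentrates around the structure actually present in the data.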

Almost all applications of Artificial Neural Networks (ANNs) depend mainly on their memory ability. The characteristics of typical ANN models are fixed connections with evolved weights, globalized representations, and globalized optimizations, all based on a mathematical approach. This makes those models deficient in robustness, learning efficiency, capacity, anti-jamming between training sets, and correlativity of samples, among other properties. In this paper, we attempt to address these problems by adopting the characteristics of biological neurons in morphology and signal processing. A hierarchical neural network was designed and realized to implement structure learning and representations based on connected structures. The basic characteristics of this model are localized and random connections, field limitations of neuron fan-in and fan-out, dynamic behavior of neurons, and samples represented through different sub-circuits of neurons specialized into different response patterns. At the end of this paper, some important aspects of error correction, capacity, learning efficiency, and soundness of structural representation are analyzed theoretically. This paper demonstrates the feasibility and advantages of structure learning and representation. This model can serve as a fundamental element of cognitive systems such as perception and associative memory. Keywords: structure learning, representation, associative memory, computational neuroscience

To explore the fascinating inter-individual interaction mechanisms governing abundant biological grouping behaviors, more and more effort has been devoted to the investigation of collective motion in recent years. Among these behaviors, bird flocking is one of the most intensively studied. A previous study (Nagy M. et al., Nature, 464 (2010) 890) claims the existence of a well-defined hierarchical structure in pigeon flocks, implying that a multi-layer leadership network leads to highly coordinated flock movements. However, in this study, using high-resolution GPS data from the homing flights of pigeon flocks, we reveal an explicit switching hierarchical mechanism underlying the group motion of pigeons. That is, a pigeon flock has a long-term leader along smooth moving trajectories, whereas the leading tenure passes to a temporary leader upon sudden turns or zigzags. The present observation therefore helps to probe more deeply into the principles underlying bird flocking dynamics. Meanwhile, from an engineering point of view, it may shed some light on industrial multi-robot coordination and unmanned air vehicle formation control.

Large-scale graph computing has become critical due to the ever-increasing size of data. However, distributed graph computations are limited in their scalability and performance due to the heavy communication inherent in such computations. This is exacerbated in scale-free networks, such as social and web graphs, which contain hub vertices that have large degrees and therefore send a large number of messages over the network. Furthermore, many graph algorithms and computations send the same data to each of the neighbors of a vertex. Our proposed approach recognizes this, and reduces the communication performed by the algorithm, without changes to user code, through a hierarchical machine model imposed upon the input graph. The hierarchical model takes advantage of locale information about neighboring vertices to reduce communication, both in message volume and in total number of bytes sent. It is also able to exploit the machine hierarchy to further reduce communication costs by aggregating traffic between different levels of the machine hierarchy. Results of an implementation in the STAPL GL show improved scalability and performance over the traditional level-synchronous approach, with a 2.5×-8× improvement for a variety of graph algorithms at 12,000+ cores.
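
The locale-aware aggregation idea can be sketched in a few lines: instead of one message per remote neighbor, a vertex sends one combined message per remote machine that hosts any of its neighbors. The function names and the toy graph below are illustrative, not the STAPL GL API:

```python
def message_counts(adjacency, locale_of):
    """Compare naive per-neighbor messaging with locale-level aggregation.

    adjacency: {vertex: [neighbors]}; locale_of: vertex -> machine node.
    Returns (naive, aggregated) counts for one broadcast round, counting
    only messages that cross machine boundaries.
    """
    naive = aggregated = 0
    for v, nbrs in adjacency.items():
        remote = [u for u in nbrs if locale_of[u] != locale_of[v]]
        naive += len(remote)
        # One aggregated message per distinct remote locale (hub-friendly).
        aggregated += len({locale_of[u] for u in remote})
    return naive, aggregated

# Toy scale-free-ish example: hub vertex 0 with neighbors spread over 2 nodes.
adj = {0: list(range(1, 9)), **{i: [0] for i in range(1, 9)}}
locale = {v: v % 2 for v in adj}  # vertices alternate between 2 machines
print(message_counts(adj, locale))  # -> (8, 5)
```

The gain grows with hub degree: a hub with thousands of neighbors on a handful of machines sends one message per machine instead of one per neighbor.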

In the field of computational biology, microarrays are used to measure the activity of thousands of genes at once and create a global picture of cellular function. Microarrays allow scientists to analyze the expression of many genes in a single experiment quickly and efficiently. Even if microarrays are a consolidated research technology nowadays and the trends in high-throughput data analysis are shifting towards new technologies like Next Generation Sequencing (NGS), an optimum method for sample...

to jointly estimate the optimal number of sound clusters, to cluster the blocks, and to estimate the transition probabilities between clusters. The result is a segmentation of the input into a sequence of symbols (typically corresponding to hits of hi-hat, snare, bass, cymbal, etc.) that can be evaluated...

Peak detection is one of the most important steps in mass spectrometry (MS) analysis. However, detection results are greatly affected by severe spectrum variations. Unfortunately, most current peak detection methods are neither flexible enough to revise false detection results nor robust enough to resist spectrum variations. To improve flexibility, we introduce a peak tree to represent the peak information in MS spectra. Each tree node is a peak judgment over a range of scales, and each tree decomposition, as a set of nodes, is a candidate peak detection result. To improve robustness, we combine peak detection and common peak alignment into a closed-loop framework, which finds the optimal decomposition via both peak intensity and common peak information. The common peak information is derived, and iteratively refined, from density clustering of the latest peak detection result. Finally, we present an improved ant colony optimization biomarker selection method to build a whole MS analysis system. Experiments show that our peak detection method better resists spectrum variations and provides higher sensitivity and lower false detection rates than conventional methods. The benefits of our peak-tree-based system for MS disease analysis are also demonstrated on real SELDI data.

The parental representations of 30 male-to-female transsexuals were rated using a measure of fundamental parental dimensions that has been shown to be of acceptable validity as a measure of both perceived and actual parental characteristics. Scores on that measure were compared separately against scores returned by matched male and female controls. The transsexuals did not differ from the male controls in their scoring of their mothers, but did score their fathers as less caring and more overprotective. These differences were weaker for the comparisons made against the female controls. Item analyses suggested that the greater paternal "overprotection" experienced by transsexuals was due to their fathers being perceived as offering less encouragement of their sons' independence and autonomy. Several interpretations of the findings are considered.

The central research problem of this project is the effective representation, computation, and display of surfaces interpolating to information in three or more dimensions. If the given information is located on another surface, then the problem is to construct a 'surface defined on a surface'. Sometimes properties of an already defined surface are desired, which is 'geometry processing'. Visualization of multivariate surfaces is possible by means of contouring higher dimensional surfaces. These problems and more are discussed below. The broad sweep from constructive mathematics through computational algorithms to computer graphics illustrations is utilized in this research. The breadth and depth of this research activity makes this research project unique.

Sinusoidal representation of acoustic signals has been an important tool in speech and music processing tasks such as signal analysis, synthesis, and time-scale or pitch modification. It is applicable to arbitrary signals, which is an important advantage over other signal representations such as physical modeling of acoustic signals. In the sinusoidal representation, acoustic signals are modeled as sums of sinusoids (sine waves) with different amplitudes, frequencies and phases, based on the time-dependent short-time Fourier transform (STFT). This article describes the principles of acoustic signal analysis/synthesis based on a sinusoidal representation, with a focus on sine waves with rapidly varying frequency.
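
As a minimal illustration of the analysis step, the sketch below picks out the dominant partial of a single frame by locating the largest DFT magnitude bin; a real sinusoidal model would track several peaks per STFT frame and interpolate their amplitudes, frequencies and phases across frames. All names and parameters are illustrative:

```python
import cmath, math

def dft_peak_freq(x, sample_rate):
    """Return the frequency (Hz) of the largest DFT magnitude bin,
    a crude single-frame stand-in for STFT-based sinusoidal analysis."""
    n = len(x)
    best_k, best_mag = 0, -1.0
    for k in range(n // 2):  # positive frequencies only
        coef = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n))
        if abs(coef) > best_mag:
            best_k, best_mag = k, abs(coef)
    return best_k * sample_rate / n

sr = 200
# Synthesize one "partial" of the sinusoidal model: a 25 Hz sine wave.
x = [math.sin(2 * math.pi * 25 * t / sr) for t in range(sr)]
print(dft_peak_freq(x, sr))  # -> 25.0
```

With an integer number of periods per frame the peak falls exactly on a bin; otherwise, windowing and parabolic interpolation around the peak are the usual refinements.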

The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.

Hierarchical structures are very common in nature, but only recently have they been systematically studied in materials science, in order to understand the specific effects they can have on the mechanical properties of various systems. Structural hierarchy provides a way to tune and optimize macroscopic mechanical properties starting from simple base constituents, and new materials are nowadays designed to exploit this possibility. This can also be true in the field of tribology. In this paper we study the effect of hierarchically patterned surfaces on the static and dynamic friction coefficients of an elastic material. Our results are obtained by means of numerical simulations using a one-dimensional spring-block model, which has previously been used to investigate various aspects of friction. Despite the simplicity of the model, we highlight some possible mechanisms that explain how hierarchical structures can significantly modify the friction coefficients of a material, providing a means to achieve tunability.
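
A single-block reduction of the spring-block model conveys the basic stick-slip mechanism: the block stays pinned while the spring force is below the static threshold mu_s*N, then slides against dynamic friction mu_d*N. This is a simplified sketch with illustrative parameters, not the paper's full one-dimensional chain:

```python
def spring_block(steps=4000, dt=1e-3, k=50.0, m=1.0, v_drive=1.0,
                 mu_s=0.6, mu_d=0.4, normal_force=10.0):
    """One block pulled through a spring across a frictional substrate.
    Stick phase: the block is pinned while |spring force| <= mu_s * N;
    slip phase: it slides against dynamic friction mu_d * N.
    Returns the block position at each time step."""
    x, v = 0.0, 0.0
    xs = []
    for i in range(steps):
        f_spring = k * (v_drive * i * dt - x)
        if v == 0.0 and abs(f_spring) <= mu_s * normal_force:
            a = 0.0  # stuck: static friction balances the spring
        else:
            sign = 1.0 if (v > 0 or (v == 0 and f_spring > 0)) else -1.0
            a = (f_spring - sign * mu_d * normal_force) / m
        v += a * dt
        if v < 0.0:
            v = 0.0  # crude arrest: the block re-sticks instead of reversing
        x += v * dt
        xs.append(x)
    return xs

xs = spring_block()
# The block stays stuck until the spring force exceeds mu_s * N (here at
# t ≈ 0.12 s), then advances in stick-slip cycles behind the driving stage.
print(xs[100], xs[-1] > 0.5)  # -> 0.0 True
```

In the hierarchical version studied in the paper, many such blocks are coupled in a chain and the friction thresholds are patterned at several length scales, which is what shifts the effective macroscopic coefficients.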

We introduce HD (or “Hierarchical-Deep”) models, a new compositional learning architecture that integrates deep learning models with structured hierarchical Bayesian (HB) models. Specifically, we show how we can learn a hierarchical Dirichlet process (HDP) prior over the activities of the top-level features in a deep Boltzmann machine (DBM). This compound HDP-DBM model learns to learn novel concepts from very few training examples by learning low-level generic features, high-level features that capture correlations among low-level features, and a category hierarchy for sharing priors over the high-level features that are typical of different kinds of concepts. We present efficient learning and inference algorithms for the HDP-DBM model and show that it is able to learn new concepts from very few examples on the CIFAR-100 object recognition, handwritten character recognition, and human motion capture datasets.

Abstract: Mesoporous silica particles with hierarchically controllable pore structure have been prepared by aerosol-assisted assembly using cetyltrimethylammonium bromide (CTAB) and poly(propylene oxide) (PPO, H[OCH(CH3)CH2]nOH) as co-templates. Addition of the hydrophobic PPO significantly influences the delicate hydrophilic-hydrophobic balance in the well-studied CTAB-silicate co-assembling system, resulting in various mesostructures (such as hexagonal, lamellar, and hierarchical structures). The co-assembly of CTAB, silicate clusters, and a low-molecular-weight PPO (average M-n 425) results in a uniform lamellar structure, while the use of a high-molecular-weight PPO (average M-n 2000), which is more hydrophobic, leads to the formation of a hierarchical pore structure that contains meso-meso or meso-macro pore structure. The role of PPO additives in the mesostructure evolution in the CTAB...

Pairing video with natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model to reduce a visual scene to a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on standard datasets show that our model outperforms state-of-the-art techniques.

According to theoretical considerations, the multiplicity of hierarchical stellar systems can reach, depending on masses and orbital parameters, several hundred, while observational data confirm the existence of at most septuple (seven-component) systems. In this study, we cross-match the stellar systems of very high multiplicity (six and more components) in modern catalogues of visual double and multiple stars to find among them candidates for hierarchical systems. After cross-matching the catalogues of closer binaries (eclipsing, spectroscopic, etc.), some of their components were found to be binary/multiple themselves, which increases the system's degree of multiplicity. Optical pairs, known from the literature or filtered out by the authors, were flagged and excluded from the statistics. We compiled a list of hierarchical systems with potentially very high multiplicity that contains ten objects. Their multiplicity does not exceed 12, and we discuss a number of ways to explain the lack of extremely high multiplicity systems.

In this paper, the wettability properties of coatings with hierarchical surface structures and low surface energy were studied. Hierarchically structured coatings were produced by using hydrophobic fumed silica nanoparticles and polytetrafluoroethylene (PTFE) microparticles as additives in polyester (PES) and polyvinylidene fluoride (PVDF). These particles created hierarchical micro-nano structures on the paint surfaces and lowered, or supported, the already low surface energy of the paint. Two standard paint application techniques were employed, and the presented coatings are suitable for mass production and use on large surface areas. By regulating the particle concentrations, it was possible to modify the wettability properties gradually. Highly hydrophobic surfaces were achieved, with a highest contact angle of 165°. Dynamic contact angle measurements were carried out for a set of selected samples and low hysteresis was obtained. The produced coatings possessed long-lasting durability in air and in underwater conditions.

As a result of capillary forces, animal hairs, carbon nanotubes or nanowires in a periodically or randomly distributed array often assemble into hierarchical structures. In this paper, the energy method is adopted to analyse the capillary adhesion of microsized hairs, which are modelled as clamped microcantilevers wetted by liquids. The critical conditions for capillary adhesion of two hairs, three hairs or two bundles of hairs are derived in terms of Young's contact angle, the elastic modulus and the geometric sizes of the beams. Then, the hierarchical capillary adhesion of hairs is addressed. It is found that for multiple hairs or microcantilevers, the system tends to adopt a hierarchical structure as a result of the minimization of the total potential energy of the system. The level number of structural hierarchy increases with the number of hairs if they are sufficiently long. Additionally, we performed experiments to verify our theoretical solutions for the adhesion of microbeams.

Visualization techniques can be used to support operators' information navigation tasks in systems consisting of an enormous volume of information, such as the operating information display system and the computerized operating procedure system in the advanced control room of nuclear power plants. By offering an environment in which hierarchically structured information is easy to understand, these techniques can reduce the operators' supplementary navigation task load. As a result, operators can pay more attention to their primary tasks and ultimately improve cognitive task performance. In this report, an interface was designed and implemented using a hyperbolic visualization technique, which is expected to serve as a means of optimizing operators' information navigation tasks. 15 refs., 19 figs., 32 tabs. (Author)

This volume, developed in honor of Dr. Dundar F. Kocaoglu, aims to demonstrate the applications of the Hierarchical Decision Model (HDM) in different sectors and its capacity in decision analysis. It is comprised of essays from noted scholars, academics and researchers of engineering and technology management around the world. This book is organized into four parts: Technology Assessment, Strategic Planning, National Technology Planning and Decision Making Tools. Dr. Dundar F. Kocaoglu is one of the pioneers of multiple decision models using hierarchies, and creator of the HDM in decision analysis. HDM is a mission-oriented method for evaluation and/or selection among alternatives. A wide range of alternatives can be considered, including but not limited to, different technologies, projects, markets, jobs, products, cities to live in, houses to buy, apartments to rent, and schools to attend. Dr. Kocaoglu’s approach has been adopted for decision problems in many industrial sectors, including electronics rese...

Mnemonic processing engages multiple systems that cooperate and compete to support task performance. Exploring these systems' interaction requires memory tasks that produce rich data with multiple patterns of performance sensitive to different processing sub-components. Here we present a novel context-dependent relational memory paradigm designed to engage multiple learning and memory systems. In this task, participants learned unique face-room associations in two distinct contexts (i.e., different colored buildings). Faces occupied rooms as determined by an implicit gender-by-side rule structure (e.g., male faces on the left and female faces on the right) and all faces were seen in both contexts. In two experiments, we use behavioral and eye-tracking measures to investigate interactions among different memory representations in both younger and older adult populations; furthermore we link these representations to volumetric variations in hippocampus and ventromedial PFC among older adults. Overall, performance was very accurate. Successful face placement into a studied room systematically varied with hippocampal volume. Selecting the studied room in the wrong context was the most typical error. The proportion of these errors to correct responses positively correlated with ventromedial prefrontal volume. This novel task provides a powerful tool for investigating both the unique and interacting contributions of these systems in support of relational memory.

In this chapter the role of electron transfer in determining the behaviour of the ATP-synthesising enzyme in E. coli is analysed. It is concluded that the latter enzyme lacks control because of special properties of the electron transfer components. These properties range from the absence of a strong back pressure by the protonmotive force on the rate of electron transfer, to hierarchical regulation of the expression of the genes that encode the electron transfer proteins in response to changes in the bioenergetic properties of the cell. The discussion uses Hierarchical Control Analysis...

Validating security protocols is a well-known hard problem even in the simple setting of a single global network. But a real network often consists of, besides the publicly accessed part, several sub-networks, and thereby forms a hierarchical structure. In this paper we first present a process calculus capturing the characteristics of hierarchical networks and describe the behavior of protocols on such networks. We then develop a static analysis to automate the validation. Finally we demonstrate how the technique can benefit protocol development and the design of network systems by presenting a series...

Microgrids have become a hot topic driven by the dual pressures of environmental protection concerns and the energy crisis. In this paper, the challenge of distributed control for a modern electric grid incorporating clusters of residential microgrids is elaborated, and a hierarchical multi-agent system (MAS) is proposed as a solution. The issues of how to realize the hierarchical MAS and how to improve coordination and control strategies are discussed. Based on MATLAB and ZEUS platforms, bilateral switching between grid-connected mode and island mode is performed under the control of the proposed MAS to verify its effectiveness. (authors)

We propose a scheme for multiparty hierarchical quantum-information splitting (QIS) with a multipartite entangled state, where a boss distributes a secret quantum state to two grades of agents asymmetrically. The agents who belong to different grades have different authorities for recovering the boss's secret. Except for the boss's Bell-state measurement, no nonlocal operation is involved. The presented scheme is also shown to be secure against eavesdropping. Such a hierarchical QIS is expected to find useful applications in the field of modern multipartite quantum cryptography.

Initial delivery for mathematical analysis of the Omega Ontology. We provide an analysis of the hierarchical structure of a version of the Omega Ontology currently in use within the US Government. After providing an initial statistical analysis of the distribution of all link types in the ontology, we then provide a detailed order theoretical analysis of each of the four main hierarchical links present. This order theoretical analysis includes the distribution of components and their properties, their parent/child and multiple inheritance structure, and the distribution of their vertical ranks.

In this paper we study representations of the projective modular group induced from the Hecke congruence group of level 4 with Selberg's character. We show that the well known congruence properties of Selberg's character are equivalent to the congruence properties of the induced representations...

We give the reduction of the energy representation of the group of mappings from I = [0,1], S^1, R_+ or R into a compact semisimple Lie group G. For G = SU(2) we prove the factoriality of the representation, which is of type III in the case I = R.

Teachers and students commonly use various concrete representations during mathematical instruction. These representations can be utilized to help students understand mathematical concepts and processes, increase flexibility of thinking, facilitate problem solving, and reduce anxiety while doing mathematics. Unfortunately, the manner in which some…

Reviews different structures and techniques of knowledge representation: the structure of database records and files, data structures in computer programming, the syntactic and semantic structure of natural language, knowledge representation in artificial intelligence, and models of human memory. A prototype expert system that makes use of some of these…

The purpose of this thesis is to describe the possibilities for securing a company's position in the market through contracts for commercial representation, with a focus on identifying the legal and economic impact on a company that has contracted for exclusive representation.

In this article I examine three examples of philosophical theories of scientific representation with the aim of assessing which of these is a good candidate for a philosophical theory of scientific representation in science learning. The three candidate theories are Giere's intentional approach, Suárez's inferential approach and Lynch and…

Objects frequently have a hierarchical organization (tree-branch-leaf). How do we select the level to be attended? This has been explored with compound letters: a global letter built from local letters. One explanation, backed by much empirical support, is that attentional competition is biased towards certain spatial frequency (SF) bands across all locations and objects (an SF filter). This view assumes that the global and local letters are carried respectively by low and high SF bands, and that the bias can persist over time. Here we advocate a complementary view in which perception of hierarchical level is determined by how we represent each object-file. Although many properties bound to an object-file (i.e., position, color, even shape) can mutate without affecting its persistence over time, we posit that the same object-file cannot be used to store information from different hierarchical levels. Thus selection of level would be independent of locations but not of the way objects are represented at each moment. These views were contrasted via an attentional blink paradigm that presented letters within compound figures, but only one level at a time. Attending to two letters in rapid succession was easier if they were at the same level compared to different levels, as predicted by both accounts. However, only the object-file account was able to explain why it was easier to report two targets on the same moving object compared to the same targets on distinct objects. The interference of different masks on target recognition was also easier to predict with the object-file account than with the SF filter. The methods introduced here allowed us to investigate attention to hierarchical levels and to objects within the same empirical framework. The data suggest that SF information is used to structure the internal organization of object representations, a process best understood by integrating object-file theory with previous models of hierarchical perception.

The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, and peaks and valleys at multiple scales. Such a representation is shown to be stable -- the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement of previously developed algorithms, preventing artifacts in the reconstructed signal.
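
A toy single-scale analogue of the wavelet extrema idea, using Haar detail coefficients in place of Mallat's dyadic wavelet transform: local maxima of the coefficient magnitudes mark the edges of the signal. A full implementation would use several scales and a smoother wavelet; all names here are illustrative:

```python
def haar_detail(signal):
    """One-level Haar wavelet detail coefficients (scaled pairwise
    differences); large magnitudes indicate edges in the signal."""
    return [(signal[2 * i] - signal[2 * i + 1]) / 2 ** 0.5
            for i in range(len(signal) // 2)]

def extrema_positions(coeffs, thresh=1e-9):
    """Indices where the coefficient magnitude is a local maximum above
    thresh -- the wavelet-extrema (edge) locations at this scale."""
    return [i for i in range(1, len(coeffs) - 1)
            if abs(coeffs[i]) > thresh
            and abs(coeffs[i]) >= abs(coeffs[i - 1])
            and abs(coeffs[i]) >= abs(coeffs[i + 1])]

# A step edge between samples 6 and 7: the only extremum is at pair index 3.
x = [0.0] * 7 + [1.0] * 9
print(extrema_positions(haar_detail(x)))  # -> [3]
```

Repeating this across dyadic scales yields the multiscale edge set from which the stable reconstruction discussed above is computed.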

For the author of this article, the media's treatment of Islam has raised numerous polymorphous questions and debates. Reactivated by the great scares of current events, the issue, though an ancient one, calls many things into question. By way of introduction, the author tries to analyse the complex processes of elaboration and perception of the representations that have prevailed during the past century. Referring to the semantic decoding of the abundant colonial literature and iconography, the author strives to convey the extreme xenophobic tensions and the identity crystallisations associated with the current media orchestration of Islam, both in the West and the East. He then evokes the excesses of the media that lie at the origin of many amalgams knowingly maintained between Islam, Islamism and Islamic terrorism, underscoring their duplicity and their willingness to put themselves, consciously, in service to deceivers and directors of awareness, who are very active at the heart of the politico-media sphere. After levelling a severe accusation against the harmful drifts of the media, especially in times of crisis and war, the author concludes by asserting that these tools of communication, once they are freed of their masks and invective apparatuses, can be re-appropriated by new words and by a true communication between peoples and cultures.

The entire data base for the dependence of the nonstoichiometry, x, on temperature and chemical potential of oxygen (oxygen potential) was retrieved from the literature and represented. This data base was interpreted by least-squares analysis using equations derived from classical thermodynamic theory for the solid solution of a solute in a solvent. For hyperstoichiometric oxide at oxygen potentials more positive than -266700 + 16.5T kJ/mol, the data were best represented by a [UO2]-[U3O7] solution. For O/U ratios above 2 and oxygen potentials below this boundary, a [UO2]-[U2O4.5] solution represented the data. The data were represented by a [UO2]-[U1/3] solution. The resulting equations represent the experimental ln(pO2) - ln(x) behavior and can be used in thermodynamic calculations to predict phase boundary compositions consistent with the literature. Collectively, the present analysis permits a mathematical representation of the behavior of the total data base

The t-plot method is a well-known technique which allows determining the micro- and/or mesoporous volumes and the specific surface area of a sample by comparison with a reference adsorption isotherm of a nonporous material having the same surface chemistry. In this paper, the validity of the t-plot method is discussed in the case of hierarchical porous materials exhibiting both micro- and mesoporosities. Different hierarchical zeolites with MCM-41 type ordered mesoporosity are prepared using pseudomorphic transformation. For comparison, we also consider simple mechanical mixtures of microporous and mesoporous materials. We first show an intrinsic failure of the t-plot method: this method does not account for the fact that, for a given surface chemistry and pressure, the thickness of the film adsorbed in micropores or small mesopores differs from that adsorbed on a nonporous reference surface. The ability of the t-plot method to estimate the micro- and mesoporous volumes of hierarchical samples is then discussed, and an abacus is given to correct the microporous volume underestimated by the t-plot method.
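As an illustration of the classical analysis being critiqued, here is a minimal t-plot sketch using the Harkins–Jura thickness equation; the fit range and the synthetic isotherm are assumptions for illustration only, not the paper's data.

```python
import numpy as np

def harkins_jura_thickness(p_rel):
    """Statistical film thickness t (angstrom) from relative pressure P/P0."""
    return np.sqrt(13.99 / (0.034 - np.log10(p_rel)))

def t_plot(p_rel, v_ads, fit_range=(3.5, 5.0)):
    """Classical t-plot: linear fit of adsorbed volume vs. film thickness t.
    Slope tracks the external surface area; intercept estimates the
    micropore volume (the quantity the paper shows can be underestimated)."""
    t = harkins_jura_thickness(p_rel)
    mask = (t >= fit_range[0]) & (t <= fit_range[1])
    slope, intercept = np.polyfit(t[mask], v_ads[mask], 1)
    return slope, intercept

# Synthetic isotherm: filled micropores (0.10 cm^3/g) plus a film that
# grows linearly with t on the external surface.
p = np.linspace(0.05, 0.6, 50)
v = 0.10 + 0.02 * harkins_jura_thickness(p)
slope, v_micro = t_plot(p, v)
print(round(v_micro, 3))  # recovers the 0.10 micropore volume
```

On idealized data like this the intercept recovers the micropore volume exactly; the paper's point is that for real hierarchical micro/mesoporous materials the film-thickness assumption breaks down and a correction is needed.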

We describe an architecture for the hierarchical distribution of multimedia broadcasts in the future mobile Internet. The architecture supports network as well as application-layer mobility solutions, and uses stream control functions that are influenced by available network resources, user-defined

In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the extra amount of variance accounted for in a dependent variable by a specific independent variable is the main
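The extra variance accounted for at each entry step is simply the difference of R² between nested models. A small numpy sketch (variable names and synthetic data are illustrative):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)            # entered first (e.g. a control variable)
x2 = rng.normal(size=n)            # entered second (variable of interest)
y = 1.0 * x1 + 0.5 * x2 + rng.normal(scale=0.5, size=n)

r2_step1 = r_squared(x1[:, None], y)
r2_step2 = r_squared(np.column_stack([x1, x2]), y)
delta_r2 = r2_step2 - r2_step1     # extra variance explained by x2
print(delta_r2 > 0)
```

Here `delta_r2` is the quantity a hierarchical regression tests at step 2: the increment in explained variance after the prespecified earlier variables are already in the equation.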

Most of the complex systems seen in real life also have associated dynamics [10], and the ... another example, this time a hierarchical structure, viz., the Cayley tree with b ..... natural constraints operating on networks in real life, such as the ...

This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to incorporating priors for the hidden variables. Two algorithms are presented for inference, one based...

This paper presents a distributed hierarchical control framework to ensure reliable operation of dc microgrid (MG) clusters. In this hierarchy, primary control is used to regulate the common bus voltage inside each MG locally. An adaptive droop method is proposed for this level which determines...
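A conventional primary-level droop law subtracts a virtual resistance drop from the voltage reference. The adaptive variant sketched below scales that resistance by a state-of-charge factor; this particular scaling is an assumption for illustration, not the paper's actual adaptive law.

```python
def droop_voltage(v_ref, r_droop, i_out):
    """Conventional V-I droop: output voltage sags with load current."""
    return v_ref - r_droop * i_out

def adaptive_droop(v_ref, r0, i_out, soc, k=0.5):
    """Hypothetical adaptive droop: virtual resistance grows as the state
    of charge (soc in [0, 1]) falls, so depleted units shed load.
    The scaling law here is an illustrative assumption."""
    r = r0 * (1 + k * (1 - soc))
    return v_ref - r * i_out

v = droop_voltage(48.0, 0.2, 10.0)
print(v)  # 46.0
```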

as nanoparticles in the binder, or polycrystalline, aggregate-like reinforcements, also at several scale levels). Such materials can ensure better productivity, efficiency, and lower costs of drilling, cutting, grinding, and other technological processes. This article reviews the main groups of hierarchical...

A two-stage hierarchical classification scheme of psoriasis lesion images is proposed. These images are basically composed of three classes: normal skin, lesion and background. The scheme combines conventional tools to separate the skin from the background in the first stage, and the lesion from...

A new method to pre-segment images by means of a hierarchical description is proposed. This description is obtained from an investigation of the deep structure of a scale space image – the input image and the Gaussian filtered ones simultaneously. We concentrate on scale space critical points –

In this work, we propose a hierarchical extension of the polygonality index as the means to characterize geographical planar networks. By considering successive neighborhoods around each node, it is possible to obtain more complete information about the spatial order of the network at progressive spatial scales. The potential of the methodology is illustrated with respect to synthetic and real geographical networks

The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking…
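A power-function learning curve of the kind compared in such studies can be fit in closed form in log–log space. A minimal sketch with noiseless synthetic data (the parameter values are assumptions for illustration):

```python
import numpy as np

def fit_power_law(trial, rt):
    """Fit RT = a * trial^(-b) by least squares in log-log space."""
    b, log_a = np.polyfit(np.log(trial), np.log(rt), 1)
    return np.exp(log_a), -b

trials = np.arange(1, 51)
rt = 5.0 * trials ** -0.4           # noiseless power-law learning curve
a, b = fit_power_law(trials, rt)
print(round(a, 2), round(b, 2))     # -> 5.0 0.4
```

The Bayesian hierarchical approach in the study goes further by estimating such parameters per subtask and per individual with shared priors, but the individual-level curve being constrained has this simple functional form.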

This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
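The core idea, storing a pdf per voxel neighborhood instead of a single averaged value and applying the transfer function to the pdf, can be illustrated with a toy dense-histogram version (the paper's actual representation is a sparse 4D Gaussian mixture; block size and bin count here are arbitrary choices):

```python
import numpy as np

def block_pdfs(volume, block=4, bins=16, vrange=(0.0, 1.0)):
    """Downsample a 3D volume by storing, per block, a normalized
    histogram (pdf) of voxel intensities instead of a single mean."""
    nz, ny, nx = (s // block for s in volume.shape)
    pdfs = np.empty((nz, ny, nx, bins))
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                cube = volume[z*block:(z+1)*block,
                              y*block:(y+1)*block,
                              x*block:(x+1)*block]
                h, _ = np.histogram(cube, bins=bins, range=vrange)
                pdfs[z, y, x] = h / h.sum()
    return pdfs

def apply_transfer(pdfs, tf):
    """Classify the pdf representation: expected transfer-function value
    E[tf(I)] per block, with tf sampled at the intensity bins."""
    return pdfs @ tf

rng = np.random.default_rng(1)
vol = rng.random((8, 8, 8))
pdfs = block_pdfs(vol, block=4, bins=16)
tf = np.linspace(0.0, 1.0, 16)       # simple ramp transfer function
low_res = apply_transfer(pdfs, tf)   # shape (2, 2, 2)
print(low_res.shape)  # -> (2, 2, 2)
```

Because the transfer function is applied to the distribution rather than to a pre-averaged value, changing `tf` at run time stays consistent across resolution levels, which is the property the paper exploits.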

Graphical abstract: Hierarchical CuO microspheres with hollow interiors were formed through self-wrapping of a single layer of radially oriented CuO nanorods, and these microspheres showed excellent cycle performance and enhanced lithium storage capacity. Research highlights: → Hierarchical CuO hollow microspheres were prepared by a hydrothermal method. → The CuO hollow microspheres were assembled from radially oriented nanorods. → The growth mechanism was proposed to proceed via self-assembly and Ostwald's ripening. → The microspheres showed good cycle performance and enhanced lithium storage capacity. → Hierarchical microstructures with hollow interiors promote electrochemical performance. - Abstract: In this work, hierarchical CuO hollow microspheres were hydrothermally prepared without use of any surfactants or templates. By controlling the formation reaction conditions and monitoring the relevant reaction processes using time-dependent experiments, it is demonstrated that hierarchical CuO microspheres with hollow interiors were formed through self-wrapping of a single layer of radially oriented CuO nanorods, and that hierarchical spheres could be tuned to show different morphologies and microstructures. As a consequence, the formation mechanism was proposed to proceed via a combined process of self-assembly and Ostwald's ripening. Further, these hollow microspheres were investigated as the anode material in lithium ion batteries, and showed excellent cycle performance and enhanced lithium storage capacity, most likely because of the synergetic effect of small diffusion lengths in the nanorod building blocks and proper void space that buffers the volume expansion. The strategy reported in this work is reproducible, which may help to significantly improve the electrochemical performance of transition metal oxide-based anode materials via designing the hollow structures necessary for developing lithium ion batteries and the relevant

Full Text Available The idea that complex systems have a hierarchical modular organization originates in the early 1960s and has recently attracted fresh support from quantitative studies of large scale, real-life networks. Here we investigate the hierarchical modular (or “modules-within-modules”) decomposition of human brain functional networks, measured using functional magnetic resonance imaging (fMRI) in 18 healthy volunteers under no-task or resting conditions. We used a customized template to extract networks with more than 1800 regional nodes, and we applied a fast algorithm to identify nested modular structure at several hierarchical levels. We used mutual information, 0 < I < 1, to estimate the similarity of community structure of networks in different subjects, and to identify the individual network that is most representative of the group. Results show that human brain functional networks have a hierarchical modular organization with a fair degree of similarity between subjects, I=0.63. The five largest modules at the highest level of the hierarchy were medial occipital, lateral occipital, central, parieto-frontal and fronto-temporal systems; occipital modules demonstrated less sub-modular organization than modules comprising regions of multimodal association cortex. Connector nodes and hubs, with a key role in inter-modular connectivity, were also concentrated in association cortical areas. We conclude that methods are available for hierarchical modular decomposition of large numbers of high resolution brain functional networks using computationally expedient algorithms. This could enable future investigations of Simon's original hypothesis that hierarchy or near-decomposability of physical symbol systems is a critical design feature for their fast adaptivity to changing environmental conditions.
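The partition-similarity measure 0 < I < 1 used here can be computed as a normalized mutual information between two community assignments. A minimal sketch (the geometric-mean normalization is one common choice and an assumption on our part):

```python
import math
from collections import Counter

def normalized_mutual_info(part_a, part_b):
    """NMI between two community partitions given as label lists."""
    n = len(part_a)
    pa, pb = Counter(part_a), Counter(part_b)
    joint = Counter(zip(part_a, part_b))
    mi = sum(c / n * math.log((c / n) / (pa[a] / n * pb[b] / n))
             for (a, b), c in joint.items())
    ha = -sum(c / n * math.log(c / n) for c in pa.values())
    hb = -sum(c / n * math.log(c / n) for c in pb.values())
    return mi / math.sqrt(ha * hb) if ha and hb else 1.0

# Identical community structure up to relabeling scores 1.0.
same = normalized_mutual_info([0, 0, 1, 1], [1, 1, 0, 0])
print(round(same, 3))  # -> 1.0
```

An I of 0.63 between subjects, as reported, thus indicates substantial but imperfect agreement between individual modular decompositions.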

In recent work, we demonstrated that it is possible to obtain approximate representations of high-dimensional free energy surfaces with variationally enhanced sampling (Shaffer, P.; Valsson, O.; Parrinello, M. Proc. Natl. Acad. Sci. 2016, 113, 17). The high-dimensional spaces considered in that work were the set of backbone dihedral angles of a small peptide, Chignolin, and the high-dimensional free energy surface was approximated as the sum of many two-dimensional terms plus an additional term which represents an initial estimate. In this paper, we build on that work and demonstrate that we can calculate high-dimensional free energy surfaces of very high accuracy by incorporating additional terms. The additional terms apply to a set of collective variables which are more coarse than the base set of collective variables. In this way, it is possible to build hierarchical free energy surfaces, which are composed of terms that act on different length scales. We test the accuracy of these free energy landscapes for the proteins Chignolin and Trp-cage by constructing simple coarse-grained models and comparing results from the coarse-grained model to results from atomistic simulations. The approach described in this paper is ideally suited for problems in which the free energy surface has important features on different length scales or in which there is some natural hierarchy.
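The hierarchical decomposition described above can be written schematically as follows; the symbols are our own shorthand, not the paper's notation:

```latex
% Base estimate F_0, pairwise terms over the fine collective variables s_i,
% plus correction terms G_k acting on coarser collective variables c_k(s).
F(\mathbf{s}) \;\approx\; F_0(\mathbf{s})
  \;+\; \sum_{i<j} F_{ij}(s_i, s_j)
  \;+\; \sum_{k} G_k\bigl(c_k(\mathbf{s})\bigr)
```

The point of the hierarchy is that each sum acts on a different length scale: the pairwise terms resolve local structure in the fine variables, while the coarse-variable terms correct the surface at larger scales.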

For the rendering of detailed virtual environments, trade-offs have to be made between image quality and rendering time. An immersive experience of virtual reality always demands high frame-rates with the best reachable image quality. Continuous Level of Detail (cLoD) triangle-meshes provide a continuous spectrum of detail for a triangle mesh that can be used to create view-dependent approximations of the environment in real-time. This enables rendering with a constant number of triangles and thus with constant frame-rates. Normally the construction of such cLoD mesh representations leads to the loss of all texture information of the original mesh. To overcome this problem, a parameter domain can be created in order to map the surface properties (colour, texture, normal) to it. This parameter domain can be used to map the surface properties back to arbitrary approximations of the original mesh. The parameter domain is often a simplified version of the mesh to be parameterised. This limits the reachable simplification to the domain mesh, which has to map the surface of the original mesh with the least possible stretch. In this paper, a hierarchical domain mesh is presented that scales between very coarse domain meshes and good property-mapping.

Full Text Available Hierarchical reinforcement learning methods offer a powerful means of planning flexible behavior in complicated domains. However, learning an appropriate hierarchical decomposition of a domain into subtasks remains a substantial challenge. We...

Bioactive glasses with hierarchical nanoporosity and structures have been widely used for the immobilization of enzymes. Owing to meticulous design and ingenious hierarchical nanostructuration of porosities from yeast cell biotemplates, hierarchically nanostructured porous bioactive glasses can...... and products of catalytic reactions can freely diffuse through open mesopores (2–40 nm). The formation mechanism of hierarchically structured porous bioactive glasses, the immobilization mechanism of the enzyme and the catalysis mechanism of the immobilized enzyme are then discussed. The novel nanostructure...

Full Text Available This work describes an exploratory study, the first of the four phases of a more inclusive research project, which aims at understanding how to promote, in a group of Mathematics teachers, a representational evolution leading to a practice that allows a meaningful learning of Mathematics. The methodology of this study is qualitative. Data gathering was based on questioning; all the subjects of the sample (n=48) carried out a projective task (a hierarchical evocation test) and answered a written individual questionnaire. Data analysis was based on a set of categories previously defined. The main purpose of this research was to identify, characterize and describe the representations of Mathematics and its teaching and learning, in a group of 48 subjects from different social groups, in order to get indicators for the construction of the instruments to be used in the next phases of the research. The main results of this study are the following: (1) we were able to identify and characterize different representations of the teaching and learning of Mathematics, with respect to its epistemological, pedagogical, emotional and sociocultural dimensions; (2) we were also able to identify limitations, difficulties and items to be included or rephrased in the instruments used.

A major problem with MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the data element, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Data-element fusion has problems with coregistration. Attempts to fuse information using the features of segmented data rely on a presumed similarity between the segmentation characteristics of each data stream. Symbolic-level fusion requires too much advance processing (including object identification) to be useful. MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Data Structure (HDS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HDS is an intermediate representation between the raw or smoothed data stream and symbolic interpretation of the data. It represents the structural organization of the data. Fused HDSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

Vietnamese is very different from English and little research has been done on Vietnamese document classification, or indeed, on any kind of Vietnamese language processing, and only a few small corpora are available for research. We created a large Vietnamese text corpus with about 18000 documents, and manually classified them based on different criteria such as topics and styles, giving several classification tasks of different difficulty levels. This paper introduces a new syllable-based document representation at the morphological level of the language for efficient classification. We tested the representation on our corpus with different classification tasks using six classification algorithms and two feature selection techniques. Our experiments show that the new representation is effective for Vietnamese categorization, and suggest that best performance can be achieved using syllable-pair document representation, an SVM with a polynomial kernel as the learning algorithm, and using Information gain and an external dictionary for feature selection.
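Because Vietnamese is written as space-separated syllables, the syllable-pair representation amounts to counting adjacent-syllable bigrams. A minimal feature-extraction sketch (the sample sentence and downstream classifier are omitted assumptions; the paper pairs such features with an SVM):

```python
from collections import Counter

def syllable_pairs(text):
    """Syllable-pair (bigram) features: Vietnamese compound words such as
    'máy tính' (computer) span two syllables, so pairs capture them."""
    syl = text.lower().split()
    return Counter(zip(syl, syl[1:]))

doc = "sinh viên học máy tính và máy tính học sinh viên"
feats = syllable_pairs(doc)
print(feats[("máy", "tính")])  # -> 2
```

These sparse counts (optionally tf-idf weighted and pruned by information gain, as in the paper) form the document vectors fed to the learning algorithm.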

This paper examines forms of self-representation on YouTube with specific focus on Vlogs (Video blogs). The analytical scope of the paper is on how User-generated Content on YouTube initiates a certain kind of audiovisual representation and a particular interpretation of reality that can...... be distinguished within Vlogs. This will be analysed through selected case studies taken from a representative sample of empirically based observations of YouTube videos. The analysis includes a focus on how certain forms of representation can be identified as representations of the self (Turkle 1995, Scannell 1996, Walker 2005) and further how these forms must be comprehended within a context of technological constraints, institutional structures and social as well as economic practices on YouTube (Burgess and Green 2009, Van Dijck 2009). It is argued that these different contexts play a vital part...

U.S. Department of Health & Human Services — The SKR Project was initiated at NLM in order to develop programs to provide usable semantic representation of biomedical free text by building on resources...

Problems on the theory of group representations finding application in constructing the quantum variant of the inverse scattering problem are discussed. The multicomponent nonlinear Schrödinger equation is considered as a main example of nonlinear evolution equations (NEE)

This review article surveys recent work on computer representation of molecular surfaces. Several different algorithms are discussed for producing vector or raster drawings of space-filling models formed as the union of spheres. Other smoother surfaces are also considered

In this position paper we propose a consistent and unifying view to all those basic knowledge representation models that are based on the existence of two somehow opposite fuzzy concepts. A number of these basic models can be found in fuzzy logic and multi-valued logic literature. Here...... of the relationships between several existing knowledge representation formalisms, providing a basis from which more expressive models can be later developed....

We consider a general framework for integrable hierarchies in Lax form and derive certain universal equations from which 'functional representations' of particular hierarchies (such as KP, discrete KP, mKP, AKNS), i.e. formulations in terms of functional equations, are systematically and quite easily obtained. The formalism genuinely applies to hierarchies where the dependent variables live in a noncommutative (typically matrix) algebra. The obtained functional representations can be understood as 'noncommutative' analogues of 'Fay identities' for the KP hierarchy

This thesis includes three parts. The overarching theme is how to analyze structured hierarchical data, with applications to astronomy and sociology. The first part discusses how expectation propagation can be used to parallelize the computation when fitting big hierarchical Bayesian models. This methodology is then used to fit a novel, nonlinear mixture model to ultraviolet radiation from various regions of the observable universe. The second part discusses how the Stan probabilistic programming language can be used to numerically integrate terms in a hierarchical Bayesian model. This technique is demonstrated on supernovae data to significantly speed up convergence to the posterior distribution compared to a previous study that used a Gibbs-type sampler. The third part builds a formal latent kernel representation for aggregate relational data as a way to more robustly estimate the mixing characteristics of agents in a network. In particular, the framework is applied to sociology surveys to estimate, as a function of ego age, the age and sex composition of the personal networks of individuals in the United States.

In this work we study a growing network model with chaotic dynamical units that evolves using a local adaptive rewiring algorithm. Using numerical simulations we show that the model allows for the emergence of hierarchical networks. First, we show that the networks that emerge with the algorithm present a wide degree distribution that can be fitted by a power law function, and thus are scale-free networks. Using the LaNet-vi visualization tool we present a graphical representation that reveals a central core formed only by hubs, and also show the presence of a preferential attachment mechanism. In order to present a quantitative analysis of the hierarchical structure we analyze the clustering coefficient. In particular, we show that as the network grows the clustering becomes independent of system size, and also presents a power law decay as a function of the degree. Finally, we compare our results with a similar version of the model that has continuous non-linear phase oscillators as dynamical units. The results show that local interactions play a fundamental role in the emergence of hierarchical networks.

The elaborate performances characterizing natural materials result from functional hierarchical constructions at scales ranging from nanometres to millimetres, each construction allowing the material to fit the physical or chemical demands occurring at these different levels. Hierarchically structured materials start to demonstrate a high input in numerous promising applied domains such as sensors, catalysis, optics, fuel cells, smart biologic and cosmetic vectors. In particular, hierarchical hybrid materials permit the accommodation of a maximum of elementary functions in a small volume, thereby optimizing complementary possibilities and properties between inorganic and organic components. The reported strategies combine sol-gel chemistry, self-assembly routes using templates that tune the material's architecture and texture with the use of larger inorganic, organic or biological templates such as latex, organogelator-derived fibres, nanolithographic techniques or controlled phase separation. We propose an approach to forming transparent hierarchical hybrid functionalized membranes using in situ generation of mesostructured hybrid phases inside a non-porogenic hydrophobic polymeric host matrix. We demonstrate that the control of the multiple affinities existing between organic and inorganic components allows us to design the length-scale partitioning of hybrid nanomaterials with tuned functionalities and desirable size organization from ångström to centimetre. After functionalization of the mesoporous hybrid silica component, the resulting membranes have good ionic conductivity offering interesting perspectives for the design of solid electrolytes, fuel cells and other ion-transport microdevices.

In this work we combine hierarchical matrix techniques (Hackbusch, 1999) and domain decomposition methods to obtain fast and efficient algorithms for the solution of multiscale problems. This combination results in the hierarchical domain decomposition (HDD) method, which can be applied to the solution of multiscale problems. Multiscale problems are problems that require the use of different length scales. Using only the finest scale is very expensive, if not impossible, in computational time and memory. Domain decomposition methods decompose the complete problem into smaller systems of equations corresponding to boundary value problems in subdomains. Then fast solvers can be applied to each subdomain. Subproblems in subdomains are independent, much smaller and require less computational resources than the initial problem.
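The compression step at the heart of hierarchical matrix techniques is low-rank approximation of smooth off-diagonal blocks. A self-contained sketch using truncated SVD on a toy far-field kernel block (the kernel, intervals, and rank are illustrative assumptions):

```python
import numpy as np

def low_rank_block(block, rank):
    """Truncated SVD factorization A ~ U @ Vt: the core compression step
    behind hierarchical (H-) matrices for smooth off-diagonal blocks."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]

# Off-diagonal interaction block of the 1D kernel 1/(1 + |x - y|),
# with well-separated source and target intervals (the far field).
x = np.linspace(0.0, 1.0, 40)
y = np.linspace(2.0, 3.0, 40)
A = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))
U, Vt = low_rank_block(A, rank=4)
err = np.linalg.norm(A - U @ Vt) / np.linalg.norm(A)
print(err < 1e-2)  # smooth far-field blocks compress very well
```

Storing `U` and `Vt` (40×4 and 4×40) instead of the full 40×40 block is what makes H-matrix solvers on each subdomain fast, and the separation of subdomains produced by domain decomposition supplies exactly such smooth far-field blocks.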

This study examines how translators in a hierarchical context approach the translation of management practices. Although current translation theory and research emphasize the importance of contextual factors in translation processes, little research has investigated how strongly hierarchical structures affect translators’ approaches taken towards management ideas. This paper reports the findings from a longitudinal case study of the translation of Leadership Pipeline in a Danish fire department and how the translators’ approach changed over time from a modifying to a reproducing mode. The study finds that translation does not necessarily imply transformation of the management idea, pointing instead to aspects of exact imitation and copying of an ”original” idea. It also highlights how translation is likely to involve multiple and successive translation modes and, furthermore, that strongly...

The distribution of galaxies has a hierarchical structure with power-law correlations. This is usually thought to arise from gravity alone acting on an originally uniform distribution. If, however, the original process of galaxy formation occurs through the stimulated birth of one galaxy due to a nearby recently formed galaxy, and if this process occurs near its percolation threshold, then a hierarchical structure with power-law correlations arises at the time of galaxy formation. If subsequent gravitational evolution within an expanding cosmology is such as to retain power-law correlations, the initial r^-1 dropoff can steepen to the observed r^-1.8. The distribution of galaxies obtained by this process produces clustering and voids, as observed. 23 references

Many organisms incorporate inorganic solids in their tissues to enhance their functional, primarily mechanical, properties. These mineralized tissues, also called biominerals, are unique organo-mineral nanocomposites, organized at several hierarchical levels, from nano- to macroscale. Unlike man-made composite materials, which often are simple physical blends of their components, the organic and inorganic phases in biominerals interface at the molecular level. Although these tissues are made of relatively weak components at ambient conditions, their hierarchical structural organization and intimate interactions between different elements lead to superior mechanical properties. Understanding basic principles of formation, structure and functional properties of these tissues might lead to novel bioinspired strategies for material design and better treatments for diseases of the mineralized tissues. This review focuses on general principles of structural organization, formation and functional properties of biominerals, using bone tissue as an example. PMID:20827739

We study the influence of noise on information transmission in the form of packages shipped between nodes of hierarchical networks. Numerical simulations are performed for artificial tree networks, scale-free Ravasz-Barabási networks, as well as for a real network formed by email addresses of former Enron employees. Two types of noise are considered. One is related to packet dynamics and is responsible for a random part of packets paths. The second one originates from random changes in initial network topology. We find that the information transfer can be enhanced by the noise. The system possesses optimal performance when both kinds of noise are tuned to specific values; this corresponds to the Stochastic Resonance phenomenon. There is a non-trivial synergy present for both noisy components. We also found that hierarchical networks built of nodes of various degrees are more efficient in information transfer than trees with a fixed branching factor.

A quantum Ising chain with both the exchange couplings and the transverse fields arranged in a hierarchical way is considered. Exact analytical results for the critical line and energy gap are obtained. It is shown that when R1 ≠ R2, where R1 and R2 are the hierarchical parameters for the exchange couplings and the transverse fields, respectively, the system undergoes a phase transition in a different universality class from the pure quantum Ising chain with R1 = R2 = 1. On the other hand, when R1 = R2 = R, there exists a critical value Rc dependent on the furcating number of the hierarchy. For R > Rc, the system is shown to exhibit an Ising-like critical point with the same critical behaviour as in the pure case, while for R < Rc the system belongs to another universality class. (author). 19 refs, 2 figs

Full Text Available In model based development, embedded systems are modeled using a mix of dataflow formalism, which captures the flow of computation, and hierarchical state machines, which capture the modal behavior of the system. For safety analysis, existing approaches rely on a compilation scheme that transforms the original model (dataflow and state machines) into a pure dataflow formalism. Such compilation often results in loss of important structural information that captures the modal behaviour of the system. In previous work we developed a compilation technique from a dataflow formalism into modular Horn clauses. In this paper, we present a novel technique that faithfully compiles hierarchical state machines into modular Horn clauses. Our compilation technique preserves the structural and modal behavior of the system, making the safety analysis of such models more tractable.
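To give a flavor of the target representation, here is a toy generator that turns a two-state machine into Horn clauses over an invariant predicate, printed in an SMT-LIB-like syntax. The predicate names, syntax, and state machine are invented for illustration; the paper's actual encoding is modular and handles hierarchy.

```python
# Hypothetical two-state toggle machine: (state, event) -> next state.
transitions = {("Off", "press"): "On", ("On", "press"): "Off"}

def to_horn_clauses(transitions, init="Off"):
    """Emit one fact for the initial state and one implication
    (Horn clause) per transition, preserving the machine's structure."""
    clauses = [f"(Inv {init})"]  # initial state satisfies the invariant
    for (state, event), nxt in transitions.items():
        clauses.append(f"(=> (and (Inv {state}) (Event {event})) (Inv {nxt}))")
    return clauses

for c in to_horn_clauses(transitions):
    print(c)
```

A Horn-clause solver can then check safety properties (e.g. that a bad state is never reachable) directly on this representation, which is why preserving the modal structure in the clauses matters.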

Full Text Available Nowadays, large-scale wind power farms (WPFs) bring new challenges for both electric systems and communication networks. Communication networks are an essential part of WPFs because they provide real-time control and monitoring of wind turbines from a remote location (local control center). However, different wind turbine applications have different requirements in terms of data volume, latency, bandwidth, QoS, etc. This paper proposes a hierarchical communication network architecture that consists of a turbine area network (TAN), farm area network (FAN), and control area network (CAN) for offshore WPFs. The two types of offshore WPFs studied are small-scale WPFs close to the grid and medium-scale WPFs far from the grid. The wind turbines are modelled based on the logical node (LN) concepts of the IEC 61400-25 standard. To keep pace with current developments in wind turbine technology, the network design takes into account the extension of the LNs for both the wind turbine foundation and meteorological measurements. The proposed hierarchical communication network is based on Switched Ethernet. Servers at the control center are used to store and process the data received from the WPF. The network architecture is modelled and evaluated via OPNET. We investigated the end-to-end (ETE) delay for different WPF applications. The results are validated by comparing the amount of generated sensing data with that of received traffic at servers. The network performance is evaluated, analyzed and discussed in view of ETE delay for different link bandwidths.

To satisfy a broad range of control-analysis and data-acquisition requirements for Shiva, a hierarchical, computer-based, modular, distributed control system was designed. This system handles more than 3000 control elements and 1000 data-acquisition units in a severe high-voltage, high-current environment. The design provides a flexible and reliable configuration to meet the development milestones for Shiva within critical time limits.

This paper reports on the preliminary results obtained from the hierarchical glitch classification pipeline on LIGO data. The pipeline, which has been under construction for the past year, is now complete and end-to-end tested. It is ready to generate analysis results on a daily basis. The details of the pipeline, the classification algorithms employed, and the results obtained with one day's analysis of the gravitational wave channel and several auxiliary and environmental channels from all three LIGO detectors are discussed.

Abstract. This paper examines the effectiveness of internet advertising using a hierarchical model. Today the Internet is an important channel in marketing and advertising, a position that can be attributed to its ability to reduce costs and to give people access to online services [1]. Advertisers can also easily reach a multitude of users and communicate with them at low cost [9]. On the other hand, compared to traditional advertising, interne...

This paper develops a hierarchical agency model of deposit insurance. The main purpose is to undertake a game-theoretic analysis of the consequences of deposit insurance schemes and their effects on monitoring incentives for banks. Using this simple framework, we analyze both risk-independent and risk-dependent premium schemes along with reserve requirement constraints. The results provide policymakers with not only a better understanding of the effects of deposit insurance on welfare and th...

In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks of hierarchical visual recognition more effectively. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which provides a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. The HD-MTL algorithm integrates two discriminative regularization terms to control inter-level error propagation, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results demonstrate that the HD-MTL algorithm achieves very competitive accuracy rates for large-scale visual recognition.

Treating the full protein structure is often neither computationally nor physically possible. Instead, one is forced to consider various reduced models capturing the properties of interest. Previous work has used tubular neighborhoods of the C-alpha backbone. However, assigning a unique radius might not correctly capture volume exclusion, which is of crucial importance when trying to understand a protein's 3D structure. We propose a new reduced model treating the protein as a non-uniform tube with a radius reflecting the positions of atoms. The tube representation is well suited to X-ray crystallographic resolution (~3 Å), while a varying radius accounts for the different sizes of side chains. Such a non-uniform tube better captures the protein geometry and has numerous applications in structural/computational biology, from the classification of protein structures to sequence-structure prediction.

It is thought that the gravitational clustering of galaxies in the universe may approach a scale-invariant, hierarchical form in the small separation, large-clustering regime. Past attempts to solve the Born-Bogoliubov-Green-Kirkwood-Yvon (BBGKY) hierarchy in this regime have assumed a certain separable hierarchical form for the higher order correlation functions of galaxies in phase space. It is shown here that such separable solutions to the BBGKY equations must satisfy the condition that the clustered component of the solution has cluster-cluster correlations equal to galaxy-galaxy correlations to all orders. The solutions also admit the presence of an arbitrary unclustered component, which plays no dynamical role in the large-clustering regime. These results are a particular property of the specific separable model assumed for the correlation functions in phase space, not an intrinsic property of spatially hierarchical solutions to the BBGKY hierarchy. The observed distribution of galaxies does not satisfy the required conditions. The disagreement between theory and observation may be traced, at least in part, to initial conditions which, if Gaussian, already have cluster correlations greater than galaxy correlations.

Full Text Available Eukaryotic life contains hierarchical vesicular architectures (i.e., organelles) that are crucial for material production and trafficking, information storage and access, as well as energy production. In order to perform specific tasks, these compartments differ from each other in their membrane composition and their internal cargo, and also differ from the cell membrane and the cytosol. Man-made structures that reproduce this nested architecture not only offer a deeper understanding of the functionalities and evolution of organelle-bearing eukaryotic life but also allow the engineering of novel biomimetic technologies. Here, we show that the newly developed vesicle-in-water-in-oil emulsion transfer preparation technique results in giant unilamellar vesicles internally compartmentalized by unilamellar vesicles of different membrane composition and internal cargo, i.e., hierarchical unilamellar vesicles of controlled compositional heterogeneity. The compartmentalized giant unilamellar vesicles were subsequently isolated by a separation step exploiting the heterogeneity of the membrane composition and the encapsulated cargo. Owing to the controlled, efficient, and technically straightforward character of the new preparation technique, this study allows the hierarchical fabrication of compartmentalized giant unilamellar vesicles of controlled compositional heterogeneity and will ease the development of eukaryotic cell mimics that resemble their natural templates, as well as the fabrication of novel multi-agent drug delivery systems for combination therapies and complex artificial microreactors.

The lithium-ion battery (LIB) is one of the most promising power sources for electric vehicles, including solely battery-powered vehicles, plug-in hybrid electric vehicles, and hybrid electric vehicles. With the increasing demand for devices of high energy density (>500 Wh kg⁻¹), new energy storage systems, such as lithium–oxygen (Li–O₂) batteries and other emerging systems beyond the conventional LIB, have attracted worldwide interest for both transportation and grid energy storage applications in recent years. It is well known that the electrochemical performance of these energy storage systems depends not only on the composition of the materials, but also on the structure of the electrode materials used in the batteries. Although the desired performance characteristics of batteries often place conflicting requirements on the micro/nano-structure of electrodes, hierarchically designed electrodes can be tailored to satisfy these conflicting requirements. This work reviews hierarchically structured materials that have been successfully used in LIB and Li–O₂ batteries. Our goal is to elucidate (1) how to realize the full potential of energy materials through the manipulation of morphologies, and (2) how the hierarchical structure benefits charge transport, promotes the interfacial properties, and prolongs the electrode stability and battery lifetime.

The performance of multifunctional porous ceramics is often hindered by the seemingly contradictory effects of porosity on both mechanical and non-structural properties and yet a sufficient body of knowledge linking microstructure to these properties does not exist. Using a combination of tailored anisotropic and hierarchical materials, these disparate effects may be reconciled. In this project, a systematic investigation of the processing, characterization and properties of anisotropic and isotropic hierarchically porous ceramics was conducted. The system chosen was a composite ceramic intended as the cathode for a solid oxide fuel cell (SOFC). Comprehensive processing investigations led to the development of approaches to make hierarchical, anisotropic porous microstructures using directional freeze-casting of well dispersed slurries. The effect of all the important processing parameters was investigated. This resulted in an ability to tailor and control the important microstructural features including the scale of the microstructure, the macropore size and total porosity. Comparable isotropic porous ceramics were also processed using fugitive pore formers. A suite of characterization techniques including x-ray tomography and 3-D sectional scanning electron micrographs (FIB-SEM) was used to characterize and quantify the green and partially sintered microstructures. The effect of sintering temperature on the microstructure was quantified and discrete element simulations (DEM) were used to explain the experimental observations. Finally, the comprehensive mechanical properties, at room temperature, were investigated, experimentally and using DEM, for the different microstructures.

In many types of disordered systems which exhibit frustration and competition, an ultrametric topology is found to exist in the space of allowable states. This ultrametric topology of states is associated with a hierarchical relaxation process called ultradiffusion. Ultradiffusion occurs in hierarchical non-linear (HNL) dynamical systems when constraints cause large-scale, slow modes of motion to be subordinated to small-scale, fast modes. Examples of ultradiffusion are found throughout condensed matter physics and critical phenomena (e.g., the states of spin glasses), in biophysics (e.g., the states of Hopfield networks), and in many other fields, including layered computing based upon nonlinear dynamics. The statistical dynamics of ultradiffusion can be treated as a random walk on an ultrametric space. For reversible bifurcating ultrametric spaces, the evolution equation governing the probability of a particle being found at site i at time t has a highly degenerate transition matrix. This transition matrix has a fractal geometry similar to the replica form proposed for spin glasses. The authors invert this fractal matrix using a recursive quad-tree (QT) method. Possible applications of hierarchical systems to communications and symbolic computing are discussed briefly.
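The random-walk picture above can be made concrete with a small numerical sketch. This is my own illustration, not the authors' quad-tree inversion: on a binary tree with 2³ leaves, hop rates between sites depend only on the ultrametric distance (the height of the lowest common ancestor of the two leaves), and the resulting hierarchical transition matrix relaxes toward the uniform distribution.

```python
import numpy as np

def ultrametric_distance(i, j, levels):
    """Height of the lowest common ancestor of leaves i and j in a binary tree."""
    for level in range(levels, 0, -1):
        if (i >> (level - 1)) != (j >> (level - 1)):
            return level
    return 0

def transition_matrix(levels, eps=0.3):
    """Hop probability eps**d for ultrametric distance d; rows sum to 1."""
    n = 2 ** levels
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                T[i, j] = eps ** ultrametric_distance(i, j, levels)
        T[i, i] = 1.0 - T[i].sum()      # remaining probability: stay put
    return T

T = transition_matrix(3)
p = np.zeros(8)
p[0] = 1.0                              # walker starts at site 0
for _ in range(50):                     # discrete-time evolution
    p = p @ T
```

Because hops across higher branches of the tree are exponentially suppressed, the walker equilibrates within small subtrees first, which is the hallmark of the hierarchical relaxation described above.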

This study sought to understand the social representations of biosecurity held by primary care nursing professionals and to analyze how these representations articulate with quality of care. It was an exploratory, qualitative study based on social representation theory. The participants were 36 nursing workers from primary health care in a state capital in the Northeast region of Brazil. The data were analyzed by descending hierarchical classification. Five classes were obtained: occupational accidents suffered by professionals; occupational exposure to biological agents; biosecurity management in primary health care; the importance of personal protective equipment; and infection control and biosecurity. The different positions taken by the professionals seem to be based on a field of social representations related to the concept of biosecurity, namely exposure to accidents and the risks to which they are exposed. However, occupational accidents are reported as inherent to the practice.

Full Text Available Advances in science and technology have influenced designing activity in architecture throughout its history. Observing the fundamental changes to architectural designing due to the substantial influences of the advent of the computing era, we now witness our design environment gradually changing from conventional pencil and paper to digital multi-media. Although designing is considered to be a unique human activity, there has always been a great dependency on design aid tools. One of the greatest aids to architectural design, amongst the many conventional and widely accepted computational tools, is the computer-aided object modeling and rendering tool, commonly known as a CAD package. But even though conventional modeling tools have provided designers with fast and precise object handling capabilities that were not available in the pencil-and-paper age, they normally show weaknesses and limitations in covering the whole design process. In any kind of design activity, the design being worked on has to be represented in some way. For a human designer, designs are represented using, for example, models, drawings, or verbal descriptions. If a computer is used for design work, designs are usually represented by groups of pixels (paintbrush programs), lines and shapes (general-purpose CAD programs), or higher-level objects like 'walls' and 'rooms' (purpose-specific CAD programs). A human designer usually has a large number of representations available, and can use the representation most suitable for what he or she is working on. Humans can also introduce new representations and thereby represent objects that are not part of the world they experience with their sensory organs, for example vector representations of four- and five-dimensional objects. In design computing, on the other hand, the representation or representations used have to be explicitly defined. Many different representations have been suggested, often optimized for specific design domains.

This article deals with the equivalence of representations of behaviors of linear differential systems. In general, the behavior of a given linear differential system has many different representations. In this paper we restrict ourselves to kernel representations and image representations. Two kernel...
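For readers unfamiliar with the behavioral framework, the two representation classes can be written compactly. This is a standard textbook sketch in Willems-style notation, not notation taken from the article itself:

```latex
% Kernel representation: the behavior is the kernel of a polynomial
% differential operator, with R a real polynomial matrix.
\mathcal{B} = \left\{ w \,\middle|\, R\!\left(\tfrac{\mathrm{d}}{\mathrm{d}t}\right) w = 0 \right\}

% Image representation: the behavior is the image of an operator
% M(d/dt) acting on a free latent variable \ell.
\mathcal{B} = \left\{ w \,\middle|\, \exists\, \ell \ \text{such that}\ w = M\!\left(\tfrac{\mathrm{d}}{\mathrm{d}t}\right) \ell \right\}
```

A standard equivalence result in this setting is that two minimal kernel representations R_1 and R_2 define the same behavior exactly when R_2 = U R_1 for some unimodular polynomial matrix U.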

Full Text Available Semiotics is widely applied in theories of information. Following the original triadic characterization of reality by Peirce, the linguistic processes involved in information (production, transmission, reception, and understanding) would all appear to be interpretable in terms of signs and their relations to their objects. Perhaps the most important of these relations is that of representation: one entity standing for or representing some other. For example, an index, one of the three major kinds of signs, is said to represent something by being directly related to its object. My position, however, is that the concept of symbolic representations having such roles in information, as intermediaries, is fraught with the same difficulties as in representational theories of mind. I have proposed an extension of logic to complex real phenomena, including mind and information (Logic in Reality; LIR), most recently at the 4th International Conference on the Foundations of Information Science (Beijing, August 2010). LIR provides explanations for the evolution of complex processes, including information, that do not require any entities other than the processes themselves. In this paper, I discuss the limitations of the standard relation of representation. I argue that more realistic pictures of informational systems can be provided by reference to information as an energetic process, following the categorial ontology of LIR. This approach enables naïve, anti-realist conceptions of anti-representationalism to be avoided, and enables an approach to both information and meaning in the same novel logical framework.

A major goal of metal-organic framework (MOF) research is the expansion of pore size and volume. Although many approaches have been attempted to increase the pore size of MOF materials, it is still a challenge to construct MOFs with precisely customized pore apertures for specific applications. Herein, we present a new method, namely linker labilization, to increase MOF porosity and pore size, giving rise to hierarchical-pore architectures. Microporous MOFs with robust metal nodes and pro-labile linkers were initially synthesized. The mesopores were subsequently created as crystal defects through the splitting of a pro-labile linker and the removal of the linker fragments by acid treatment. We demonstrate that the linker labilization method can create controllable hierarchical porous structures in stable MOFs, which facilitates the diffusion and adsorption of guest molecules and improves the performance of MOFs in adsorption and catalysis.

We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.

This study examines women's social representations of female orgasm. Fifty semi-structured interviews were conducted with British women. The data were thematically analysed and compared with the content of female orgasm-related writing in two women's magazines over a 30-year period. The results indicate that orgasm is deemed the goal of sex with emphasis on its physiological dimension. However, the women and the magazines graft onto this scientifically driven representation the importance of relational and emotive aspects of orgasm. For the women, particularly those who experience themselves as having problems with orgasm, the scientifically driven representations induce feelings of failure, but are also resisted. The findings highlight the role played by the social context in women's subjective experience of their sexual health.

This book is an introduction to the representation theory of quivers and finite dimensional algebras. It gives a thorough and modern treatment of the algebraic approach based on Auslander-Reiten theory as well as the approach based on geometric invariant theory. The material in the opening chapters is developed starting slowly with topics such as homological algebra, Morita equivalence, and Gabriel's theorem. Next, the book presents Auslander-Reiten theory, including almost split sequences and the Auslander-Reiten transform, and gives a proof of Kac's generalization of Gabriel's theorem. Once this basic material is established, the book goes on to develop the geometric invariant theory of quiver representations. The book features an exposition of the saturation theorem for semi-invariants of quiver representations and its application to Littlewood-Richardson coefficients. In the final chapters, the book presents tilting modules, exceptional sequences, and a connection to cluster categories. The book is su...

This is a brief report on the preon models investigated by In-Gyu Koh, A. N. Schellekens, and myself, based on complex, anomaly-free, and asymptotically free representations of SU(3) to SU(8), SO(4N+2), and E₆ with no more than two different preons. The complete list of representations that are complex, anomaly-free, and asymptotically free has been given by E. Eichten, I.-G. Koh, and myself. The assumptions made about the ground-state composites and the role of Fermi statistics in determining the metaflavor wave functions are discussed in some detail. We explain the method of decomposition of tensor products with definite permutation properties, which was developed for this purpose by I.-G. Koh, A. N. Schellekens, and myself. An example based on an anomaly-free representation of the confining metacolor group SU(5) is discussed.

In a multistage experiment, twelve 4- and 9-year-old children participated in a triad rating task. Their ratings were mapped with multidimensional scaling, from which Euclidean distances were computed to operationalize the semantic distance between items in target pairs. These children and age-mates then participated in an experiment that employed these target pairs in a story, which was followed by a misinformation manipulation. Analyses linked individual and developmental differences in suggestibility to children's representations of the target items. Semantic proximity was a strong predictor of differences in suggestibility: the closer a suggested distractor was to the original item's representation, the greater was the distractor's suggestive influence. The triad participants' semantic proximity subsequently served as the basis for correctly predicting memory performance in the larger group. Semantic proximity enabled a priori counterintuitive predictions of reverse age-related trends to be confirmed whenever the distance between representations of items in a target pair was greater for younger than for older children.

Full Text Available Digital instruments and technologies enrich opportunities for architectural representation and communication. Computer graphics is organized according to the two phases of visualization and construction, that is, modeling and rendering, a dichotomy that structures software technologies. Visualization modalities give different kinds of representations of the same 3D model, and instruments produce a separation between drawing and image creation. Reverse modeling can be related to a synthesis process, while 'direct modeling' follows an analytic procedure. The difference between interactive and non-interactive applications is connected to the possibilities offered by informatic instruments, and relates to both modeling and rendering. At the same time, the word 'model' describes different phenomena (i.e., files): the mathematical model of the building and of the scene, the raster representation, and the post-processing model. All these correlated models constitute the architectural interpretative model, that is, a simulation of reality made by the model to improve knowledge.

If one has a unitary representation ρ: π → U(H) of the fundamental group π₁(M) of the manifold M, then one can do many useful things: 1. construct a natural vector bundle over M; 2. construct the cohomology groups with respect to the local system of coefficients; 3. construct the signature of the manifold M with respect to the local system of coefficients; and others. In particular, one can write the Hirzebruch formula, which compares the signature with the characteristic classes of the manifold M, and further, based on this, find the homotopy-invariant characteristic classes (i.e., the Novikov conjecture). Taking into account that the family of known representations is not sufficiently large, it would be interesting to extend this family to a larger one. Using the ideas of A. Connes, M. Gromov and H. Moscovici, a proper notion of asymptotic representation is defined.

Full Text Available Sinhababu's Humean Nature contains many interesting and important ideas, but in this short commentary I focus on the idea of vivid representations. Sinhababu inherits the idea of vivid representations from Hume's discussions, in particular the discussion of calm and violent passions. I am sympathetic to the idea of developing Hume's insight, which has been largely neglected by philosophers. I believe that Sinhababu and Hume are on the right track. What I do in this short commentary is raise some questions about the details. The aim of asking these questions is not to challenge Sinhababu's proposal (at least not his main ideas), but rather to point at some interesting issues arising out of it. The questions are about (1) the nature of vividness, (2) the effects of vivid representations, and (3) Sinhababu's account of alief cases.

An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.
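As a toy illustration of the final combination step, three per-pixel evidence maps can be fused into a single coherent output. The log-linear pooling rule and the uniform weights here are my own assumptions for the sketch, not the method described above:

```python
import numpy as np

def combine(maps, weights=None):
    """Fuse per-pixel object-evidence maps from independent detectors
    (texture/colour, shape/contour, motion) into one map via weighted
    log-linear pooling (a geometric mean when weights are uniform)."""
    maps = np.stack(maps)
    if weights is None:
        weights = np.ones(len(maps)) / len(maps)
    # Weighted sum of log-evidences; clip to avoid log(0).
    logp = np.tensordot(weights, np.log(np.clip(maps, 1e-9, 1.0)), axes=1)
    fused = np.exp(logp)
    return fused / fused.max()            # normalise to [0, 1]

# Hypothetical 2x2 evidence maps from the three detectors.
texture = np.array([[0.9, 0.2], [0.1, 0.8]])
shape   = np.array([[0.8, 0.3], [0.2, 0.7]])
motion  = np.array([[0.7, 0.1], [0.3, 0.9]])
fused = combine([texture, shape, motion])
```

Log-linear pooling keeps a pixel's fused score high only where all three detectors agree, which mirrors the intent of combining independent representations.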

Naive Bayes classifiers tend to perform very well on a large number of problem domains, although their representational power is quite limited compared to more sophisticated machine learning algorithms. In this paper we study combining multiple naive Bayes classifiers by using the hierarchical...
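A minimal, flat sketch of the combination idea (the truncated record does not describe the hierarchical scheme itself, so this is a stand-in with a hand-rolled Gaussian naive Bayes and made-up data): two classifiers trained on disjoint feature subsets are combined by summing their class log-probabilities.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: features independent given the class."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu  = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.logprior = np.log(np.array([np.mean(y == c) for c in self.classes]))
        return self

    def log_proba(self, X):
        # Per-class Gaussian log-likelihood summed over features, plus prior.
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return ll + self.logprior

# Synthetic two-class data; each classifier sees only half the features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
nb1 = GaussianNB().fit(X[:, :2], y)
nb2 = GaussianNB().fit(X[:, 2:], y)

# Combine by summing log-probabilities (equivalent to a product of
# the two classifiers' posteriors up to normalisation).
combined = nb1.log_proba(X[:, :2]) + nb2.log_proba(X[:, 2:])
pred = combined.argmax(axis=1)
accuracy = (pred == y).mean()
```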

This paper examines forms of self-representation on YouTube, with a specific focus on vlogs (video blogs). The analytical scope of the paper is how user-generated content on YouTube initiates a certain kind of audiovisual representation and a particular interpretation of reality that can be distinguished within vlogs. This is analysed through selected case studies taken from a representative sample of empirically based observations of YouTube videos. The analysis includes a focus on how ...

Recording knowledge in a common framework that would make it possible to seamlessly share global knowledge remains an important challenge for researchers. This brief examines several ideas about the representation of knowledge that address this challenge. It follows the widespread agreement that uniform knowledge representation should be achievable by using ontologies populated with concepts. A separate chapter is dedicated to each of the three introduced topics, following a uniform outline: definition, organization, and use. This brief is intended for those who want to get to know...

How do material representations such as models, diagrams and drawings come to shape and aid collective, epistemic processes? This study investigated how groups of participants spontaneously recruited material objects (in this case LEGO blocks) to support collective creative processes in the context of an experiment. Qualitative micro-analyses of the group interactions motivate a taxonomy of the different roles that the material representations play in the joint epistemic processes: illustration, elaboration and exploration. Firstly, the LEGO blocks were used to illustrate already well-formed ideas in support ... top-down and bottom-up cognitive processes and division of cognitive labor.

In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application in which we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computational bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the...
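The low-rank completion idea in the second method can be sketched as follows. This is an illustrative toy, assuming a smooth value function over a state grid and time steps; the thesis' actual model is richer: observe roughly 30% of the entries of a rank-2 value matrix and recover the rest by alternating least squares.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical value function V[state, time]; its smooth structure makes
# it exactly rank 2 in this toy construction.
s = np.linspace(0, 1, 60)[:, None]
t = np.linspace(0, 1, 40)[None, :]
V = s @ t + 0.5 * (s ** 2) @ (1 - t)

mask = rng.random(V.shape) < 0.3          # "compute" only ~30% of entries
k = 2                                     # assumed rank

def als_update(M, obs_mask, fixed, k):
    """Least-squares update of one factor, using observed entries only."""
    lam = 1e-8 * np.eye(k)                # tiny ridge for stability
    free = np.zeros((M.shape[0], k))
    for i in range(M.shape[0]):
        obs = obs_mask[i]
        F = fixed[obs]
        free[i] = np.linalg.solve(F.T @ F + lam, F.T @ M[i, obs])
    return free

B = rng.standard_normal((V.shape[1], k))  # random initial factor
for _ in range(30):                       # alternate between the two factors
    A = als_update(V, mask, B, k)
    B = als_update(V.T, mask.T, A, k)

rel_err = np.linalg.norm(A @ B.T - V) / np.linalg.norm(V)
```

The point of the sketch is the cost structure: the exact value function is evaluated only on the masked subset, yet the factorization reconstructs the full matrix because its rank is small.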

We consider the problem of distributing the proceeds generated from a joint venture in which the participating agents are hierarchically organized. We introduce and characterize a family of allocation rules in which revenue 'bubbles up' in the hierarchy. The family is flexible enough to accommodate the no-transfer rule (where no revenue bubbles up) and the full-transfer rule (where all the revenues bubble up to the top of the hierarchy). Intermediate rules within the family are reminiscent of popular incentive mechanisms for social mobilization or multi-level marketing.

There are growing needs for quick preview of video contents, both to improve the accessibility of video archives and to reduce network traffic. In this paper, a storyboard that contains a user-specified number of keyframes is produced from a given video sequence. It is based on hierarchical cluster analysis of feature vectors derived from the wavelet coefficients of video frames. Consistent reuse of the extracted feature vectors is the key to avoiding repeated, computationally intensive parsing of the same video sequence. Experimental results suggest that a significant reduction in computational time is gained by this strategy.
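The keyframe-selection step can be sketched as follows. This is a simplified stand-in: synthetic feature vectors replace real wavelet coefficients, and a naive centroid-linkage agglomerative clustering replaces whatever linkage the paper uses.

```python
import numpy as np

def select_keyframes(features, k):
    """Agglomerative clustering of frame feature vectors down to k clusters;
    one keyframe per cluster, chosen closest to the cluster centroid."""
    clusters = [[i] for i in range(len(features))]
    while len(clusters) > k:
        best, pair = np.inf, None
        for a in range(len(clusters)):          # find the closest pair
            for b in range(a + 1, len(clusters)):
                ca = features[clusters[a]].mean(axis=0)
                cb = features[clusters[b]].mean(axis=0)
                d = np.linalg.norm(ca - cb)
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)          # merge the pair
    keyframes = []
    for idx in clusters:
        centroid = features[idx].mean(axis=0)
        keyframes.append(idx[int(np.argmin(
            np.linalg.norm(features[idx] - centroid, axis=1)))])
    return sorted(keyframes)

# Toy "video": three visually distinct segments of synthetic feature vectors.
rng = np.random.default_rng(3)
feats = np.vstack([rng.normal(c, 0.1, (10, 8)) for c in (0.0, 1.0, 2.0)])
board = select_keyframes(feats, 3)              # one keyframe per segment
```

Because the clustering operates on the cached feature vectors rather than on the frames themselves, changing the requested storyboard size never requires re-parsing the video, which is the efficiency argument made above.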

… networks are described and a mathematical model is proposed for a two-level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing, which as a sub-algorithm uses a construction algorithm to determine edges and route the demand. Performance for different versions of the algorithm is reported in terms of runtime and quality of the solutions. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes.
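A minimal simulated-annealing loop for edge selection looks like the sketch below: flip one candidate edge per move, penalize disconnected designs, and accept uphill moves with a temperature-dependent probability. The instance size, costs, penalty, and cooling schedule are all invented; the paper's construction sub-algorithm is replaced by a plain objective evaluation.

```python
import itertools, math, random

random.seed(2)

n = 6
edges = list(itertools.combinations(range(n), 2))
cost = {e: random.uniform(1.0, 10.0) for e in edges}

def connected(chosen):
    """BFS connectivity check over the chosen edge set."""
    adj = {v: [] for v in range(n)}
    for (a, b) in chosen:
        adj[a].append(b); adj[b].append(a)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return len(seen) == n

def objective(state):
    chosen = [e for e in edges if state[e]]
    total = sum(cost[e] for e in chosen)
    return total if connected(chosen) else total + 1000.0  # infeasibility penalty

state = {e: True for e in edges}            # start from the full mesh
cur = objective(state)
T = 10.0
for step in range(4000):
    e = random.choice(edges)
    state[e] = not state[e]                 # flip one edge in or out
    new = objective(state)
    if new < cur or random.random() < math.exp((cur - new) / T):
        cur = new                           # accept the move
    else:
        state[e] = not state[e]             # undo the move
    T *= 0.999                              # geometric cooling

final_chosen = [e for e in edges if state[e]]
```

The annealer reliably prunes the full mesh down to a cheap connected design on this toy instance; real hierarchical network design additionally routes demand inside each candidate topology, as the abstract describes.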

Support vector clustering (SVC) has proven to be an efficient algorithm for clustering noisy and high-dimensional data sets, with applications in many fields of research. An inherent problem, however, has been setting the parameters of the SVC algorithm. Using the recent emergence of a method for calculating the entire regularization path of the support vector domain description, we propose a fast method for robust pseudo-hierarchical support vector clustering (HSVC). The method is demonstrated to work well on generated data, as well as for detecting ischemic segments from multidimensional myocardial…

… of Technology, Singapore. The coordination control among multiple dc sources and energy storages is implemented using a novel hierarchical control technique. The bus voltage essentially acts as an indicator of supply-demand balance. A wireless control is implemented for the reliable operation of the grid. … A reasonable compromise between maximum power harvest and effective battery management is further enhanced using coordination control based on a central energy management system. The feasibility and effectiveness of the proposed control strategies have been tested on a dc microgrid in WERL.

… of a particular first-order reliability method (FORM) was first described in a celebrated paper by Rackwitz and Fiessler more than a quarter of a century ago. The method has become known as the Rackwitz-Fiessler algorithm. The original RF-algorithm as applied to a hierarchical random variable model is recapitulated so that a simple but quite effective accuracy-improving calculation can be explained. A limit state curvature correction factor on the probability approximation is obtained from the final step results of the RF-algorithm. This correction factor is based on Breitung’s asymptotic formula for second…

Additive manufacturing has become a tool of choice for the development of customizable components. Developments in this technology have led to a powerful array of printers that serve a variety of needs. However, resin development plays a crucial role in leading the technology forward. This paper addresses the development and application of printing hierarchical porous structures, beginning with the development of a porous scaffold, which can be functionalized with a variety of materials, and concluding with customized resins for metal, ceramic, and carbon structures.

Preliminary results indicate that flow in the saturated zone at Yucca Mountain is controlled by fractures. A current conceptual model assumes that the flow in the fracture system can be approximated by a three-dimensionally interconnected network of linear conduits. The overall flow system of rocks at Yucca Mountain is considered to consist of hierarchically structured heterogeneous fracture systems of multiple scales. A case study suggests that it is more appropriate to use the flow parameters of the large fracture system for predicting the first arrival time, rather than using the bulk average parameters of the total system.

The model presented in this paper is based on the model developed by Billionnet for the hierarchical workforce problem. In Billionnet's model, weekly working hours are not taken into consideration when determining the workers' weekly costs. In our model, the weekly cost per worker is reduced in proportion to the hours worked per week. Our model is illustrated on Billionnet's example. The two models are compared and evaluated on the basis of the results obtained from the example problem. The proposed model achieves a reduction in the total cost.

A fast and efficient technique for hierarchical clustering of samples in a dataset includes compressing the dataset to reduce the number of variables within each of the samples. A nearest neighbor matrix is generated to identify nearest neighbor pairs between the samples based on differences between their variables. The samples are arranged into a hierarchy that groups the samples based on the nearest neighbor matrix. The hierarchy is rendered to a display to graphically illustrate similarities or differences between the samples.
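The three stages named above (compress, build a nearest-neighbour matrix, arrange into a hierarchy) can be sketched directly; the data, the choice of PCA for compression, and the linkage method are illustrative assumptions, not the patented technique's exact steps.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)

# 20 samples x 50 variables, with two underlying groups.
X = np.vstack([rng.normal(0, 1, (10, 50)), rng.normal(4, 1, (10, 50))])

# Step 1: compress the samples by projecting onto top principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                      # 50 variables -> 3 components

# Step 2: nearest-neighbour matrix from pairwise distances.
D = squareform(pdist(Z))
np.fill_diagonal(D, np.inf)
nearest = D.argmin(axis=1)             # index of each sample's nearest neighbour

# Step 3: arrange the samples into a hierarchy (dendrogram-ready linkage).
tree = linkage(Z, method="average")
```

Compressing first means the pairwise distances in step 2 are computed over 3 values per sample instead of 50, which is where the "fast and efficient" part comes from.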

The synthesis and energy storage application of hierarchical porous carbons with sizes ranging from nano- to micrometres has attracted considerable attention worldwide. Exploring eco-friendly and reliable syntheses of hierarchical porous carbons for supercapacitors with high energy density and high power remains an ongoing challenge. In this work, we report the design and synthesis of super-hierarchical porous carbons with highly developed porosity by a stepwise removal strategy for high-rate supercapacitors. Mixed biomass wastes of coconut shell and sewage sludge are employed as the raw material. The as-prepared super-hierarchical porous carbons present high surface areas (3003 m2 g-1), large pore volume (2.04 cm3 g-1), appropriate porosity, and outstanding electrochemical performance. The dependence of electrochemical performance on the structural, textural, and functional properties of carbons engineered by various synthesis strategies is investigated in detail. Moreover, the as-assembled symmetrical supercapacitor exhibits a high energy density of 25.4 Wh kg-1 at a power density of 225 W kg-1 and retains 20.7 Wh kg-1 even at a very high power of 9000 W kg-1. This work provides an environmentally benign strategy and new insights to efficiently regulate the porosity of hierarchical porous carbons derived from biomass wastes for energy storage applications.

The concept of self-representation is commonly decomposed into three component constructs (sense of embodiment, sense of agency, and sense of presence), and each is typically investigated separately across different experimental contexts. For example, embodiment has been explored in bodily illusions; agency has been investigated in hypnosis research; and presence has been primarily studied in the context of Virtual Reality (VR) technology. Given that each component involves the integration of multiple cues within and across sensory modalities, they may rely on similar underlying mechanisms. However, the degree to which this may be true remains unclear when they are independently studied. As a first step towards addressing this issue, we manipulated a range of cues relevant to these components of self-representation within a single experimental context. Using consumer-grade Oculus Rift VR technology, and a new implementation of the Virtual Hand Illusion, we systematically manipulated visual form plausibility, visual–tactile synchrony, and visual–proprioceptive spatial offset to explore their influence on self-representation. Our results show that these cues differentially influence embodiment, agency, and presence. We provide evidence that each type of cue can independently and non-hierarchically influence self-representation, yet none of these cues strictly constrains or gates the influence of the others. We discuss theoretical implications for understanding self-representation as well as practical implications for VR experiment design, including the suitability of consumer-based VR technology in research settings. PMID: 27826275

We present a method to construct induced representations of quantum algebras which have a bicrossproduct structure. We apply this procedure to some quantum kinematical algebras in (1+1) dimensions with this kind of structure: the null-plane quantum Poincaré algebra, the non-standard quantum Galilei algebra, and the quantum κ-Galilei algebra.

In this paper I introduce (1) a technically simple and highly theory-independent way of lexically representing flexible idiomatic expressions, and (2) a procedure for incorporating these lexical representations into a wide variety of NLP systems. The method is based on Structural EQuivalence Classes.

We discuss implications of the following statement about the representation theory of symmetric groups: every integer appears infinitely often as an irreducible character evaluation, and every nonnegative integer appears infinitely often as a Littlewood-Richardson coefficient and as a Kronecker coefficient.

One of the main areas in knowledge representation and logic-based artificial intelligence concerns logical formalisms that can be used for representing and reasoning with concepts. For almost 30 years, since research in this area began, the issue of intensionality has had a special status...

Keeping in mind the important role of octonion algebra, we have obtained the electromagnetic field equations of dyons with an octonionic 8×8 matrix representation. In this paper, we consider the eight-dimensional octonionic space as a combination of two (external and internal) four-dimensional spaces for the existence of magnetic monopoles (dyons) in a higher-dimensional formalism. As such, we describe the octonion wave equations in terms of eight components from the 8×8 matrix representation. The octonion forms of the generalized potential, fields, and current source of dyons in terms of 8×8 matrices are discussed in a consistent manner. Thus, we have obtained the generalized Dirac-Maxwell equations of dyons from an 8×8 matrix representation of the octonion wave equations in a compact and consistent manner. The generalized Dirac-Maxwell equations are fully symmetric Maxwell equations and allow for the possibility of magnetic charges and currents, analogous to electric charges and currents. Accordingly, we have obtained the octonionic Dirac wave equations in an external field from the matrix representation of the octonion-valued potentials of dyons.

This book presents new algorithms for reinforcement learning, a form of machine learning in which an autonomous agent seeks a control policy for a sequential decision task. Since current methods typically rely on manually designed solution representations, agents that automatically adapt their own

The processing and representation of motion information is addressed from an integrated perspective comprising low-level signal processing properties as well as higher-level cognitive aspects. For the low-level processing of motion information we argue that a fundamental requirement is the existence of a spatio-temporal memory. Its key feature, the provision of an orthogonal relation between external time and its internal representation, is achieved by a mapping of temporal structure into a locally distributed activity distribution accessible in parallel by higher-level processing stages. This leads to a reinterpretation of the classical concept of 'iconic memory' and resolves inconsistencies concerning ultra-short-time processing and visual masking. The spatio-temporal memory is further investigated by experiments on the perception of spatio-temporal patterns. Results on the direction discrimination of motion paths provide evidence that information about direction and location is not processed and represented independently. This suggests a unified representation at an early level, in the sense that motion information is internally available in the form of a spatio-temporal compound. For the higher-level representation we have developed a formal framework for the qualitative description of courses of motion that may occur with moving objects.

Humans have a tendency to perceive motion even in static images that simply "imply" movement. This tendency is so strong that our memory for actions depicted in static images is distorted in the direction of the implied motion, a phenomenon known as representational momentum (RM). In the present study, we created an RM display depicting a pattern of…

Research on the representation of generic knowledge suggests that inherent properties can have either a principled or a causal connection to a kind. The type of connection determines whether the outcome of the storytelling process will include intuitions of inevitability and a normative dimension and whether it will ground causal explanations.

Many of the world's most populous democracies are political unions composed of states or provinces that are unequally represented in the national legislature. Scattered empirical studies, most of them focusing on the United States, have discovered that overrepresented states appear to receive larger shares of the national budget. Although this relationship is typically attributed to bargaining advantages associated with greater legislative representation, an important threat to empirical identification stems from the fact that the representation scheme was chosen by the provinces. Thus, it is possible that representation and fiscal transfers are both determined by other characteristics of the provinces in a specific country. To obtain an improved estimate of the relationship between representation and redistribution, we collect and analyze provincial-level data from nine federations over several decades, taking advantage of the historical process through which federations formed and expanded. Controlling for a variety of country- and province-level factors and using a variety of estimation techniques, we show that overrepresented provinces in political unions around the world are rather dramatically favored in the distribution of resources.

In its ground state representation, the infinite, spin-1/2 Heisenberg chain provides a model for spin wave scattering which entails many features of the quantum mechanical N-body problem. Here, we give a complete eigenfunction expansion for the Hamiltonian of the chain in this representation, for all numbers of spin waves. Our results resolve the questions of completeness and orthogonality of the eigenfunctions given by Bethe for finite chains, in the infinite volume limit.

At present, the PASI scoring system is used to evaluate erythema severity, which can help doctors diagnose psoriasis [1-3]. The system relies on the subjective judgment of doctors, so its accuracy and stability cannot be guaranteed [4]. This paper proposes a stable and precise algorithm for erythema severity estimation. Our contributions are twofold. On one hand, in order to extract the multi-scale redness of erythema, we design a hierarchical feature. Different from traditional methods, we not only utilize color statistical features, but also divide the detection window into small windows and extract hierarchical features from them. Further, a feature re-ranking step is introduced, which guarantees that the extracted features are mutually irrelevant. On the other hand, an adaptive boosting classifier is applied for further feature selection. During training, the classifier seeks out the most valuable features for evaluating erythema severity, owing to its strong learning ability. Experimental results demonstrate the high precision and robustness of our algorithm. The accuracy is 80.1% on a dataset comprising 116 patients' images with various kinds of erythema. Our system has now been deployed for erythema medical efficacy evaluation at Union Hospital, China.
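The boosting-as-feature-selection step can be illustrated with a minimal AdaBoost over decision stumps: each round re-weights the samples and picks the single most discriminative feature/threshold, so the chosen features form an implicit ranking. The synthetic data (only one informative "redness" feature) and all parameters are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for hierarchical colour features: 200 image windows x 6
# features, where only feature 2 ("sub-window redness") carries signal.
n = 200
X = rng.normal(size=(n, 6))
y = np.where(X[:, 2] > 0.0, 1, -1)

def fit_stump(X, y, w):
    """Best single-feature threshold classifier under sample weights w."""
    best = (None, None, None, np.inf)      # feature, threshold, sign, error
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.05, 0.95, 19)):
            for s in (1, -1):
                pred = np.where(X[:, j] > t, s, -s)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, s, err)
    return best

# AdaBoost rounds: re-weight samples, pick the most valuable feature.
w = np.full(n, 1.0 / n)
ensemble = []
for _ in range(5):
    j, t, s, err = fit_stump(X, y, w)
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    pred = np.where(X[:, j] > t, s, -s)
    w *= np.exp(-alpha * y * pred)
    w /= w.sum()
    ensemble.append((j, t, s, alpha))

score = sum(a * np.where(X[:, j] > t, s, -s) for j, t, s, a in ensemble)
accuracy = float(np.mean(np.sign(score) == y))
selected = {j for j, *_ in ensemble}
```

Because the informative feature dominates every weighted round here, it is the one the booster "selects"; in the paper the same mechanism ranks the hierarchical colour features by their value for severity estimation.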

This paper explores the use of hierarchical structure for the diagnosis of vocal fold disorders. The hierarchical structure is initially used to train different second-level classifiers. At the first level, normal and pathological signals are distinguished. Next, pathological signals are classified into neurogenic and organic vocal fold disorders. At the final level, vocal fold nodules are distinguished from polyps in the organic disorders category. For feature selection at each level of the hierarchy, the reconstructed signal at each wavelet packet decomposition sub-band in 5 levels of decomposition with mother wavelet db10 is used to extract the nonlinear features of self-similarity and approximate entropy. Also, wavelet packet coefficients are used to measure energy and Shannon entropy features at different spectral sub-bands. The Davies-Bouldin criterion is employed to find the most discriminative features. Finally, support vector machines are adopted as classifiers at each level of the hierarchy, resulting in a diagnostic accuracy of 92%.
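The three-level decision structure (normal vs. pathological, then neurogenic vs. organic, then nodule vs. polyp) amounts to walking a small tree of binary classifiers. The sketch below shows that control flow only; the per-node classifiers are stand-in threshold rules on feature names borrowed from the abstract, not the paper's trained SVMs.

```python
# A minimal hierarchical diagnosis cascade. Thresholds and feature
# values are illustrative assumptions.

def diagnose(features, classifiers):
    """Walk the hierarchy until a leaf label is reached."""
    tree = {
        "root":         ("normal", "pathological"),
        "pathological": ("neurogenic", "organic"),
        "organic":      ("nodule", "polyp"),
    }
    node = "root"
    while node in tree:
        left, right = tree[node]
        node = left if classifiers[node](features) else right
    return node

# Stand-in per-node classifiers keyed on single features.
classifiers = {
    "root":         lambda f: f["approx_entropy"] < 0.5,
    "pathological": lambda f: f["self_similarity"] < 0.3,
    "organic":      lambda f: f["subband_energy"] < 1.0,
}

label = diagnose({"approx_entropy": 0.9, "self_similarity": 0.6,
                  "subband_energy": 0.4}, classifiers)
```

One advantage of this layout, as in the paper, is that each node can use its own feature subset: the selection criterion is applied per level rather than once globally.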

The idea of a hierarchically structured cosmos can be traced back to Presocratic Greece. In the fifth century BC, Anaxagoras of Clazomenae developed the idea of a fractal-like material world by introducing the concept of seeds (spermata), or homoeomeries as Aristotle dubbed them later (Grujić 2001). Anaxagoras' ideas were grossly neglected during the Middle Ages, to be invoked by a number of post-Renaissance thinkers, like Leibniz and Kant, though neither of them referred to their Greek predecessor. The real resurrection of the hierarchical paradigm started at the beginning of the last century, with Fournier and Charlier (Grujić 2002). The second half of the 20th century witnessed an intensive development of theoretical models based on the (multi)fractal paradigm, as well as a considerable body of observational evidence in favour of the hierarchical cosmos (Saar 1988). We overview the state of the art of the cosmological fractal concept within the astrophysical (Sylos Labini et al 1998), methodological (Ribeiro 2001) and epistemological (Ribeiro and Videira 1998) contexts.

The Self-Defining Data System (SDS) is a system which allows the creation of self-defining hierarchical data structures in a form which allows the data to be moved between different machine architectures. Because the structures are self-defining they can be used for communication between independent modules in a distributed system. Unlike disk-based hierarchical data systems such as Starlink's HDS, SDS works entirely in memory and is very fast. Data structures are created and manipulated as internal dynamic structures in memory managed by SDS itself. A structure may then be exported into a caller supplied memory buffer in a defined external format. This structure can be written as a file or sent as a message to another machine. It remains static in structure until it is reimported into SDS. SDS is written in portable C and has been run on a number of different machine architectures. Structures are portable between machines with SDS looking after conversion of byte order, floating point format, and alignment. A Fortran callable version is also available for some machines.
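The core SDS idea, a self-describing external format with a fixed byte order so structures survive moves between architectures, can be illustrated with Python's `struct` module. The tag layout below (name length, name, type code, big-endian value) is invented for the demo and is not SDS's real external format.

```python
import struct

# A toy "self-defining" export: each item carries its own name and type
# code, written big-endian so any architecture can re-import it.

def export(items):
    buf = bytearray()
    for name, value in items.items():
        nb = name.encode()
        buf += struct.pack(">H", len(nb)) + nb
        if isinstance(value, int):
            buf += b"i" + struct.pack(">q", value)   # 64-bit integer
        else:
            buf += b"d" + struct.pack(">d", value)   # IEEE double
    return bytes(buf)

def reimport(buf):
    items, off = {}, 0
    while off < len(buf):
        (nlen,) = struct.unpack_from(">H", buf, off); off += 2
        name = buf[off:off + nlen].decode(); off += nlen
        code = buf[off:off + 1]; off += 1
        fmt = ">q" if code == b"i" else ">d"
        (items[name],) = struct.unpack_from(fmt, buf, off); off += 8
    return items

data = {"ra": 12.5, "dec": -30.25, "nexp": 4}
assert reimport(export(data)) == data
```

Because the type and name travel with each value, the receiving module needs no out-of-band schema, which is what makes such buffers usable as messages between independent modules in a distributed system.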

The spatial arrangement of urban hubs and centers, and how individuals interact with these centers, is a crucial problem with many applications ranging from urban planning to epidemiology. We make unprecedented use of the large-scale, real-time 'Oyster' card database of individual person movements in the London subway to reveal the structure and organization of the city. We show that patterns of intraurban movement are strongly heterogeneous in terms of volume, but not in terms of distance travelled, and that there is a polycentric structure composed of large flows organized around a limited number of activity centers. For smaller flows, the pattern of connections becomes richer and more complex and is not strictly hierarchical, since it mixes different levels consisting of different orders of magnitude. This new understanding can shed light on the impact of new urban projects on the evolution of the polycentric configuration of a city and the dense structure of its centers, and it provides an initial approach to modeling flows in an urban system.

Dilemmas in cooperation are one of the major concerns in game theory. In a public goods game, each individual cooperates by paying a cost or defecting without paying it, and receives a reward from the group out of the collected cost. Thus, defecting is beneficial for each individual, while cooperation is beneficial for the group. Now, groups (say, countries) consisting of individuals also play games. To study such a multi-level game, we introduce a hierarchical game in which multiple groups compete for limited resources by utilizing the collected cost in each group, where the power to appropriate resources increases with the population of the group. Analyzing this hierarchical game, we found a hierarchical prisoner’s dilemma, in which groups choose the defecting policy (say, armament) as a Nash strategy to optimize each group’s benefit, while cooperation optimizes the total benefit. On the other hand, for each individual, refusing to pay the cost (say, tax) is a Nash strategy, which turns out to be a cooperation policy for the group, thus leading to a hierarchical dilemma. Here the group reward increases with the group size. However, we find that there exists an optimal group size that maximizes the individual payoff. Furthermore, when the population asymmetry between two groups is large, the smaller group will choose a cooperation policy (say, disarmament) to avoid excessive response from the larger group, and the prisoner’s dilemma between the groups is resolved. Accordingly, the relevance of this hierarchical game on policy selection in society and the optimal size of human or animal groups are discussed.
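The group-level dilemma described above can be made concrete with a two-group resource contest: each group chooses cooperation or "armament" (defection), the resource is split in proportion to appropriation power, and armament costs something. The numbers below are invented to exhibit the dilemma, not taken from the paper's model.

```python
# Two groups contest a resource R; the "D" (armament) policy triples a
# group's appropriation power but costs ARM_COST. Illustrative numbers.

R = 100.0
ARM_COST = 10.0
POWER = {"C": 1.0, "D": 3.0}

def payoffs(p1, p2):
    """Share R in proportion to power, minus each group's armament cost."""
    w1, w2 = POWER[p1], POWER[p2]
    share1 = R * w1 / (w1 + w2)
    return (share1 - (ARM_COST if p1 == "D" else 0.0),
            R - share1 - (ARM_COST if p2 == "D" else 0.0))

# Defection dominates for each group...
assert payoffs("D", "C")[0] > payoffs("C", "C")[0]
assert payoffs("D", "D")[0] > payoffs("C", "D")[0]
# ...yet mutual cooperation maximizes the joint benefit: the dilemma.
assert sum(payoffs("C", "C")) > sum(payoffs("D", "D"))
```

With these parameters mutual armament (40, 40) is the Nash outcome even though mutual disarmament (50, 50) is jointly better, mirroring the hierarchical prisoner's dilemma at the group level; the paper nests a second dilemma inside each group over paying the cost at all.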

Many complex pathways are described as hierarchical structures in which a pathway is recursively partitioned into several sub-pathways and organized hierarchically as a tree. The hierarchical structure provides a natural way to visualize the global structure of a complex pathway. However, none of the previous research on pathway visualization exploits the hierarchical structures provided by many complex pathways. In this paper, we aim to develop algorithms that can take advantage of hierarchical structures and give layouts that reveal the global structure as well as the local structures of pathways. We present a new hierarchically organized layout algorithm to produce layouts for hierarchically organized pathways. Our algorithm first decomposes a complex pathway into sub-pathway groups along the hierarchical organization, and then partitions each sub-pathway group into basic components. It then applies conventional layout algorithms, such as hierarchical layout and force-directed layout, to compute the layout of each basic component. Finally, the component layouts are joined to form the final layout of the pathway. Our main contributions are the algorithms for decomposing pathways and joining layouts. Experiments show that our algorithm gives comprehensible visualizations for pathways with hierarchies, cycles, and complex structures. It clearly renders the global component structure as well as the local structure within each component. In addition, it runs very fast, and gives better visualizations for many examples from previous related research.

Most acquisition contracts within the oil and gas industry contain representations and warranties. The legal distinction between the two was explained as follows: a representation is a statement of fact made by the representor before making the contract, whereas a warranty is a statement of fact which forms part of the terms of the contract. The paper outlines the nature of a representation or warranty and explains why certain warranties are not given. The protection offered by representations and warranties in breach-of-contract cases is also explained, and suggestions are offered for increasing that protection.

Objective: To analyze the social representations of the ethical and bioethical aspects of research held by students of a Dentistry course. Methods: This is a qualitative study based on the Theory of Social Representations, carried out with 80 students of a Dentistry course. The data were collected through a semi-structured interview script, processed in IRaMuTeQ, and analyzed by Descending Hierarchical Classification. The study followed the ethical standards recommended by Resolution n. 466/2012, obtaining approval from the Ethics Committee of the UNINOVAFAPI University Center. Results: The corpus analyzed in the study is composed of 79 units of initial context (UCI), of which 62% were used. The results are presented in four classes, namely: 4. The understanding of ethics and bioethics in research; 3. The researcher's social position; 1. The legal responsibilities of the researcher; and 2. The normative aspects of research ethics - legal basis. Conclusion: The students represent the ethical and bioethical aspects of research as essential to respecting human dignity and protecting the lives of research participants, with a focus on the normative aspects of research ethics through research ethics committees. Their attitudes are guided by their conditions of life and by the beliefs and cultures of different social contexts. Keywords: Bioethics, ethics, social psychology.

Historically, operator theory and representation theory both originated with the advent of quantum mechanics. The interplay between the subjects has been and still is active in a variety of areas.This volume focuses on representations of the universal enveloping algebra, covariant representations in general, and infinite-dimensional Lie algebras in particular. It also provides new applications of recent results on integrability of finite-dimensional Lie algebras. As a central theme, it is shown that a number of recent developments in operator algebras may be handled in a particularly e

Despite its popularity in consumer research, means-end chain (MEC) theory suffers from problems of unconfirmed validity. Theoretically, MECs can be cast as associative networks with a three-layered structure that should exhibit four properties: hierarchicity, automatic spreading activation, bidirectionality, and self-relevance. The predictions were tested in altogether six experiments, using the same basic methodology. Two sessions were held with each participant. In a pilot session, a set of conventional MEC representations was elicited from each participant using the laddering technique. From…

The authors propose a new method for the analysis of brain activation images that aims at detecting activated volumes rather than pixels. The method is based on Poisson process modeling, hierarchical description, and multiscale detection (MSD). Its performance has been assessed using both Monte Carlo simulated images and experimental PET brain activation data. As compared to other methods, the MSD approach shows enhanced sensitivity with a controlled overall type I error, and has the ability to provide an estimate of the spatial limits of the detected signals. It is applicable to any kind of difference image for which the spatial autocorrelation function can be approximated by a stationary Gaussian function.

An efficient technique for the analysis of electromagnetic scattering by arbitrarily shaped inhomogeneous dielectric objects is presented. The technique is based on a higher-order method of moments (MoM) solution of the volume integral equation. This higher-order MoM solution comprises recently … that the condition number of the resulting MoM matrix is reduced by several orders of magnitude in comparison to existing higher-order hierarchical basis functions and, consequently, an iterative solver can be applied even for high expansion orders. Numerical results demonstrate excellent agreement…

This descriptive qualitative study had the following objectives: identify the content and structure of social representations of quality of life and AIDS for persons living with the disease and analyze the structural relations between such representations. The sample included 103 persons with HIV in a municipality (county) in northern Rio de Janeiro State, Brazil. The methodology used free and hierarchical recall of words for the inductive terms "AIDS" and "quality of life for persons with AIDS", with analysis by the EVOC software. The probable core representation of AIDS was identified as: prejudice, treatment, family, and medications, with the same components identified for quality of life, plus healthy diet and work. We thus elaborated the hypothesis of joint, coordinated representational interaction, fitting the representations together, with implications for the symbolic grasp and quality of life for persons living with HIV. The findings provide backing for collective and individual health approaches to improve quality of life in this group.

In loop quantum cosmology, one has to make a choice of SU(2) irreducible representation in which to compute holonomies and regularize the curvature of the connection. The systematic choice made in the literature is to work in the fundamental representation, and very little is known about the physics associated with higher spin labels. This constitutes an ambiguity of which the understanding, we believe, is fundamental for connecting loop quantum cosmology to full theories of quantum gravity like loop quantum gravity, its spin foam formulation, or cosmological group field theory. We take a step in this direction by providing here a new closed formula for the Hamiltonian of flat Friedmann-Lemaître-Robertson-Walker models regularized in a representation of arbitrary spin. This expression is furthermore polynomial in the basic variables which correspond to well-defined operators in the quantum theory, takes into account the so-called inverse-volume corrections, and treats in a unified way two different regularization schemes for the curvature. After studying the effective classical dynamics corresponding to single and multiple-spin Hamiltonians, we study the behavior of the critical density when the number of representations is increased and the stability of the difference equations in the quantum theory.

Recognizing human actions in video sequences has been a challenging problem in recent years due to its real-world applications. Many action representation approaches have been proposed to improve action recognition performance. Despite the popularity of local feature-based approaches combined with the “Bag-of-Words” model for action representation, they fail to capture adequate spatial or temporal relationships. In an attempt to overcome this problem, trajectory-based local representation approaches have been proposed to capture the temporal information. This paper introduces an improvement of trajectory-based human action recognition approaches to capture discriminative temporal relationships. In our approach, we extract trajectories by tracking the detected spatio-temporal interest points, named “cuboid features”, by matching their SIFT descriptors over consecutive frames. We also propose a linking and exploring method to obtain efficient trajectories for motion representation in realistic conditions. The volumes around the trajectories’ points are then described to represent human actions based on the Bag-of-Words (BoW) model. Finally, a support vector machine is used to classify human actions. The effectiveness of the proposed approach was evaluated on three popular datasets (KTH, Weizmann and UCF Sports). Experimental results showed that the proposed approach yields considerable performance improvement over state-of-the-art approaches.
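
The pipeline above — quantize local descriptors into a visual vocabulary, build a per-clip histogram, then classify — can be sketched with toy data. This is a minimal illustration, not the authors' implementation: random vectors stand in for the trajectory-aligned SIFT/cuboid descriptors, a tiny hand-rolled k-means stands in for vocabulary building, and the final SVM step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_vocabulary(descriptors, k=8, iters=10):
    """Toy k-means that quantizes local descriptors into k 'visual words'."""
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = descriptors[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """L1-normalized histogram of visual-word occurrences for one clip."""
    d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    words = d.argmin(axis=1)
    h = np.bincount(words, minlength=len(centers)).astype(float)
    return h / h.sum()

# Synthetic 'trajectory descriptors' for two hypothetical action classes
walk = rng.normal(0.0, 0.3, size=(200, 16))
wave = rng.normal(2.0, 0.3, size=(200, 16))
vocab = build_vocabulary(np.vstack([walk, wave]))

h_walk = bow_histogram(walk, vocab)
h_wave = bow_histogram(wave, vocab)
```

In a full system, histograms such as `h_walk` and `h_wave` would be the per-video features fed to the SVM classifier.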

A novel hierarchical function action behavior mechanism (FABM) modeling framework is proposed to conduct intelligent mapping from the overall function to the principle solution, according to the requirements of customers. Based on the hierarchical modeling framework, an object-oriented representation method is developed to express the inheritance and interconnection characteristics between any two objects. In addition, rules of expansion and modification in demand behavior are proposed to solve the combinatorial explosion problem, and combination rules in the mechanism behavior are developed to extend the innovation of the principle solution. A case study on the pan mechanism design for a cooking robot is presented to demonstrate the implementation of intelligent reasoning based on the FABM model.

Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name “deep patient”. We evaluated this representation as broadly predictive of health states by assessing patients' probability of developing various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers was among the best. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.
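
The building block stacked three deep in this approach — a denoising autoencoder that reconstructs the *clean* input from a corrupted copy — can be sketched as a single tied-weight layer on synthetic binary "records". Everything here is illustrative (toy data, toy dimensions), not the Mount Sinai pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_dae_layer(X, hidden, noise=0.3, lr=0.5, epochs=300):
    """One denoising-autoencoder layer with tied weights: corrupt the input
    by masking, then reconstruct the uncorrupted input. Greedily stacking
    such layers yields a deep representation."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, size=(d, hidden))
    b = np.zeros(hidden)
    c = np.zeros(d)
    for _ in range(epochs):
        Xc = X * (rng.random(X.shape) > noise)  # masking corruption
        H = sig(Xc @ W + b)                     # encoder
        R = sig(H @ W.T + c)                    # decoder (tied weights)
        E = (R - X) * R * (1.0 - R)             # output-layer delta
        dH = (E @ W) * H * (1.0 - H)            # hidden-layer delta
        W -= lr * (Xc.T @ dH + E.T @ H) / n     # tied-weight gradient
        b -= lr * dH.mean(axis=0)
        c -= lr * E.mean(axis=0)
    return lambda Z: sig(Z @ W + b)             # the learned encoder

# Toy binary 'patient records': two groups with opposite feature patterns
A = (rng.random((100, 20)) < np.array([0.9] * 10 + [0.1] * 10)).astype(float)
B = (rng.random((100, 20)) < np.array([0.1] * 10 + [0.9] * 10)).astype(float)
X = np.vstack([A, B])

encode = train_dae_layer(X, hidden=5)
Z = encode(X)  # compact representation of each record
```

A second and third layer would be trained the same way on `Z`, giving the three-layer stack described in the abstract.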

The hierarchical HZSM-5 zeolite was prepared successfully by a simple NaOH treatment method. The concentration of the NaOH solution was carefully tuned to optimize the zeolite acidity and pore structure. Under NaOH treatment, a large number of mesopores, interconnected with the retained micropores, were created to improve mass transfer performance. There are very good correlations between the decline of the relative zeolite crystallinity and the loss of micropore volume. The Ni nanoclusters were uniformly confined in the mesopores of hierarchical HZSM-5 by the excessive impregnation method. The direct deoxygenation in N2 and the hydrodeoxygenation in H2 of methyl laurate were compared over the Ni/HZSM-5 catalysts. In the N2 atmosphere, the deoxygenation rate of methyl laurate on the Ni/HZSM-5 catalyst is relatively slow. In the presence of H2, the synergistic effect between the hydrogenation function of the metal and the acid function of the zeolite support makes the deoxygenation more pronounced. The yield of hydrocarbon products reached its maximum at the appropriate treatment concentration of 1 M NaOH, which could be attributed to the improved mass transfer in the hierarchical HZSM-5 supports.

This article reviews the electoral accountability dimension as a constitutive mechanism of Paraguayan democracy since 1989, analyzing the factors that limit the representation achieved in the administration of the Paraguayan government as a result of the electoral process. We provide an analytic contrast between the democratic principles that guide the Paraguayan electoral institutions and the way their designs are enforced, identifying the gap between formal and informal rules as a determinant of political representation. We also describe the barriers that prevent effective access of the population to political participation and competition, the advantages possessed by traditional political parties and interest groups, and their implications for democracy. Finally, we review the degree to which elected officials are representative of historically excluded social groups, emphasizing the way women, indigenous and peasant communities have limited power to exercise political influence due to structural and institutional constraints on participation.

Time has long been a major topic of study in social science, as in other sciences or in philosophy. Social scientists have tended to focus on collective representations of time, and on the ways in which these representations shape our everyday experiences. This contribution addresses work from such disciplines as anthropology, sociology and history. It focuses on several of the main theories that have preoccupied specialists in social science, such as the alleged "acceleration" of life and overgrowth of the present in contemporary Western societies, or the distinction between so-called linear and circular conceptions of time. The presentation of these theories is accompanied by some of the critiques they have provoked, in order to enable the reader to form her or his own opinion of them.

A new notion of controllability for quantum systems that takes advantage of the linear superposition of quantum states is introduced. We call this notion von Neumann controllability, and we show that it is strictly weaker than the usual notions of pure-state and operator controllability. We provide a simple and effective characterization of it using tools from the theory of unitary representations of Lie groups. In this sense, we are able to approach the problem of control of quantum states from a new perspective: that of the theory of unitary representations of Lie groups. A few examples of physical interest, as well as the particular instances of compact and nilpotent dynamical Lie groups, are discussed.

We define the Berry phase for Heisenberg operators. This definition is motivated by the calculation of the phase shifts by different techniques: the solution of the Heisenberg equations of motion, the solution of the Schrödinger equation in the coherent-state representation, and the direct computation of the evolution operator. Our definition of the Berry phase in the Heisenberg representation is consistent with the underlying supersymmetry of the model in the following sense. The structural blocks of the Hamiltonians of supersymmetric quantum mechanics ('superpairs') are connected by transformations that preserve the similarity in structure of the energy levels of superpairs. These transformations include phase transformations of the creation-annihilation operators, which are generated by adiabatic cyclic evolution of the parameters of the system.

This first text on the subject provides a comprehensive introduction to the representation theory of finite monoids. Carefully worked examples and exercises provide the bells and whistles for graduate accessibility, bringing a broad range of advanced readers to the forefront of research in the area. Highlights of the text include applications to probability theory, symbolic dynamics, and automata theory. Comfort with module theory, a familiarity with ordinary group representation theory, and the basics of Wedderburn theory, are prerequisites for advanced graduate level study. Researchers in algebra, algebraic combinatorics, automata theory, and probability theory, will find this text enriching with its thorough presentation of applications of the theory to these fields. Prior knowledge of semigroup theory is not expected for the diverse readership that may benefit from this exposition. The approach taken in this book is highly module-theoretic and follows the modern flavor of the theory of finite dimensional ...

A model of epidemic spreading in a population with a hierarchical structure of interpersonal interactions is described and investigated numerically. The structure of interpersonal connections is based on a scale-free network. Spatial localization of individuals belonging to different social groups, and the mobility of a contemporary community, as well as the effectiveness of different interpersonal interactions, are taken into account. Typical relations characterizing the spreading process, like a range of epidemic and epidemic curves, are discussed. The influence of preventive vaccinations on the spreading process is investigated. The critical value of preventively vaccinated individuals that is sufficient for the suppression of an epidemic is calculated. Our results are compared with solutions of the master equation for the spreading process and good agreement of the character of this process is found.

Epidemiological processes are studied within a recently proposed hierarchical network model using the susceptible-infected-refractory dynamics of an epidemic. Within the network model, a population may be characterized by H independent hierarchies or dimensions, each of which consists of groupings of individuals into layers of subgroups. Detailed numerical simulations reveal that for H>1, global spreading results regardless of the degree of homophily of the individuals forming a social circle. For H=1, a transition from global to local spread occurs as the population becomes decomposed into increasingly homophilous groups. Multiple dimensions in classifying individuals (nodes) thus make a society (computer network) highly susceptible to large-scale outbreaks of infectious diseases (viruses).
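
A minimal sketch of the susceptible-infected-refractory dynamics used in both epidemic studies above, run on a toy group-structured network. The clique-ring below is only a stand-in for the hierarchical/scale-free structures the papers actually analyze, and all parameters are illustrative:

```python
import random

def sir_on_network(adj, seed_node, p_infect=0.3, seed=42):
    """Susceptible-Infected-Refractory spread: each infected node gets one
    chance to infect every susceptible neighbour, then turns refractory."""
    rng = random.Random(seed)
    state = {v: "S" for v in adj}
    state[seed_node] = "I"
    infected = [seed_node]
    while infected:
        newly = []
        for v in infected:
            for w in adj[v]:
                if state[w] == "S" and rng.random() < p_infect:
                    state[w] = "I"
                    newly.append(w)
            state[v] = "R"
        infected = newly
    return sum(1 for s in state.values() if s == "R")  # final outbreak size

# Toy group-structured population: a ring of 10 tightly-knit groups of 5,
# linked by single 'bridge' contacts between neighbouring groups.
groups, size = 10, 5
adj = {v: set() for v in range(groups * size)}
for g in range(groups):
    members = range(g * size, (g + 1) * size)
    for a in members:                 # dense links inside each group
        for b in members:
            if a != b:
                adj[a].add(b)
    u, v = g * size, ((g + 1) % groups) * size
    adj[u].add(v)                     # sparse link between groups
    adj[v].add(u)

outbreak = sir_on_network(adj, seed_node=0)
```

Sweeping `p_infect` (or removing the bridge links) reproduces, in miniature, the local-versus-global spreading transition discussed above.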

The proposed segmentation approach naturally combines experience-based and image-based information. The experience-based information is obtained by training a classifier for each object class. For a given test image, the result of each classifier is represented as a probability map. The final segmentation is obtained with a hierarchical image segmentation algorithm that considers both the probability maps and image features such as color and edge strength. We also utilize the image region hierarchy to obtain not only local but also semi-global features as input to the classifiers. Moreover, to get robust probability maps, we take into account region context information by averaging the probability maps over different levels of the hierarchical segmentation algorithm. The obtained segmentation results are superior to those of state-of-the-art supervised image segmentation algorithms.

Television broadcasting over IP networks (IPTV) is one of a number of network applications that, besides media distribution, are also concerned with data acquisition from a group of information resources of variable size. IPTV uses the Real-time Transport Protocol (RTP) for media streaming and the RTP Control Protocol (RTCP) for session-quality feedback. Other applications, for example sensor networks, have data acquisition as their main task. Current solutions mostly have a scalability problem: how to collect and process information from a large number of end nodes quickly and effectively? The article deals with optimization of a hierarchical system of data acquisition. The problem is described mathematically, delay minima are sought, and the results are confirmed by simulations.

We present an approach to the analysis and optimization of heterogeneous distributed embedded systems. The systems are heterogeneous not only in terms of hardware components, but also in terms of communication protocols and scheduling policies. When several scheduling policies share a resource......, they are organized in a hierarchy. In this paper, we address design problems that are characteristic to such hierarchically scheduled systems: assignment of scheduling policies to tasks, mapping of tasks to hardware components, and the scheduling of the activities. We present algorithms for solving these problems....... Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example....

Since the introduction of the growing hierarchical self-organizing map, much work has been done on self-organizing neural models with a dynamic structure. These models allow adjusting the layers of the model to the features of the input dataset. Here we propose a new self-organizing model which is based on a probabilistic mixture of multivariate Gaussian components. The learning rule is derived from the stochastic approximation framework, and a probabilistic criterion is used to control the growth of the model. Moreover, the model is able to adapt to the topology of each layer, so that a hierarchy of dynamic graphs is built. This overcomes the limitations of the self-organizing maps with a fixed topology, and gives rise to a faithful visualization method for high-dimensional data.

This paper describes the directions and present status of research at ORNL in supervisory control for multimodular nuclear plants, part of DOE's advanced controls program ACTO. The hierarchical supervisory structure envisioned for a PRISM-like plant is presented next, together with how the supervisor closest to the process actuators has actually been implemented for demonstration in a network of CPUs. Two demonstrations of supervisory control with an expert system are also described: one for control of a plant with a single reactor and turbine, the other for control of a plant with three reactors and one turbine. An appendix contains the mathematical basis for the novel approach to large-scale system decomposition we have used in the demonstrations of supervisory distributed control of the single-reactor plant. 6 refs., 5 figs.

The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model, over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors, such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in the social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.
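
The hierarchical-regression logic — enter the control block first, then test how much additional R² the focal block explains — can be sketched on synthetic data. Variable names such as `effectuation` and all coefficients below are illustrative, not taken from the study:

```python
import numpy as np

def r_squared(X, y):
    """In-sample R^2 of an ordinary least-squares fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

rng = np.random.default_rng(7)
n = 300
controls = rng.normal(size=(n, 2))       # step 1: previously confirmed predictors
effectuation = rng.normal(size=(n, 2))   # step 2: focal block (illustrative name)
y = controls @ [0.5, -0.3] + effectuation @ [0.4, 0.2] + rng.normal(size=n)

r2_step1 = r_squared(controls, y)
r2_step2 = r_squared(np.column_stack([controls, effectuation]), y)
delta_r2 = r2_step2 - r2_step1  # unique variance credited to the new block
```

The increment `delta_r2` is the quantity hierarchical regression reports at each step; as the abstract notes, its size depends on which blocks were entered earlier.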

The adsorption behavior of a series of fluorocarbon derivatives was examined on a set of microporous metal organic framework (MOF) sorbents and another set of hierarchical mesoporous MOFs. The microporous M-DOBDC (M = Ni, Co) showed a saturation uptake capacity for R12 of over 4 mmol/g at a very low relative saturation pressure (P/Po) of 0.02. In contrast, the mesoporous MOF MIL-101 showed an exceptionally high uptake capacity reaching over 14 mmol/g at P/Po of 0.4. Adsorption affinity in terms of mass loading and isosteric heats of adsorption were found to generally correlate with the polarizability of the refrigerant with R12 > R22 > R13 > R14 > methane. These results suggest the possibility of exploiting MOFs for separation of azeotropic mixtures of fluorocarbons and use in eco-friendly fluorocarbon-based adsorption cooling and refrigeration applications.

In this paper, we propose a new method for the visual reorganization of online analytical processing (OLAP) cubes that aims at improving their visualization. Our method addresses dimensions with hierarchically organized members. It uses a genetic algorithm that reorganizes k-ary trees. Genetic operators perform permutations of subtrees to optimize a visual homogeneity function. We propose several ways to reorganize an OLAP cube depending on which set of members is selected for the reorganization: all of the members, only the displayed members, or the members at a given level (level by level approach). The results that are evaluated by using optimization criteria show that our algorithm has a reliable performance even when it is limited to 1 minute runs. Our algorithm was integrated in an interactive 3D interface for OLAP. A user study was conducted to evaluate our approach with users. The results highlight the usefulness of reorganization in two OLAP tasks.

The degeneracy of energy levels in a quantum dot of Hall fluid, leading to conductance peaks, can be readily derived from the partition functions of conformal field theory. Their complete expressions can be found for Hall states with both Abelian and non-Abelian statistics, upon adapting known results for the annulus geometry. We analyze the Abelian states with hierarchical filling fractions, ν = m/(mp ± 1), and find a non-trivial pattern of conductance peaks. In particular, each one of them occurs with a characteristic multiplicity, which is due to the extended symmetry of the m-folded edge. Experimental tests of the multiplicity can shed more light on the dynamics of this composite edge. (fast track communication)

A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning temporal evolution of concepts and relationships are not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work involving temporal representations in semantic graphs.

This thesis looks into the ways subjective dimension of experience could be represented in artificial, non-biological systems, in particular information systems. The pivotal assumption is that experience as opposed to mainstream thinking in information science is not equal to knowledge, so that experience is a broader term which encapsulates both knowledge and subjective, affective component of experience, which so far has not been properly embraced by knowledge representation theories. This ...

The chemical properties and abundance ratios of galaxies provide important information about their formation histories. Galactic chemical evolution has been modelled in detail within the monolithic collapse scenario. These models have successfully described the abundance distributions in our Galaxy and other spiral discs, as well as the trends of metallicity and abundance ratios observed in early-type galaxies. In the last three decades, however, the paradigm of hierarchical assembly in a Cold Dark Matter (CDM) cosmology has revised the picture of how structure in the Universe forms and evolves. In this scenario, galaxies form when gas radiatively cools and condenses inside dark matter haloes, which themselves follow dissipationless gravitational collapse. The CDM picture has been successful at predicting many observed properties of galaxies (for example, the luminosity and stellar mass function of galaxies, color-magnitude or star formation rate vs. stellar mass distributions, relative numbers of early and late-type galaxies, gas fractions and size distributions of spiral galaxies, and the global star formation history), though many potential problems and open questions remain. It is therefore interesting to see whether chemical evolution models, when implemented within this modern cosmological context, are able to correctly predict the observed chemical properties of galaxies. With the advent of more powerful telescopes and detectors, precise observations of chemical abundances and abundance ratios in various phases (stellar, ISM, ICM) offer the opportunity to obtain strong constraints on galaxy formation histories and the physics that shapes them. However, in order to take advantage of these observations, it is necessary to implement detailed modeling of chemical evolution into a modern cosmological model of hierarchical assembly.

Among clinically relevant imaging techniques, computed tomography (CT) reaches the best spatial resolution; sub-millimeter voxel sizes are regularly obtained. For investigations at the true micrometer level, lab-based μCT has become the gold standard. The aim of the present study is the hierarchical investigation of a human knee post mortem using hard X-ray μCT. After visualization of the entire knee using a clinical CT with spatial resolution in the sub-millimeter range, a hierarchical imaging study was performed using a laboratory μCT system (nanotom m). Due to the size of the whole knee, the pixel length could not be reduced below 65 μm. These first two data sets were directly compared after rigid registration using a cross-correlation algorithm. The μCT data set allowed an investigation of the trabecular structures of the bones. A further reduction of the pixel length down to 25 μm was achieved by removing the skin and soft tissues and measuring the tibia and the femur separately. True micrometer resolution was achieved after extracting cylinders of several millimeters' diameter from the two bones. The high-resolution scans revealed the mineralized cartilage zone, including the tidemark line, as well as individual calcified chondrocytes. The visualization of soft tissues, including cartilage, was achieved by X-ray grating interferometry (XGI) at the ESRF and the Diamond Light Source. Whereas the high-energy measurements at the ESRF allowed the simultaneous visualization of soft and hard tissues, the low-energy results from the Diamond Light Source made individual chondrocytes within the cartilage visible.

We present a high-angular-resolution map of the 850 μm continuum emission of the Orion Molecular Cloud-3 (OMC-3) obtained with the Submillimeter Array (SMA); the map is a mosaic of 85 pointings covering an approximate area of 6.5' × 2.0' (0.88 × 0.27 pc). We detect 12 spatially resolved continuum sources, each with an H₂ mass between 0.3 and 5.7 M☉ and a projected source size between 1400 and 8200 AU. All the detected sources lie on the filamentary main ridge (n(H₂) ≥ 10⁶ cm⁻³), and analysis based on the Jeans criterion suggests that they are most likely gravitationally unstable. Comparison of multi-wavelength data sets indicates that, of the continuum sources, 6/12 (50%) are associated with molecular outflows, 8/12 (67%) with infrared sources, and 3/12 (25%) with ionized jets. The evolutionary status of these sources ranges from prestellar cores to the protostar phase, confirming that OMC-3 is an active region with ongoing embedded star formation. We detect quasi-periodic separations between the OMC-3 sources of ≈17''/0.035 pc. This spatial distribution is part of a larger hierarchical structure that also includes fragmentation scales of the giant molecular cloud (≈35 pc), large-scale clumps (≈1.3 pc), and small-scale clumps (≈0.3 pc), suggesting that hierarchical fragmentation operates within the Orion A molecular cloud. The fragmentation spacings are roughly consistent with the thermal fragmentation length in large-scale clumps, while for small-scale cores they are smaller than the local fragmentation length. These smaller spacings observed with the SMA can be explained by a helical magnetic field, cloud rotation, and/or global filament collapse. Finally, possible evidence for sequential fragmentation is suggested in the northern part of the OMC-3 filament.
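
For reference, the thermal Jeans analysis invoked above compares clump sizes and masses against the Jeans length. In one standard isothermal form (sound speed c_s, gravitational constant G, mass density ρ, with the Jeans mass taken, by one common convention, as the mass in a sphere of diameter λ_J):

```latex
\lambda_J = c_s \sqrt{\frac{\pi}{G\rho}}, \qquad
M_J = \frac{4\pi}{3}\,\rho \left(\frac{\lambda_J}{2}\right)^{3}
```

Structures larger or more massive than these scales are unstable to gravitational collapse, which is the sense in which the detected cores are "most likely gravitationally unstable."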

Multi-label sentiment classification of customer reviews is a practical and challenging task in Natural Language Processing. In this paper, we propose a hierarchical multi-input and multi-output model based on a bi-directional recurrent neural network, which considers both the semantic and the lexical information of emotional expression. Our model applies two independent Bi-GRU layers to generate part-of-speech and sentence representations. The lexical information is then incorporated via attention over the output of a softmax activation on the part-of-speech representation. In addition, we combine the probabilities of auxiliary labels as features with the hidden layer to capture crucial correlations between output labels. The experimental results show that our model is computationally efficient and achieves substantial improvements on a customer reviews dataset.

Conceptual Modeling (CM) is a fundamental step in a simulation project. Nevertheless, it is only recently that structured approaches towards the definition and formulation of conceptual models have gained importance in the Discrete Event Simulation (DES) community. As a consequence, frameworks and guidelines for applying CM to DES have emerged and discussion of CM for DES is increasing. However, both the organization of model components and the identification of behavior and system control from standard CM approaches have shortcomings that limit CM's applicability to DES. Therefore, we discuss the different aspects of previous CM frameworks and identify their limitations. Further, we present the Hierarchical Control Conceptual Modeling framework, which pays more attention to the identification of a model's system behavior, control policies and dispatching routines, and their structured representation within a conceptual model. The framework guides the user step-by-step through the modeling process and is illustrated by a worked example.

This paper addresses the problems of graphical-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined by each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.

We consider the transformation properties of integer sequences arising from the normal ordering of exponentiated boson ([a, a†] = 1) monomials of the form exp[λ(a†)^r a], r = 1, 2, ..., under the composition of their exponential generating functions. They turn out to be of Sheffer type. We demonstrate that two key properties of these sequences remain preserved under substitutional composition: (a) the property of being the solution of the Stieltjes moment problem; and (b) the representation of these sequences through infinite series (Dobinski-type relations). We present a number of examples of such composition satisfying properties (a) and (b). We obtain new Dobinski-type formulae and solve the associated moment problem for several hierarchically defined combinatorial families of sequences.
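
The classic instance of such a Dobinski-type relation — the r = 1, λ = 1 case, where the normal-ordering sequence reduces to the Bell numbers B_n — expresses each integer term as an infinite series:

```latex
B_n = \frac{1}{e} \sum_{k=0}^{\infty} \frac{k^{n}}{k!}
```

The relations studied in the paper generalize this pattern to higher r and to hierarchically defined families of sequences.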

Conceptual Modeling (CM) is a fundamental step in a simulation project. Nevertheless, it is only recently that structured approaches towards the definition and formulation of conceptual models have gained importance in the Discrete Event Simulation (DES) community. As a consequence, frameworks and guidelines for applying CM to DES have emerged and discussion of CM for DES is increasing. However, both the organization of model components and the identification of behavior and system control from standard CM approaches have shortcomings that limit CM's applicability to DES. Therefore, we discuss the different aspects of previous CM frameworks and identify their limitations. Further, we present the Hierarchical Control Conceptual Modeling framework, which pays more attention to the identification of a model's system behavior, control policies, and dispatching routines and to their structured representation within a conceptual model. The framework guides the user step-by-step through the modeling process and is illustrated by a worked example.

Multinomial processing tree models are widely used in many areas of psychology. A hierarchical extension of the model class is proposed, using a multivariate normal distribution of person-level parameters with the mean and covariance matrix to be estimated from the data. The hierarchical model allows one to take variability between persons into…
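
The hierarchical idea can be sketched as follows (an assumed probit-scale parameterization with invented parameter names; the actual model estimates the person-level mean and covariance matrix from data):

```python
from statistics import NormalDist

# Sketch of the hierarchical MPT idea: each person's parameters are a
# group-level mean plus a person-specific deviation on the probit scale,
# mapped back to (0, 1) probabilities by the normal CDF.
phi = NormalDist().cdf

def person_parameters(group_mean, person_offset):
    return {name: phi(group_mean[name] + person_offset.get(name, 0.0))
            for name in group_mean}

# One-high-threshold tree as an example MPT: P(hit) = D + (1 - D) * g.
pars = person_parameters({"D": 0.0, "g": 0.0}, {"D": 1.0})
p_hit = pars["D"] + (1 - pars["D"]) * pars["g"]
print(round(p_hit, 4))  # -> 0.9207
```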

This paper investigates the differences in the discursive patterning of cases in Law and Management. It examines a corpus of 271 Law and Management cases and discusses the kind of information that these two disciplines call for and how discourses are constructed in discursive hierarchical patterns. A discursive hierarchical pattern is a model…

The procedure for hierarchical factoring suggested by Schmid and Leiman (1957) is applied within the framework of image analysis and orthoblique rotational procedures. It is shown that this approach necessarily leads to correlated higher order factors. Also, one can obtain a smaller number of factors than produced by typical hierarchical procedures.

of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find it provides a powerful addition to pangenome analysis. The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN (https...

Following criticism of Kohlberg’s theory of moral judgment, an empirical re-examination of hierarchical stage structure was desirable. Utilizing Piaget’s concept of reflective abstraction as a basis, the hierarchical stage structure was investigated using a new method. Study participants (553 Dutch

Large data sets are analyzed by hierarchical clustering using correlation as a similarity measure. This provides results that are superior to those obtained using a Euclidean distance similarity measure. A spatial continuity constraint may be applied in hierarchical clustering analysis of images.
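
A minimal illustration of hierarchical clustering with a correlation-based distance (a toy single-linkage merge, not the authors' implementation):

```python
# Toy single-linkage merge using 1 - Pearson correlation as the distance
# (illustrative only).
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def corr_distance(a, b):
    return 1 - pearson(a, b)

def single_linkage(items, threshold):
    """Merge clusters whose closest members lie within the threshold."""
    clusters = [[i] for i in range(len(items))]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(corr_distance(items[a], items[b])
                        for a in clusters[i] for b in clusters[j])
                if d <= threshold:
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

# Perfectly correlated series cluster together; anti-correlated ones do not,
# even though their Euclidean distances would suggest otherwise.
series = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]]
print(single_linkage(series, 0.1))  # -> [[0, 1], [2]]
```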

The study examines the general axiomatics of Balinski and Young and analyzes existing proportional representation methods using this approach. The second part of the paper provides a new axiomatics based on rational choice models. The new system of axioms is applied to study known proportional representation systems. It is shown that there is no proportional representation method satisfying a minimal set of the axioms (monotonicity and neutrality).
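
To make the object of study concrete, here is a standard divisor method (D'Hondt) in a few lines; the paper's axiomatic analysis itself is not reproduced here:

```python
# D'Hondt apportionment: repeatedly award a seat to the party with the
# highest quotient votes / (seats_won + 1).
def dhondt(votes, seats):
    alloc = {p: 0 for p in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc

print(dhondt({"A": 100, "B": 80, "C": 30}, 8))  # -> {'A': 4, 'B': 3, 'C': 1}
```

Divisor methods of this kind are house-monotone, which is one of the axioms the paper's impossibility result weighs against others.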

Effective use of mathematical representation is key to supporting student learning. In "Principles to Actions: Ensuring Mathematical Success for All" (NCTM 2014), "use and connect mathematical representations" is one of the effective Mathematics Teaching Practices. By using different representations, students examine concepts…

In this note we give a new representation for closed sets under which the robust zero set of a function is computable. We call this representation the component cover representation. The computation of the zero set is based on topological index theory, the most powerful tool for finding

There exist several theorems which state that when a matroid is representable over distinct fields F1,...,Fk , it is also representable over other fields. We prove a theorem, the Lift Theorem, that implies many of these results. First, parts of Whittle's characterization of representations of

This article deals with the equivalence of representations of behaviors of linear differential systems. In general, the behavior of a given linear differential system has many different representations. In this paper we restrict ourselves to kernel and image representations. Two kernel

An ice templating approach coupled with hard templating and physical activation is reported for the synthesis of hierarchically porous carbon monoliths with tunable porosities across all three length scales (macro-, meso-, and micro-), with ultrahigh specific pore volumes of ~11.4 cm^3 g^-1. The materials function well as amine-impregnated supports for CO2 capture and as supercapacitor electrodes.

This is the second part of the third volume of the four-volume series, a daring project of CEU Press, presenting the most important texts that triggered and shaped the processes of nation-building in the many countries of Central and Southeast Europe. The aim is to confront ‘mainstream’ and seemingly successful national discourses with each other, thus creating a space for analyzing those narratives of identity which became institutionalized as “national canons.” After the volumes focusing ...

Recent observations show that the space density of luminous active galactic nuclei (AGNs) peaks at higher redshifts than that of faint AGNs. This downsizing trend in the AGN evolution seems to be contradictory to the hierarchical structure formation scenario. In this study, we present the AGN space density evolution predicted by a semi-analytic model of galaxy and AGN formation based on the hierarchical structure formation scenario. We demonstrate that our model can reproduce the downsizing trend of the AGN space density evolution. The reason for the downsizing trend in our model is a combination of the cold gas depletion as a consequence of star formation, the gas cooling suppression in massive halos, and the AGN lifetime scaling with the dynamical timescale. We assume that a major merger of galaxies causes a starburst, spheroid formation, and cold gas accretion onto a supermassive black hole (SMBH). We also assume that this cold gas accretion triggers AGN activity. Since the cold gas is mainly depleted by star formation and gas cooling is suppressed in massive dark halos, the amount of cold gas accreted onto SMBHs decreases with cosmic time. Moreover, AGN lifetime increases with cosmic time. Thus, at low redshifts, major mergers do not always lead to luminous AGNs. Because the luminosity of AGNs is correlated with the mass of accreted gas onto SMBHs, the space density of luminous AGNs decreases more quickly than that of faint AGNs. We conclude that the anti-hierarchical evolution of the AGN space density is not contradictory to the hierarchical structure formation scenario.

The gecko relies on van der Waals forces to cling onto surfaces with a variety of topography and composition. The hierarchical fibrillar structures on their climbing feet, ranging from mesoscale to nanoscale, are hypothesized to be key elements for the animal to conquer both smooth and rough surfaces. An epoxy-based artificial hierarchical fibrillar adhesive was prepared to study the influence of the hierarchical structures on the properties of a dry adhesive. The presented experiments highlight the advantages of a hierarchical structure despite a reduction of overall density and aspect ratio of nanofibrils. In contrast to an adhesive containing only nanometer-size fibrils, the hierarchical fibrillar adhesives exhibited a higher adhesion force and better compliancy when tested on an identical substrate.

Recent advances have been made in the use of the p-type finite element method (FEM) for structural and fluid dynamics problems that hold promise for reactor physics problems. These advances include using hierarchic shape functions, element-by-element iterative solvers, and more powerful mapping techniques. Use of the hierarchic shape functions allows greater flexibility and efficiency in implementing energy-dependent flux expansions and incorporating localized refinement of the solution space. The irregular matrices generated by the p-type FEM can be solved efficiently using element-by-element conjugate gradient iterative solvers. These solvers do not require storage of either the global or local stiffness matrices and can be highly vectorized. Mapping techniques based on blending function interpolation allow exact representation of curved boundaries using coarse element grids. These features were implemented in a developmental two-dimensional neutron diffusion program based on the use of hierarchic shape functions (FEM2DH). Several aspects in the effective use of p-type analysis were explored. Two choices of elemental preconditioning were examined: the proper selection of the polynomial shape functions and the proper number of functions to use. Of the five shape function polynomials tested, the integral Legendre functions were the most effective. The serendipity set of functions is preferable over the full tensor product set. Two global preconditioners were also examined: simple diagonal and incomplete Cholesky. The full effectiveness of the finite element methodology was demonstrated on a two-region, two-group cylindrical problem, solved in the x-y coordinate space using a non-structured element grid. The exact, analytic eigenvalue solution was achieved with FEM2DH using various combinations of element grids and flux expansions.
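
The hierarchic (integrated-Legendre) shape functions mentioned above can be sketched directly from the Legendre recurrence; the normalization below is one common convention and is an assumption, not necessarily the one used in FEM2DH:

```python
import math

# Hierarchic shape functions on the reference element [-1, 1]:
# phi_j = (P_j - P_{j-2}) / sqrt(4j - 2) for j >= 2 vanishes at both
# endpoints, so raising p adds new functions without altering existing ones.
def legendre(n, x):
    """P_n(x) via the three-term recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def hierarchic_shape(j, x):
    return (legendre(j, x) - legendre(j - 2, x)) / math.sqrt(4 * j - 2)

# All higher-order modes vanish at the element endpoints.
print([round(hierarchic_shape(j, 1.0), 12) for j in (2, 3, 4)])  # -> [0.0, 0.0, 0.0]
```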

Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) Constructing a graph representation of the image based on initial segmentation and (ii) Hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.

In divisive normalization models of covert attention, spike rate modulations are commonly used as indicators of the effect of top-down attention. In addition, an increasing number of studies have shown that top-down attention increases the synchronization of neuronal oscillations as well, particularly those in gamma-band frequencies (25 to 100 Hz). Although modulations of spike rate and synchronous oscillations are not mutually exclusive as mechanisms of attention, there has thus far been little effort to integrate these concepts into a single framework of attention. Here, we aim to provide such a unified framework by expanding the normalization model of attention with a time dimension, allowing the simulation of a recently reported backward progression of attentional effects along the visual cortical hierarchy. A simple hierarchical cascade of normalization models simulating different cortical areas, however, leads to signal degradation and a loss of discriminability over time. To negate this degradation and ensure stable neuronal stimulus representations, we incorporate oscillatory phase entrainment into our model, a mechanism previously proposed as the communication-through-coherence (CTC) hypothesis. Our analysis shows that divisive normalization and oscillation models can complement each other in a unified account of the neural mechanisms of selective visual attention. The resulting hierarchical normalization and oscillation (HNO) model reproduces several additional spatial and temporal aspects of attentional modulation.

Various studies have focused on feature extraction methods for automatic patent classification in recent years. However, most of these approaches are based on knowledge from experts in related domains. Here we propose a hierarchical feature extraction model (HFEM) for multi-label mechanical patent classification, which is able to capture both local features of phrases as well as global and temporal semantics. First, an n-gram feature extractor based on convolutional neural networks (CNNs) is designed to extract salient local lexical-level features. Next, a long dependency feature extraction model based on the bidirectional long short-term memory (BiLSTM) neural network model is proposed to capture sequential correlations from higher-level sequence representations. Then the HFEM algorithm and its hierarchical feature extraction architecture are detailed. We establish training, validation, and test datasets, containing 72,532, 18,133, and 2,679 mechanical patent documents, respectively, and then evaluate the performance of the HFEM. Finally, we compare the results of the proposed HFEM with three other single neural network models, namely CNN, long short-term memory (LSTM), and BiLSTM. The experimental results indicate that our proposed HFEM outperforms the other compared models in both precision and recall.
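
The local lexical features such a CNN consumes are n-grams; a toy extractor (assumed whitespace tokenization, unrelated to the paper's pipeline) shows what those inputs look like:

```python
# Toy n-gram extractor illustrating the local lexical-level inputs.
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

doc = "a gear transmission mechanism".split()
print(ngrams(doc, 2))
# -> [('a', 'gear'), ('gear', 'transmission'), ('transmission', 'mechanism')]
```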

Divisive normalization models of covert attention commonly use spike rate modulations as indicators of the effect of top-down attention. In addition, an increasing number of studies have shown that top-down attention increases the synchronization of neuronal oscillations as well, particularly in gamma-band frequencies (25-100 Hz). Although modulations of spike rate and synchronous oscillations are not mutually exclusive as mechanisms of attention, there has thus far been little effort to integrate these concepts into a single framework of attention. Here, we aim to provide such a unified framework by expanding the normalization model of attention with a multi-level hierarchical structure and a time dimension; allowing the simulation of a recently reported backward progression of attentional effects along the visual cortical hierarchy. A simple cascade of normalization models simulating different cortical areas is shown to cause signal degradation and a loss of stimulus discriminability over time. To negate this degradation and ensure stable neuronal stimulus representations, we incorporate a kind of oscillatory phase entrainment into our model that has previously been proposed as the "communication-through-coherence" (CTC) hypothesis. Our analysis shows that divisive normalization and oscillation models can complement each other in a unified account of the neural mechanisms of selective visual attention. The resulting hierarchical normalization and oscillation (HNO) model reproduces several additional spatial and temporal aspects of attentional modulation and predicts a latency effect on neuronal responses as a result of cued attention.
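
The divisive normalization at the core of such models can be sketched in schematic scalar form (assuming a single pooled suppressive drive; the gains and the semi-saturation constant are invented):

```python
# Schematic divisive normalization with attention: each response is the
# attention-weighted excitatory drive divided by the summed drive plus a
# semi-saturation constant sigma.
def normalized_response(drive, attn_gain, sigma):
    excitatory = [d * g for d, g in zip(drive, attn_gain)]
    suppressive = sum(excitatory) + sigma
    return [e / suppressive for e in excitatory]

# Attending to the first of two equal stimuli boosts its response at the
# expense of the unattended one.
print(normalized_response([10.0, 10.0], attn_gain=[2.0, 1.0], sigma=10.0))
# -> [0.5, 0.25]
```

The HNO model additionally stacks such stages hierarchically and modulates them over time, which this static sketch omits.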

Relational databases are the current standard for storing and retrieving data in the pharmaceutical and biotech industries. However, retrieving data from a relational database requires specialized knowledge of the database schema and of the SQL query language. At Anadys, we have developed an easy-to-use system for searching and reporting data in a relational database to support our drug discovery project teams. This system is fast and flexible and allows users to access all data without having to write SQL queries. This paper presents the hierarchical, graph-based metadata representation and SQL-construction methods that, together, are the basis of this system's capabilities.
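
The idea of constructing SQL from a metadata graph can be illustrated with a toy join-path builder (schema, table, and column names are invented, not Anadys's):

```python
# Toy metadata-driven SQL construction: a graph of known join conditions
# lets a query builder emit joins so users never write SQL by hand.
JOINS = {("compound", "assay"): "compound.id = assay.compound_id"}

def build_query(columns, tables):
    sql = "SELECT " + ", ".join(columns) + " FROM " + tables[0]
    for prev, nxt in zip(tables, tables[1:]):
        sql += f" JOIN {nxt} ON {JOINS[(prev, nxt)]}"
    return sql

print(build_query(["compound.name", "assay.ic50"], ["compound", "assay"]))
# -> SELECT compound.name, assay.ic50 FROM compound JOIN assay ON compound.id = assay.compound_id
```

In a real system the join path between the user's chosen columns would be found by traversing the metadata graph rather than listed explicitly.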

This paper considers an approach for representing nuclear data that is qualitatively different from the approach currently adopted by the nuclear science community. Specifically, we examine a representation in which complicated data is described through collections of distinct and self-contained simple data structures. This structure-based representation is compared with the ENDF and ENDL formats, which can be roughly characterized as dictionary-based representations. A pilot data representation for replacing the format currently used at LLNL is presented. Examples are given, as is a discussion of the promises and shortcomings associated with moving from traditional dictionary-based formats to a structure-rich or class-like representation.

The Dirac representation theory usually deals with the amplitude formalism of quantum theory. An introduction is given to a theory of some other representations, which are applicable in the density matrix formalism and can naturally be called phase space representations (PSRs). They use terms of phase space variables (x and p simultaneously) and give a description close to the classical phase space description. Definitions and algebraic properties are given in quantum mechanics for such PSRs as the Wigner representation, the coherent state representation, and others. Completeness relations of a matrix type are used as a starting point. The case of quantum field theory is also outlined.

With advances in porous carbon synthesis techniques, hierarchically porous carbon (HPC) materials are being utilized as relatively new porous carbon sorbents for CO2 capture applications. These HPC materials were used as a platform to prepare samples with differing textural properties and morphologies to elucidate structure-property relationships. It was found that high microporous content, rather than overall surface area, was of primary importance for predicting good CO2 capture performance. Two HPC materials were analyzed, each with near-identical high surface area (~2700 m^2/g) and colossally high pore volume (~10 cm^3/g), but with different microporous content and pore size distributions, which led to dramatically different CO2 capture performance. Overall, large pore volumes obtained from distinct mesopores were found to significantly impact adsorption performance. From these results, an optimized HPC material was synthesized that achieved a high CO2 capacity of ~3.7 mmol/g at 25°C and 1 bar.

I generalize to the case of gauge groups over non-trivial principal bundles the representations that I. M. Gelfand, M. I. Graev and A. M. Versik constructed for current groups. The gauge group of the principal G-bundle P over M (G a Lie group with a Euclidean structure, M a compact, connected and oriented manifold), as the smooth sections of the associated group bundle, is presented and studied in chapter I. Chapter II describes the symmetric algebra associated to a Hilbert space, its Hilbert structure, a convenient exponential and a total set that later play a key role in the construction of the representation. Chapter III is concerned with the calculus needed to make the space of Lie algebra valued 1-forms a Gaussian L^2-space. This is accomplished by studying general projective systems of finitely measurable spaces and the corresponding systems of sigma-additive measures, all of these leading to the description of a promeasure, a concept modeled after Bourbaki and classical measure theory. In the case of a locally convex vector space E, the corresponding Fourier transform, family of characters and the existence of a promeasure for every quadratic form on E' are established, so the Gaussian L^2-space associated to a real Hilbert space is constructed. Chapter III finishes by exhibiting the explicit Hilbert space isomorphism between the Gaussian L^2-space associated to a real Hilbert space and the complexification of its symmetric algebra. In chapter IV, taking as the Hilbert space H the L^2-space of the Lie algebra valued 1-forms on P, the gauge group acts on the motion group of H, defining in a straightforward fashion the desired representation.

The basic equations of quantum scattering are translated into the Wigner representation. This puts quantum mechanics in the form of a stochastic process in phase space. Instead of complex-valued wavefunctions and transition matrices, one now works with real-valued probability distributions and source functions, objects more responsive to physical intuition. Aside from writing out certain necessary basic expressions, the main purpose is to develop and stress the interpretive picture associated with this representation and to derive results used in applications published elsewhere. The quasiclassical guise assumed by the formalism lends itself particularly to approximations of complex multiparticle scattering problems. The foundation is laid for a systematic application of statistical approximations to such problems. The form of the integral equation for scattering as well as its multiple scattering expansion in this representation are derived. Since this formalism remains unchanged upon taking the classical limit, these results also constitute a general treatment of classical multiparticle collision theory. Quantum corrections to classical propagators are discussed briefly. The basic approximation used in the Monte Carlo method is derived in a fashion that allows for future refinement and includes bound state production. The close connection that must exist between inclusive production of a bound state and of its constituents is brought out in an especially graphic way by this formalism. In particular one can see how comparisons between such cross sections yield direct physical insight into relevant production mechanisms. A simple illustration of scattering by a bound two-body system is treated. Simple expressions for single- and double-scattering contributions to total and differential cross sections, as well as for all necessary shadow corrections thereto, are obtained and compared to previous results of Glauber and Goldberger.

The field of Action Recognition has seen a large increase in activity in recent years. Much of the progress has been through incorporating ideas from single-frame object recognition and adapting them for temporal-based action recognition. Inspired by the success of interest points in the 2D spatial domain, their 3D (space-time) counterparts typically form the basic components used to describe actions, and in action recognition the features used are often engineered to fire sparsely. This is to ensure that the problem is tractable; however, this can sacrifice recognition accuracy, as it cannot be assumed that the optimum features in terms of class discrimination are obtained from this approach. In contrast, we propose to initially use an overcomplete set of simple 2D corners in both space and time. These are grouped spatially and temporally using a hierarchical process, with an increasing search area. At each stage of the hierarchy, the most distinctive and descriptive features are learned efficiently through data mining. This allows large amounts of data to be searched for frequently reoccurring patterns of features. At each level of the hierarchy, the mined compound features become more complex, discriminative, and sparse. This results in fast, accurate recognition with real-time performance on high-resolution video. As the compound features are constructed and selected based upon their ability to discriminate, their speed and accuracy increase at each level of the hierarchy. The approach is tested on four state-of-the-art data sets: the popular KTH data set, to provide a comparison with other state-of-the-art approaches, and the Multi-KTH data set, to illustrate performance at simultaneous multiaction classification, despite no explicit localization information being provided during training. Finally, the recent Hollywood and Hollywood2 data sets provide challenging complex actions taken from commercial movie sequences. For all four data sets, the proposed hierarchical

A spectral representation of stationary 2-point functions is investigated based on the operator formalism in stochastic quantization. Assuming the existence of asymptotic non-interacting fields, we can diagonalize the total Hamiltonian in terms of asymptotic fields and show that the correlation length along the fictitious time is proportional to the physical mass expected in the usual field theory. A relation between renormalization factors in the operator formalism is derived as a byproduct and its validity is checked with the perturbative results calculated in this formalism. (orig.)

The result of more than 15 years of collective research, Multimedia Ontology: Representation and Applications provides a theoretical foundation for understanding the nature of media data and the principles involved in its interpretation. The book presents a unified approach to recent advances in multimedia and explains how a multimedia ontology can fill the semantic gap between concepts and the media world. It relates real-life examples of implementations in different domains to illustrate how this gap can be filled. The book contains information that helps with building semantic, content-based

In the standard interpretation of quantum mechanics, the state is described by an abstract wave function in the representation space. Conversely, in a realistic interpretation, the quantum state is replaced by a probability distribution of physical quantities. Bohm mechanics is a consistent example of realistic theory, where the wave function and the particle positions are classically defined quantities. Recently, we proved that the probability distribution in a realistic theory cannot be a quadratic function of the quantum state, in contrast to the apparently obvious suggestion given by the Born rule for transition probabilities. Here, we provide a simplified version of this proof.

Evolved representations in evolutionary computation are often fragile, which can impede representation-dependent mechanisms such as self-adaptation. In contrast, evolved representations in nature are robust, evolvable, and creatively exploit available representational features. This paper provides...

In this work, we aim to prepare effective and long-term stable hierarchical silver nanostructures serving as surface-enhanced Raman scattering (SERS) substrates simply via a displacement reaction on aluminum foils. In our experiments, hexadecyltrimethylammonium bromide (CTAB) is used as a cationic surfactant to control the velocity of the displacement reaction as well as the hierarchical morphology of the product. We find that the volume ratio of CTAB to AgNO3 plays a dominant role in regulating the hierarchical structures, besides the influence of displacement reaction time. These as-prepared hierarchical morphologies demonstrate excellent SERS sensitivity, structural stability and reproducibility, with relative standard deviations of less than 20%. A high SERS analytical enhancement factor of ~6.7 × 10^8 is achieved even at a Crystal Violet (CV) concentration as low as 10^-7 M, which is sufficient for single-molecule detection. The detection limit of CV is 10^-9 M in this study. We believe that this simple and rapid approach, integrating the advantages of low-cost production and high reproducibility, would be a promising way to facilitate routine SERS detection and will find wide applications in chemical synthesis.
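
The analytical enhancement factor quoted above is conventionally computed as AEF = (I_SERS/c_SERS)/(I_ref/c_ref); the intensities below are made-up placeholders chosen only to show the arithmetic:

```python
# Conventional SERS analytical enhancement factor: the analyte signal per
# unit concentration on the SERS substrate, divided by the same ratio for
# the non-enhanced reference measurement. Intensities here are invented.
def analytical_ef(i_sers, c_sers, i_ref, c_ref):
    return (i_sers / c_sers) / (i_ref / c_ref)

print(f"{analytical_ef(670.0, 1e-7, 0.01, 1e-3):.3g}")  # -> 6.7e+08
```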

Hierarchical mesoporous LiNi1/3Co1/3Mn1/3O2 spheres have been synthesized by a urea-assisted solvothermal method with the addition of Triton X-100. The structure and morphology of the as-prepared materials were analyzed by X-ray diffraction and electron microscopy. The results show that the as-prepared samples can be indexed as a hexagonal layered structure with hierarchical architecture, and a possible formation mechanism is proposed. When evaluated as a cathode material, the hierarchical mesoporous LiNi1/3Co1/3Mn1/3O2 spheres show good electrochemical properties, with a high initial discharge capacity of 129.9 mAh g^-1, and retain a discharge capacity of 95.5 mAh g^-1 after 160 cycles at 10C. The excellent electrochemical performance of the as-prepared sample can be attributed to its stable hierarchical mesoporous framework in conjunction with large specific surface area, low cation mixing and small particle size. These features not only provide a large number of reaction sites for surface or interface reactions, but also shorten the diffusion length of Li+ ions. Meanwhile, the mesoporous spheres composed of nanoparticles contribute to high rate capability and buffer volume changes during the charge/discharge process.

In this work, hierarchically porous MgCo2O4 nanochain networks were successfully synthesized by a novel template-free method realized via a facile solvothermal synthesis followed by a heat treatment. The morphologies of the MgCo2O4 precursor could be adjusted from nanosheets to nanobelts and finally to interwoven nanowires, depending on the volume ratio of diethylene glycol to deionized water in the solution. After calcination, the interwoven precursor nanowires were transformed into hierarchical MgCo2O4 nanochain networks with macro-/meso-porosity, which are composed of 10-20 nm nanoparticles connected one by one. Moreover, the formation mechanism of the MgCo2O4 nanochain networks is discussed. More importantly, when evaluated as a catalytic additive for AP thermal decomposition, the MgCo2O4 nanochain networks show an excellent accelerating effect. This benefit stems from the unique hierarchically porous network structure and multicomponent effect, which effectively accelerates ammonia oxidation and ClO4^- species dissociation. This approach opens the way to design other hierarchically porous multicomponent metal oxides.

Normal-tension glaucoma (NTG) is a heterogeneous disease, and there is still controversy about subclassifications of this disorder. On the basis of spectral-domain optical coherence tomography (SD-OCT), we subdivided NTG with hierarchical cluster analysis using optic nerve head (ONH) parameters and retinal nerve fiber layer (RNFL) thicknesses. A total of 200 eyes of 200 NTG patients between March 2011 and June 2012 underwent SD-OCT scans to measure ONH parameters and RNFL thicknesses. We classified NTG into homogeneous subgroups based on these variables using a hierarchical cluster analysis, and compared clusters to evaluate diverse NTG characteristics. Three clusters were found after hierarchical cluster analysis. Cluster 1 (62 eyes) had the thickest RNFL and widest rim area, and showed early glaucoma features. Cluster 2 (60 eyes) was characterized by the largest cup/disc ratio and cup volume, and showed advanced glaucomatous damage. Cluster 3 (78 eyes) had small disc areas in SD-OCT and comprised patients with significantly younger age, longer axial length, and greater myopia than the other 2 groups. A hierarchical cluster analysis of SD-OCT scans divided NTG patients into 3 groups based upon ONH parameters and RNFL thicknesses. It is anticipated that the small disc area group, comprising younger and more myopic patients, may show unique features unlike the other 2 groups.

Young massive star clusters (YMCs) spanning 10^4-10^8 M⊙ in mass generally have similar radial surface density profiles, with an outer power-law index typically between -2 and -3. This similarity suggests that they are shaped by scale-free physics at formation. Recent multi-physics MHD simulations of YMC formation have also produced populations of YMCs with this type of surface density profile, allowing us to narrow down the physics necessary to form a YMC with properties as observed. We show that the shallow density profiles of YMCs are a natural result of phase-space mixing that occurs as they assemble from the clumpy, hierarchically clustered configuration imprinted by the star formation process. We develop physical intuition for this process via analytic arguments and collisionless N-body experiments, elucidating the connection between star formation physics and star cluster structure. This has implications for the early-time structure and evolution of proto-globular clusters, and prospects for simulating their formation in the FIRE cosmological zoom-in simulations.
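The outer power-law index quoted above can be measured from a radial profile with a simple log-log fit. The sketch below uses a synthetic profile with a known index of -2.5, not simulation or observational data.

```python
# Illustrative sketch: estimating the outer power-law index alpha of a
# radial surface density profile, Sigma(r) ~ r**alpha, via a log-log
# linear fit. The profile here is synthetic with alpha = -2.5.
import numpy as np

r = np.logspace(0, 1.5, 30)      # radii sampling the outer region
sigma = r ** -2.5                # synthetic Sigma(r)
alpha = np.polyfit(np.log(r), np.log(sigma), 1)[0]
print(round(alpha, 2))           # recovers the input index, -2.5
```

For real data one would restrict the fit to the outer radii, where the profile is closest to a pure power law.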

Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences of most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
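The importance-sampling idea underlying the tractability argument can be illustrated in one dimension: expectations under a target density p are estimated from samples drawn under a simpler proposal q, reweighted by p/q. The densities below are toy Gaussians, not the paper's lensing posteriors.

```python
# Illustrative sketch of self-normalized importance sampling with toy
# 1-D Gaussian densities (an assumption for illustration; the paper
# applies the idea to per-patch galaxy models and a global shear model).
import numpy as np

rng = np.random.default_rng(1)

def log_p(x):   # target: N(1, 0.5^2)
    return -0.5 * ((x - 1.0) / 0.5) ** 2 - np.log(0.5 * np.sqrt(2 * np.pi))

def log_q(x):   # proposal: N(0, 1)
    return -0.5 * x ** 2 - np.log(np.sqrt(2 * np.pi))

x = rng.normal(size=100_000)            # draws from the proposal q
w = np.exp(log_p(x) - log_q(x))         # importance weights p/q
est = np.sum(w * x) / np.sum(w)         # self-normalized estimate of E_p[x]
print(round(est, 2))                    # close to the target mean, 1.0
```

The self-normalized form is what makes it possible to reuse samples drawn once per small imaging area when the global parameters change.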

Hierarchical crowdsourcing networks (HCNs) provide a useful mechanism for social mobilization. However, spontaneous evolution of the complex resource allocation dynamics can lead to undesirable herding behaviours in which a small group of reputable workers is overloaded while other workers are left idle. Existing herding control mechanisms designed for typical crowdsourcing systems are not effective in HCNs. To bridge this gap, we investigate the herding dynamics in HCNs and propose a Lyapunov optimization-based decision support approach - the Reputation-aware Task Sub-delegation approach with dynamic worker effort Pricing (RTS-P) - with objective functions aiming to achieve superlinear time-averaged collective productivity in an HCN. By considering the workers' current reputation, workload, eagerness to work, and trust relationships, RTS-P provides a systematic approach to mitigate herding by helping workers make joint decisions on task sub-delegation, task acceptance, and effort pricing in a distributed manner. It is an individual-level decision support approach which results in the emergence of productive and robust collective patterns in HCNs. High-resolution simulations demonstrate that RTS-P mitigates herding more effectively than state-of-the-art approaches.
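The intuition behind reputation-aware herding mitigation can be shown with a toy scoring rule: a worker's attractiveness grows with reputation but is discounted by current workload, so tasks spread out rather than piling onto the single most reputable worker. This is a hypothetical illustration only, not the RTS-P algorithm; all names and weights are assumptions.

```python
# Toy sketch (NOT the RTS-P algorithm): pick a worker by reputation
# discounted by current queue length, so reputable-but-busy workers
# are passed over in favour of capable, lightly loaded ones.
workers = {
    "a": {"rep": 0.9, "load": 5},   # most reputable, but overloaded
    "b": {"rep": 0.8, "load": 1},   # reputable and lightly loaded
    "c": {"rep": 0.3, "load": 0},   # idle, but low reputation
}

def score(w):
    # reputation helps; a growing queue penalizes the worker
    return w["rep"] / (1 + w["load"])

best = max(workers, key=lambda k: score(workers[k]))
print(best)   # "b" wins: reputation weighed against workload
```

A pure reputation ranking would herd every task onto worker "a"; the workload discount is the simplest form of the trade-off the abstract describes.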

Morphological reconstruction based on geodesic operators is a powerful tool in mathematical morphology. The general definition of this reconstruction supposes the use of a marker function that is not necessarily related to the function g to be built. This paper, however, deals with operations where the marker function is defined from given characteristic regions of the initial function f, as is the case, for instance, for the extrema (maxima or minima) but also for the saddle zones. Firstly, we show that the intuitive definition of a saddle zone is not easy to handle, especially when digitised images are involved. However, some of these saddle zones (regional ones, also called overflow zones) can be defined, and this definition provides a simple algorithm to extract them. The second part of the paper is devoted to the use of these overflow zones as markers in image reconstruction. This reconstruction provides a new function which exhibits a new hierarchy of extrema. This hierarchy is equivalent to the hierarchy produced by the so-called waterfall algorithm. We explain why the waterfall algorithm can be achieved by performing a watershed transform of the function reconstructed by its initial watershed lines. Finally, some examples of use of this hierarchical segmentation are described.
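Grayscale reconstruction from a marker derived from the function itself, as described above, can be sketched with scikit-image's geodesic reconstruction. The example below uses the classic h-maxima marker f - h on a toy profile (an illustration of marker-based reconstruction in general, not of the paper's overflow-zone markers).

```python
# Illustrative sketch: grayscale morphological reconstruction by
# dilation, with the marker defined from the function itself
# (h-maxima marker f - h). The residue f - rec isolates the two
# regional maxima of the toy profile.
import numpy as np
from skimage.morphology import reconstruction

mask = np.array([[0., 1., 4., 1., 0., 2., 5., 2., 0.]])  # toy profile f
seed = mask - 3                      # marker f - h, here h = 3
rec = reconstruction(seed, mask, method="dilation")
print((mask - rec)[0])               # nonzero only at the two maxima
```

Replacing the h-maxima marker with a marker built from overflow zones (or from watershed lines, as in the waterfall discussion) changes which structures survive the reconstruction, which is exactly the mechanism the abstract exploits.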

Two new networks are introduced that exhibit small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. The two networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2√(log_2 N), exhibits 'ballistic' diffusion (d_w = 1), and has a non-trivial ferromagnetic transition, T_c > 0. This suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)
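The general construction principle - a one-dimensional backbone overlaid with a hierarchy of ever-longer, ever-sparser links - can be sketched generically. This is an illustrative hierarchical small-world graph, not the paper's exact 3- or 4-regular networks.

```python
# Illustrative sketch (generic, not the paper's exact constructions):
# a ring backbone of N sites plus a hierarchical sequence of long-range
# links at distances 2, 4, 8, ..., with fewer links at each level.
N = 16
edges = {frozenset((i, (i + 1) % N)) for i in range(N)}  # 1-D ring backbone

d = 2
while d < N:
    for i in range(0, N, 2 * d):          # half as many links per level
        edges.add(frozenset((i, (i + d) % N)))
    d *= 2

# 16 ring edges + 4 + 2 + 1 hierarchical long-range links = 23
print(len(edges))
```

Keeping the degree exactly regular at every site, as the paper's networks do, requires a more careful placement of the long-range links; the point here is only the mix of a real-space backbone with a hierarchical overlay.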