In this paper, we propose a framework for accurate plant modeling constrained by actual plant-light interaction over a time interval. To this end, several plant models have been generated using data from different sources, such as LiDAR scanning, optical cameras and multispectral sensors. In contrast to previous approaches, which mostly focus on realistic rendering, the main objective of our method is to improve the multiview stereo reconstruction of plant structures and to predict the growth of existing plants according to the influence of real light incidence. Our experiments focus on olive trees, which are formed by many thin branches and dense foliage; plant reconstruction is therefore a challenging task due to self-occlusion. Our approach is based on inverse modeling to generate a parametric model that describes how plants evolve over a time interval by considering the surrounding environment. A multispectral sensor has been used to characterize the input plant models from reflectance values for each narrow band. We propose the fusion of these heterogeneous data to achieve a more accurate modeling of plant structure and a better prediction of branching fate.
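As a minimal illustration of how narrow-band reflectance can characterize a plant model, the sketch below attaches a standard vegetation index (NDVI) to each reconstructed point. The point format and band choice are assumptions for the example, not the paper's actual pipeline.

```python
# Hypothetical sketch: deriving a per-point vegetation index (NDVI) from
# narrow-band reflectance values. NDVI = (NIR - Red) / (NIR + Red).

def ndvi(nir_reflectance: float, red_reflectance: float) -> float:
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    denom = nir_reflectance + red_reflectance
    if denom == 0.0:
        return 0.0
    return (nir_reflectance - red_reflectance) / denom

# Attach an NDVI value to each reconstructed point (x, y, z, nir, red).
points = [(0.1, 0.2, 1.5, 0.45, 0.08), (0.3, 0.1, 1.2, 0.50, 0.06)]
indexed = [(x, y, z, ndvi(nir, red)) for (x, y, z, nir, red) in points]
```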

The identification of specific landmarks in tissues is fundamental for understanding human anatomy in medical applications. Specifically, the assessment of bone features allows the detection of several pathologies in orthopedics. Landmark recognition has traditionally been carried out by visual identification, which provides insufficient accuracy. Automatic solutions are required to improve precision and to minimize diagnostic and surgical planning time. In this paper, we study distal humerus landmarks and present a new algorithm to detect them automatically in a reasonable time. Our technique does not require prior training, as a geometrical approach with a spatial decomposition is used to explore several regions of interest of the bone. Finally, a set of experiments is performed, showing promising results.

The virtual representation of bone tissue is one of the pending challenges of computer graphics in the field of traumatology. This advance could reduce the time and effort currently spent on the analysis of a bank of medical images, which is done manually. Our proposal aims to lay the foundations of the elements that must be taken into account not only geometrically, but also from a medical point of view. In this article we focus on the segmentation of a bone model, establish the limits for its representation, and introduce the main characteristics of the microstructures that form bone tissue.

Remote sensing images are the main and most relevant products of this technology, given their numerous applications in the most diverse areas of knowledge. In this context, simulating these products can significantly reduce costs and time, as well as assist in the laboratory design stages of future sensors. One of the challenges of simulation is to narrow as much as possible the gap between it and the reality one wishes to study. The purpose of this work is, starting from a brief review of methods for simulating passive sensor images, to propose a classification for them, to cite some examples of each, to present the conceptual model being developed, and to mention the aspects that provide versatility and functionality, as well as some results.

The generation of a virtual representation of bones and bone fragments is an artificial step required to obtain models that are helpful to work with in a simulation. Nowadays, the Marching Cubes algorithm is the de facto standard for generating geometric models from medical images. However, bone fragment models generated by Marching Cubes are huge and contain many unconnected geometric elements inside the bone due to the trabecular tissue. The development of new methods that generate geometrically simple 3D models from CT image stacks while preserving the original information extracted from them would be of great interest. To that end, we present a preliminary study for the development of a new method to generate triangle meshes from segmented medical images. The method does not modify the points extracted from the CT images and avoids generating triangles inside the bone. The aim of this initial study is to analyse whether a spatial decomposition may help in the process of generating a triangle mesh using a divide-and-conquer approach. The method is still under development, and therefore this paper only presents some initial results and exposes the detected issues to be improved.
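To give an idea of the divide-and-conquer ingredient, the sketch below recursively splits a 3D point cloud into octants until each cell holds few points; each small cell could then be triangulated independently. This is an illustrative octree-style decomposition, not the paper's method, and `leaf_size` is an arbitrary example parameter.

```python
# Illustrative sketch: divide-and-conquer spatial decomposition of a 3D
# point cloud into octants until each cell holds at most `leaf_size` points.

def split_octants(points, leaf_size=8):
    if len(points) <= leaf_size:
        return [points]                      # leaf cell: ready to triangulate
    xs, ys, zs = zip(*points)
    cx = (min(xs) + max(xs)) / 2.0           # centre of the bounding box
    cy = (min(ys) + max(ys)) / 2.0
    cz = (min(zs) + max(zs)) / 2.0
    octants = {}
    for p in points:
        key = (p[0] >= cx, p[1] >= cy, p[2] >= cz)
        octants.setdefault(key, []).append(p)
    if len(octants) == 1:                    # cannot split further (duplicates)
        return [points]
    cells = []
    for subset in octants.values():
        cells.extend(split_octants(subset, leaf_size))
    return cells
```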

Despite the existence and popularity of many new and classical computer languages, the evolutionary algorithm community has mostly exploited a few popular ones, avoiding the rest, especially if they are not compiled, under the assumption that compiled languages are always faster than interpreted languages. Wide-ranging performance analyses of evolutionary algorithm implementations usually focus on algorithmic details and data structures, but they are usually limited to specific languages. In this paper we measure the execution speed of three common operations in genetic algorithms in many popular and emerging computer languages, using different data structures and implementation alternatives, with several objectives: to create a ranking for these operations, to compare relative speeds taking into account different chromosome sizes and data structures, and to dispel or support several hypotheses that underlie most popular evolutionary algorithm libraries and applications. We find that there is indeed a basis to consider compiled languages, such as Java, faster in a general sense, but there are other languages, including interpreted ones, that can hold their ground against them.
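A minimal sketch of this kind of micro-benchmark: timing a bit-flip mutation on chromosomes of growing size, here using a Python list of booleans as the data structure. The operation, sizes and repetition count are examples, not the paper's exact experimental setup.

```python
# Micro-benchmark sketch: time a bit-flip mutation for several chromosome
# sizes, using Python lists of booleans as the chromosome representation.
import random
import timeit

def bitflip(chromosome):
    """Flip one randomly chosen gene; returns a mutated copy."""
    i = random.randrange(len(chromosome))
    mutated = chromosome[:]
    mutated[i] = not mutated[i]
    return mutated

for size in (1024, 4096, 16384):
    chromosome = [random.random() < 0.5 for _ in range(size)]
    seconds = timeit.timeit(lambda: bitflip(chromosome), number=1000)
    print(f"size={size:6d}  1000 mutations: {seconds:.4f} s")
```

The same loop can be repeated with other representations (e.g. bit strings or arrays) to compare data structures, which is the kind of comparison the study performs across languages.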

Illumination and shadows are essential to obtain realistic virtual environments. Nevertheless, large scenes such as urban cities demand a huge amount of geometry that must somehow be structured or reduced in order to be manageable. In this paper we propose a novel real-time method to determine the shadowed and illuminated areas in large scenes, especially suitable for urban environments. Our approach uses the polar diagram as a tessellation of the plane, together with a ray-casting process, to obtain the visible areas. This solution derives the exact illuminated area with high performance. Moreover, our approach is also used to determine the portion of the scene visible from a pedestrian viewpoint. As a result, we only have to render the visible part of the scene, which is considerably smaller than the global scene.

Although the reconstruction of 3D models from medical images is not an easy task, there are many algorithms to perform it. However, the reconstructed models are usually large, contain many outliers and often lack a correct topology. To interact with these models, the methods must be fast and robust. In this paper, we present an application that enables interaction with models reconstructed from medical images. The application uses Marching Cubes to generate triangle soups from the medical scans. The user can then define models by selecting sets of triangles and, once the models have been defined, interact with them. In addition, a detailed collision detection is computed between the models in the scene, not only to prevent them from interpenetrating, but also to determine which triangles overlap. The calculation of distances and nearest points provides visual aid while the user is interacting with the models. Finally, the Leonar3Do system has been incorporated to improve the interaction and to provide stereoscopic visualization. The presented application can be used in education, since users can manipulate individual body parts to examine them. Moreover, it can be used in the preparation of an intervention, or even as a guide during it, since it works with models reconstructed from real medical scans.
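As a naive sketch of the distance/nearest-point aid described above, the snippet below brute-forces the closest vertex of one model to a query point. A real-time implementation would use a spatial index instead of a linear scan; the function name and data layout are illustrative assumptions.

```python
# Naive nearest-vertex query: scan all vertices and keep the closest one.
import math

def nearest_vertex(query, vertices):
    """Return (distance, vertex) for the vertex closest to `query`."""
    best_d, best_v = math.inf, None
    for v in vertices:
        d = math.dist(query, v)
        if d < best_d:
            best_d, best_v = d, v
    return best_d, best_v
```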

Several algorithms have been proposed over the years to solve the ray-triangle intersection test. In this paper we collect the most prominent solutions and describe how to parallelize them on modern programmable graphics processing units (GPUs) by means of NVIDIA CUDA. The paper also provides a comprehensive performance analysis based on several optional features and optimizations (such as back-face culling and the use of pre-computed values) that allowed us to determine the influence of each factor on performance. Finally, we analyze the architecture of the GPU and its impact on the parallel implementation of each method, as well as the approach used to achieve high-performance fine-grained parallel computation of the ray-triangle test.
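For reference, one of the classic solutions to this test is the Möller-Trumbore algorithm; a scalar, pure-Python sketch (without back-face culling) is shown below. This is a readable reference version, not the CUDA kernel evaluated in the paper.

```python
# Scalar Möller-Trumbore ray-triangle intersection (no back-face culling).

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Return the hit distance t along the ray, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direc, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                 # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det      # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direc, qvec) * inv_det     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None
```

Pre-computing quantities such as the edge vectors per triangle, one of the optimizations analyzed in the paper, trades memory for arithmetic in exactly this inner loop.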

This paper presents an experimental study in which the effectiveness of the L-Co-R method is tested. L-Co-R is a co-evolutionary algorithm for time series forecasting that evolves, on the one hand, RBFNs, building an appropriate network architecture, and, on the other hand, sets of time lags that represent the time series in order to perform the forecasting, using at the same time its own forecasted values. This co-evolutionary approach makes it possible to divide the main problem into two subproblems, where every individual of one population cooperates with the individuals of the other. The goal of this work is to analyze the results obtained by L-Co-R in comparison with other methods from the time series forecasting field. To that end, 20 time series and 5 different methods found in the literature have been selected, and 3 distinct quality measures have been used to report the results. Finally, a statistical study confirms the good results of L-Co-R in most cases.
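To illustrate the lag representation only: a set of time lags turns a series into supervised (inputs, target) patterns, which is what the lag population encodes. The lag set below is an arbitrary example, not one evolved by L-Co-R.

```python
# Build supervised patterns from a series given a set of time lags:
# inputs at step t are the values series[t - lag] for each lag.

def lagged_patterns(series, lags):
    """Return a list of (inputs, target) pairs defined by the lag set."""
    start = max(lags)
    patterns = []
    for t in range(start, len(series)):
        inputs = [series[t - lag] for lag in lags]
        patterns.append((inputs, series[t]))
    return patterns

series = [1, 2, 3, 4, 5, 6, 7, 8]
print(lagged_patterns(series, lags=[1, 3]))
```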