It is well known that, amid ongoing developments in the economic sector and the financial crises occurring throughout the world, volatility measurement is one of the most important concepts in financial time series analysis. In this paper we therefore discuss the volatility of the Amman stock market (Jordan) over a certain period of time. Since the wavelet transform is one of the best-known filtering methods and has grown rapidly in popularity over the last decade, we compare it with the traditional technique, the fast Fourier transform, to determine which method is better for analyzing volatility. The comparison is carried out on several statistical properties using the Matlab program.

Clustering is one of the most interesting data mining topics and can be applied in many fields. Recently, the problem of cluster analysis has been formulated as a problem of nonsmooth, nonconvex optimization, and an algorithm for solving it based on nonsmooth optimization techniques has been developed. This optimization problem has a number of characteristics that make it challenging: it has many local minima, the optimization variables can be either continuous or categorical, and there are no exact analytical derivatives. In this study we show how to apply a particular class of optimization methods, known as pattern search methods, to address these challenges. These methods do not explicitly use derivatives, an important feature that has not been addressed in previous studies. Results of numerical experiments are presented which demonstrate the effectiveness of the proposed method.
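A minimal sketch of the derivative-free idea behind pattern search (here a compass search), applied for illustration to a k-means-style clustering objective on a made-up one-dimensional data set; the data, the objective, and the parameter choices are assumptions, not the paper's formulation.

```python
# Compass (pattern) search: poll +/- step along each coordinate, accept any
# improving point, and halve the step when no poll point improves.
# No derivatives of the objective are ever used.

def sse(centers, data):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min((x - c) ** 2 for c in centers) for x in data)

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:          # accept an improving poll point
                    x, fx = trial, ft
                    improved = True
        if not improved:
            step /= 2                # contract the mesh
            if step < tol:
                break
    return x, fx

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]     # two obvious clusters
centers, obj = compass_search(lambda c: sse(c, data), [0.0, 2.0])
# The centers drift toward the two cluster means (about 1.0 and 5.07).
```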

In this paper, the concepts of dichotomous logistic regression (DLR) with leave-one-out (L-O-O) are discussed. To illustrate this, L-O-O was run to determine the importance of the simulation conditions for robust tests of spread procedures with good Type I error rates. The resulting model was then evaluated. The discussion covers 1) assessment of the accuracy of the model, and 2) the parameter estimates. These are presented and illustrated by modeling the relationship between the dichotomous dependent variable (Type I error rates) and a set of independent variables (the simulation conditions). The base SAS software, containing PROC LOGISTIC and DATA step functions, can be used to carry out the DLR analysis.

The social force model, which belongs to the microscopic pedestrian studies, has been regarded by many researchers as preeminent, largely because of its main feature of reproducing the self-organized phenomena that result from pedestrian dynamics. The preferred force, a measure of a pedestrian's motivation to adapt his actual velocity to his desired velocity, is an essential term on which the model was built. This force has gone through several stages of development: first, Helbing and Molnar (1995) modeled the original force for the normal situation. Second, Helbing and his co-workers (2000) incorporated the panic situation into this force by introducing a panic parameter. Third, Lakoba and Kaup (2005) gave the pedestrians some kind of intelligence by incorporating aspects of decision-making capability. In this paper, the authors analyze the most important additions to the model regarding the preferred force and compare the different factors of these additions. Furthermore, to enhance the decision-making ability of the pedestrians, they introduce additional features, such as a familiarity factor, into the preferred force to make it more representative of what actually happens in reality.
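The preferred (driving) force discussed above is commonly written as F = m (v0 e - v) / tau: the pedestrian of mass m relaxes from the actual velocity v toward the desired speed v0 in the desired direction e over relaxation time tau. A small sketch of that term, with purely illustrative parameter values:

```python
# Preferred force of the social force model: F = m * (v0 * e - v) / tau.
# The numbers below (mass, desired speed, relaxation time) are illustrative
# assumptions, not values taken from the paper.

def preferred_force(m, v0, e, v, tau):
    """Componentwise 2D driving force toward the desired velocity v0 * e."""
    return tuple(m * (v0 * ei - vi) / tau for ei, vi in zip(e, v))

# An 80 kg pedestrian wanting 1.3 m/s along +x, currently moving at 0.5 m/s.
F = preferred_force(m=80.0, v0=1.3, e=(1.0, 0.0), v=(0.5, 0.0), tau=0.5)
# F is approximately (128.0, 0.0): the pedestrian accelerates along +x.
```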

Mathematical justifications are given for a simulation technique for multivariate non-Gaussian random processes and fields based on Rosenblatt's transformation of Gaussian processes. Several types of convergence are established for the approximating sequence. Moreover, an original numerical method is proposed for solving the functional equation that yields the autocorrelation function of the underlying Gaussian process.
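For orientation only, here is the elementary marginal transformation underlying such constructions: a standard Gaussian sample g is mapped to a target marginal F via x = F^-1(Phi(g)). The exponential target, the rate, and the sample size are illustrative assumptions; the full Rosenblatt transformation handles the multivariate, dependent case treated in the paper.

```python
# Map standard Gaussian draws to an exponential marginal via the
# probability-integral transform x = F^{-1}(Phi(g)).

import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gauss_to_exponential(g, rate):
    """Inverse exponential CDF applied to Phi(g)."""
    u = phi(g)
    return -math.log(1.0 - u) / rate

random.seed(0)
xs = [gauss_to_exponential(random.gauss(0, 1), rate=2.0) for _ in range(20000)]
mean = sum(xs) / len(xs)   # should be close to 1 / rate = 0.5
```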

A spanning tree of a connected graph is a tree consisting of all the vertices and some, or possibly all, of the edges of the connected graph. In this paper, a model for the spanning tree transformation of connected graphs into single-row networks, namely Spanning Tree of Connected Graph Modeling (STCGM), is introduced. The model contains the Path-Growing Tree-Forming algorithm, applied with Vertex-Prioritized, to produce the spanning tree from the connected graph. Paths are produced by Path-Growing, and they are combined into a spanning tree by Tree-Forming. The spanning tree produced from the connected graph is then transformed into a single-row network using Tree Sequence Modeling (TSM). Finally, the single-row routing problem is solved using a method called Enhanced Simulated Annealing for Single-Row Routing (ESSR).
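For readers unfamiliar with the basic notion, a spanning tree of a connected graph can be extracted with a plain depth-first search; this is a generic illustration, not the Path-Growing Tree-Forming algorithm of the paper.

```python
# DFS spanning tree: every vertex is reached, and exactly n - 1 of the
# graph's edges are kept, so the result is a tree spanning all vertices.

def spanning_tree(adj, root):
    """Return the edge list of a DFS spanning tree; adj maps vertex -> neighbours."""
    visited = {root}
    tree_edges = []
    stack = [root]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                tree_edges.append((u, v))
                stack.append(v)
    return tree_edges

# A small connected graph on 5 vertices containing a cycle (0-1-2-0).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
edges = spanning_tree(adj, 0)
# An n-vertex connected graph always yields n - 1 tree edges here: 4 edges.
```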

In the present article, a new class of solutions of the Einstein field equations is investigated for a spherically symmetric space-time when the source of gravitation is a perfect fluid. All the solutions have been derived by making suitable rearrangements of the field equations. The solutions so obtained are seen to describe Schwarzschild interior solutions. Most of the solutions are subjected to the reality conditions. As far as the authors are aware, the solutions are new.

The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the DSA; it is a digital signature scheme designed to produce a signature based on a secret number known only to the signer and on the actual message being signed. Such digital signatures are considered the digital counterparts of handwritten signatures and are the basis for validating the authenticity of a connection. The security of these schemes rests on the infeasibility of computing the signature without the private key. In this paper we propose a development of the original ECDSA with greater complexity.
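A toy sketch of standard ECDSA signing and verification over the tiny textbook curve y^2 = x^3 + 2x + 2 over F_17 with base point G = (5, 1) of order 19; this illustrates the original scheme only, not the paper's proposed development. Real deployments use standardized curves and cryptographic hash functions; the "digest" z here is just a small integer, and the key and nonce are arbitrary illustrative values.

```python
# ECDSA on a toy curve. Requires Python 3.8+ for pow(x, -1, m).

p, a, b = 17, 2, 2          # curve parameters: y^2 = x^3 + a x + b over F_p
G, n = (5, 1), 19           # base point and its prime order

def inv(x, m):
    return pow(x, -1, m)    # modular inverse

def add(P, Q):
    """Elliptic-curve point addition; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def sign(d, z, k):
    """r = x-coordinate of kG mod n;  s = k^{-1}(z + r d) mod n."""
    r = mul(k, G)[0] % n
    s = inv(k, n) * (z + r * d) % n
    return r, s

def verify(Q, z, r, s):
    u1, u2 = z * inv(s, n) % n, r * inv(s, n) % n
    X = add(mul(u1, G), mul(u2, Q))
    return X is not None and X[0] % n == r

d = 7                       # private key, known only to the signer
Q = mul(d, G)               # public key
z = 12                      # "message digest", already reduced mod n
r, s = sign(d, z, k=11)     # k must be a fresh secret nonce per signature
ok = verify(Q, z, r, s)     # True for the genuine signature
```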

Implicit block methods based on the backward differentiation formulae (BDF) for the solution of stiff initial value problems (IVPs) using variable step size are derived. We construct a variable step size block method that stores all the coefficients of the method, with a simplified strategy for controlling the step size that aims to optimize performance in terms of precision and computation time. The strategy involves keeping the step size constant, halving it, or increasing it to 1.9 times the previous step size. The decision to change the step size is determined by the local truncation error (LTE). Numerical results are provided to support the enhancement of the method.
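The constant/halve/1.9x rule above can be sketched as a small decision function; the tolerance and the "smooth enough to enlarge" band are illustrative assumptions, not values from the paper.

```python
# Step-size controller driven by the local truncation error (LTE):
# halve on rejection, grow by a factor of 1.9 when the error is very
# small, otherwise keep the step constant.

def next_step(h, lte, tol):
    """Return (new step size, whether the current step is accepted)."""
    if lte > tol:               # step rejected: halve and recompute
        return h / 2.0, False
    if lte < 0.1 * tol:         # solution very smooth: enlarge the step
        return 1.9 * h, True
    return h, True              # otherwise keep the step size constant

h, accepted = next_step(0.1, lte=1e-3, tol=1e-6)
# lte exceeds tol, so the step is halved and must be recomputed.
```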

The quality of Ribbed Smoked Sheets (RSS) is based primarily on color, dryness, and the presence or absence of fungus and bubbles. This quality is strongly influenced by the drying and fumigation process, namely the smoking process. Smoking at high temperature for a long time produces scorched, dark brown sheets, whereas too low a temperature or too slow a drying rate results in undercured sheets and fungal growth. It is therefore necessary to find the time and temperature that give optimum sheet quality. Moreover, unmonitored heat and mass transfer during the smoking process leads to high losses in the energy balance. This research aims to develop simple empirical mathematical models describing the effect of smoking time and temperature on RSS quality in terms of color, water content, fungus, and bubbles. The second goal of the study was to analyze the energy balance during the smoking process. An experimental study was conducted by measuring the temperature, residence time, and quality parameters of 16 sheet samples in smoking rooms. Data for the energy consumption balance, such as the mass of fuel wood, the mass of sheets being smoked, the construction temperature, the ambient temperature, and the relative humidity, were taken directly throughout the smoking process.
the smoking process. It was found that mathematical model
correlating smoking temperature and time with color is Color
= -169 - 0.184 T4 - 0.193 T3 - 0.160 0.405 T1 + T2 + 0.388 t1
+3.11 t2 + 3.92t3 + 0.215 t4 with R square 50.8% and with
moisture is Moisture = -1.40-0.00123 T4 + 0.00032 T3 +
0.00260 T2 - 0.00292 T1 - 0.0105 t1 + 0.0290 t2 + 0.0452 t3
+ 0.00061 t4 with R square of 49.9%. Smoking room energy
analysis found useful energy was 27.8%. The energy stored in
the material construction 7.3%. Lost of energy in conversion
of wood combustion, ventilation and others were 16.6%. The
energy flowed out through the contact of material construction
with the ambient air was found to be the highest contribution
to energy losses, it reached 48.3%.

Recently, the MEG iterative scheme has been demonstrated to accelerate the convergence rate in solving systems of linear equations generated from approximation equations of boundary value problems. Based on the same scheme, the aim of this paper is to investigate the capability of a family of four-point block iterative methods with a weighted parameter ω, namely the 4 Point-EGSOR, 4 Point-EDGSOR, and 4 Point-MEGSOR, in solving two-dimensional elliptic partial differential equations using the second-order finite difference approximation. The formulation and implementation of the three four-point block iterative methods are also presented. Finally, the experimental results show that the 4 Point-MEGSOR iterative scheme is superior to the existing four-point block schemes.
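To show the role of the weighted parameter ω, here is plain point-SOR for the 2D Laplace equation on a uniform grid with fixed boundary values. This is only the scalar iteration underlying the block variants; the grid, boundary data, and ω value are illustrative assumptions, and the 4-point block schemes of the paper update groups of points at once.

```python
# Point-SOR: each interior value is relaxed toward the Gauss-Seidel
# average of its four neighbours, over-relaxed by the factor w (omega).

def sor_laplace(u, w, sweeps):
    """In-place SOR sweeps over the interior of grid u; returns u."""
    n, m = len(u), len(u[0])
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                gs = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
                u[i][j] += w * (gs - u[i][j])   # over-relaxed update
    return u

# 5x5 grid: top boundary row fixed at 1, all other boundary values 0.
u = [[1.0] * 5] + [[0.0] * 5 for _ in range(4)]
sor_laplace(u, w=1.5, sweeps=200)
# At convergence each interior value equals the average of its neighbours.
```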

Assume that we have m identical graphs, each consisting of a path with k vertices, where k is a positive integer. In this paper, we discuss a certain labelling of the m graphs, called c-Erdösian for some positive integer c. We regard labellings of the vertices of the graphs by positive integers, which induce the edge labels of the paths as the sum of the two incident vertex labels. They have the property that each vertex label and edge label appears only once in the set of positive integers {c, . . . , c + 6m - 1}. Here, we show how to construct certain c-Erdösian labellings of m paths with 2 and 3 vertices by using Skolem sequences.
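For readers unfamiliar with the ingredient used above: a Skolem sequence of order m places each k in {1, ..., m} at two positions exactly k apart, and such sequences exist iff m ≡ 0 or 1 (mod 4). A small backtracking search is enough to exhibit one; the labelling construction itself is not reproduced here.

```python
# Backtracking construction of a Skolem sequence of order m: a list of
# length 2m in which each value k in 1..m occurs at two positions i and
# i + k. Returns None when no such sequence exists.

def skolem(m):
    seq = [0] * (2 * m)
    def place(k):
        if k == 0:
            return True
        for i in range(2 * m - k):
            if seq[i] == 0 and seq[i + k] == 0:
                seq[i] = seq[i + k] = k
                if place(k - 1):
                    return True
                seq[i] = seq[i + k] = 0   # undo and backtrack
        return False
    return seq if place(m) else None

s4 = skolem(4)   # a valid order-4 sequence, e.g. [4, 2, 3, 2, 4, 3, 1, 1]
s2 = skolem(2)   # None: no Skolem sequence of order 2 exists (2 mod 4 = 2)
```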

A topologically oriented neural network is very efficient for real-time path planning for a mobile robot in changing environments. When a recurrent neural network is used for this purpose, combining the partial differential equation of heat transfer with the distributed potential concept of the network allows the obstacle-avoidance problem of trajectory planning for a moving robot to be solved efficiently. The related dimensional network represents the state variables and the topology of the robot's working space. In this paper, two approaches to solving the problem are proposed. The first approach relies on a potential distribution of attraction placed around the moving target, acting as a unique local extremum in the net, with the gradient of the state variables directing the current flow toward the source of the potential heat. The second approach considers two potential sources, one attractive and one repulsive, to decrease the time of potential distribution. Computer simulations have been carried out to examine the performance of the proposed approaches.

In this note, we demonstrate explicit LU factorizations of Toeplitz matrices for some small sizes. Furthermore, we obtain the inverses of the Toeplitz matrices in question by applying the above-mentioned results.
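As a numerical companion to such explicit factorizations (not the note's closed forms themselves), a Doolittle LU factorization can be applied to a small Toeplitz matrix; the matrix below is an illustrative example chosen so that no pivoting is needed.

```python
# Doolittle LU without pivoting: A = L U with L unit lower triangular.

def lu(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

# A Toeplitz matrix: constant along each diagonal.
A = [[2.0, -1.0, 0.0],
     [1.0,  2.0, -1.0],
     [0.0,  1.0,  2.0]]
L, U = lu(A)
```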

With the increasing spread of computers and the internet among culturally, linguistically, and geographically diverse communities, issues of internationalization and localization are becoming increasingly important. For some of these issues, such as different scales for length and temperature, there is a well-developed measurement theory. For others, such as date formats, no such theory is possible. This paper fills a gap by developing a measurement theory for a class of scales previously overlooked, based on discrete and interval-valued scales such as spanner and shoe sizes. The paper gives a theoretical foundation for a class of data representation problems.

In wavelet regression, choosing the threshold value is a crucial issue. Too large a value cuts too many coefficients, resulting in oversmoothing; conversely, too small a threshold value allows many coefficients into the reconstruction, giving a wiggly estimate and thus undersmoothing. The proper choice of threshold is therefore a careful balance between these principles. This paper gives a very brief introduction to some threshold selection methods, including universal, SURE, EBayes, two-fold cross-validation, and level-dependent cross-validation. A simulation study over a variety of sample sizes, test functions, and signal-to-noise ratios is conducted to compare their numerical performance under three different noise structures. For Gaussian noise, EBayes outperforms the others in all cases for all test functions, while two-fold cross-validation provides the best results in the case of long-tailed noise. For large signal-to-noise ratios, level-dependent cross-validation works well in the correlated noise case. As expected, increasing both the sample size and the signal-to-noise ratio increases estimation efficiency.
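The simplest rule mentioned above, the universal threshold lambda = sigma * sqrt(2 ln n) combined with soft thresholding, can be sketched in a few lines; the coefficient values and noise level below are made-up stand-ins for the detail coefficients of a wavelet decomposition.

```python
# Universal thresholding: small (noise-like) coefficients are set to
# zero, large ones survive but are shrunk toward zero by lambda.

import math

def soft(x, lam):
    """Soft thresholding: shrink toward zero, kill coefficients below lam."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def universal_threshold(coeffs, sigma):
    n = len(coeffs)
    lam = sigma * math.sqrt(2.0 * math.log(n))
    return [soft(c, lam) for c in coeffs]

coeffs = [5.0, -0.3, 0.1, -4.2, 0.2, 0.05, 3.1, -0.4]
den = universal_threshold(coeffs, sigma=0.5)
# Here lam ~ 1.02, so only the coefficients 5.0, -4.2, and 3.1 survive.
```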

A linear two-point boundary value problem of order two is solved using an extended cubic B-spline interpolation method. There is one free parameter, λ, that controls the tension of the solution curve. For some values of λ, this method produces better results than the cubic B-spline interpolation method.

This paper presents a new methodology for selecting test cases from regression test suites. The selection strategy is based on analyzing the dynamic behavior of applications written in any programming language. Methods based on dynamic analysis are safer and more efficient. We design a technique that combines the code-based and model-based techniques to allow comparison of the object-oriented structure of an application written in any programming language. We have developed a prototype tool that detects changes and selects test cases from the test suite.

Classification is an interesting problem in functional data analysis (FDA), because many problems in science and applications reduce to classification, such as recognition, prediction, control, decision making, and management. Owing to the high dimensionality and high correlation of functional data (FD), extracting features from FD while preserving its global characteristics is a key problem that strongly affects classification efficiency and precision. In this paper, a novel automatic method that combines a Genetic Algorithm (GA) with a classification algorithm to extract classification features is proposed. In this method, the optimal features and the classification model are approached step by step through evolutionary search. Theoretical analysis and experimental tests show that this method improves classification efficiency, precision, and robustness while using fewer features, and that the dimension of the extracted classification features can be controlled.

Optical networks use a routing tool called the Latin router. These routers use particular algorithms for routing; for example, the LDF algorithm uses backtracking (one of the CSP methods) for problem solving. In this paper, we propose new approaches to completing the routing table (the DRA and CRA algorithms), compare them with previously proposed methods, and show that the number of backtracks, the amount of blocking, and the run time for the DRA algorithm are lower than for the LDF and CRA algorithms.

In this paper, the Fuzzy Autocatalytic Set (FACS) is composed into an Omega Algebra by embedding the membership values of fuzzy edge connectivity using the property of transitive affinity. It is then shown that the Omega Algebra of FACS is a transformation semigroup, which is a special class of semigroup.

The link between Gröbner bases and linear algebra was described by Lazard [4,5], who realized that the Gröbner basis computation could be achieved by applying Gaussian elimination over Macaulay's matrix. In this paper, we indicate how the same technique may be applied to SAGBI-Gröbner basis computations in invariant rings.

The aim of this paper is to review some standard facts on Miura curves. We give some easy theorems in number theory to define Miura curves, and then present a new implementation of Arita's algorithm for Miura curves.