SIP (Session Initiation Protocol), whose text-based call control messaging is simple and efficient, has recently been widely adopted in VoIP networks. For authentication and authorization purposes there are many approaches and considerations for securing SIP against forgery and for protecting the integrity of SIP messages. Elliptic Curve Cryptography (ECC), meanwhile, offers significant advantages over other Public Key Cryptography (PKC) systems, such as smaller key sizes and faster computations, which make data transmission more secure and efficient. In this work a new approach is proposed for secure SIP authentication using an ECC-based public key exchange mechanism. By adopting an elliptic-curve key exchange, the total execution times and memory requirements of the proposed scheme are improved in comparison with non-elliptic approaches.
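To make the elliptic-curve key exchange concrete, the following sketch runs Diffie-Hellman over a deliberately tiny textbook curve (y^2 = x^3 + 2x + 2 over F_17, base point (5,1)); the private keys are arbitrary illustrative values, and a real SIP deployment would use a standardized curve such as P-256.

```python
# Toy elliptic-curve Diffie-Hellman over a tiny curve (illustration only).
P_MOD = 17          # field prime
A, B = 2, 2         # curve y^2 = x^3 + 2x + 2 (mod 17)
G = (5, 1)          # base point

def inv(x):
    # modular inverse via Fermat's little theorem
    return pow(x, P_MOD - 2, P_MOD)

def add(p, q):
    # group law on the curve; None represents the point at infinity
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p == q:
        lam = (3*x1*x1 + A) * inv(2*y1) % P_MOD
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P_MOD
    x3 = (lam*lam - x1 - x2) % P_MOD
    y3 = (lam*(x1 - x3) - y1) % P_MOD
    return (x3, y3)

def mul(k, p):
    # double-and-add scalar multiplication
    r = None
    while k:
        if k & 1:
            r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

a_priv, b_priv = 3, 7                 # hypothetical private keys
A_pub, B_pub = mul(a_priv, G), mul(b_priv, G)
shared_a = mul(a_priv, B_pub)         # both sides compute (a*b)G
shared_b = mul(b_priv, A_pub)
```

Both parties derive the same shared point, which would then seed the SIP authentication credential.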

Decision feedback equalizers are commonly employed to reduce the error caused by intersymbol interference. Here, an adaptive decision feedback equalizer is presented with a new adaptation algorithm. The algorithm follows a block-based approach to the normalized least mean square (NLMS) algorithm with set-membership filtering and achieves significantly lower computational complexity than its conventional NLMS counterpart with set-membership filtering. The results show that the proposed algorithm yields comparable bit error rate performance over a reasonable range of signal-to-noise ratios.
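As an illustration of the set-membership idea the abstract builds on, the sketch below runs a set-membership NLMS update on a toy two-tap channel (the channel coefficients and the error bound γ are made-up values); coefficients are updated only when the a-priori error exceeds γ, which is where the complexity saving comes from.

```python
import random
random.seed(0)

h = [0.8, -0.3]                  # unknown channel taps (invented)
gamma = 0.05                     # error bound of the set-membership test
w, x_hist = [0.0, 0.0], [0.0, 0.0]
updates = 0
for n in range(2000):
    x = random.uniform(-1.0, 1.0)
    x_hist = [x, x_hist[0]]
    d = h[0]*x_hist[0] + h[1]*x_hist[1]           # desired (channel output)
    e = d - (w[0]*x_hist[0] + w[1]*x_hist[1])     # a-priori error
    if abs(e) > gamma:                            # update only when the bound is violated
        norm = x_hist[0]**2 + x_hist[1]**2 + 1e-12
        alpha = 1.0 - gamma/abs(e)                # smallest step restoring |e| <= gamma
        w = [w[i] + alpha*e*x_hist[i]/norm for i in range(2)]
        updates += 1
```

After convergence, most samples trigger no update at all, so the average per-sample cost drops well below that of plain NLMS.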

We show that in a two-channel sampling series expansion
of band-pass signals, any finitely many missing samples can
always be recovered via oversampling in a larger band-pass region.
We also obtain an analogous result for multi-channel oversampling
of harmonic signals.

Software developed for a specific customer under contract
typically undergoes a period of testing by the customer before
acceptance. This is known as user acceptance testing and the process
can reveal both defects in the system and requests for changes to
the product. This paper uses nonhomogeneous Poisson processes to
model a real user acceptance data set from a recently developed
system. In particular, a split Poisson process is shown to provide an excellent fit to the data. The paper explains how this model can be used to aid the allocation of resources through the accurate prediction of occurrences, both during the acceptance testing phase and before this activity begins.
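A nonhomogeneous Poisson process of the kind used here can be simulated by thinning; the sketch below uses a made-up exponentially decaying intensity (a common shape for defect-detection rates) purely for illustration, not the paper's fitted model.

```python
import random, math
random.seed(1)

def simulate_nhpp(rate, rate_max, horizon):
    # Thinning: propose events at constant rate_max, keep each with prob rate(t)/rate_max.
    t, events = 0.0, []
    while True:
        t += random.expovariate(rate_max)
        if t > horizon:
            return events
        if random.random() < rate(t) / rate_max:
            events.append(t)

rate = lambda t: 5.0 * math.exp(-t / 10.0)   # hypothetical defect-detection intensity
counts = [len(simulate_nhpp(rate, 5.0, 30.0)) for _ in range(200)]
mean_count = sum(counts) / len(counts)
# expected count = integral of rate over [0, 30] = 50 * (1 - e^-3), about 47.5
```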

With the extensive inclusion of documents, especially text, in business systems, data mining does not cover the full scope of Business Intelligence: it cannot extract useful details from large collections of unstructured and semi-structured written material in natural language. The most pressing issue is to draw the potential business intelligence from text. To gain competitive advantages for the business, it is necessary to develop a powerful new tool, text mining, to expand the scope of business intelligence.
In this paper, we work out the strong points of text mining in extracting business intelligence from the huge amount of textual information sources within business systems. We apply text mining to each stage of Business Intelligence systems to show that text mining is a powerful tool for expanding the scope of BI. After reviewing basic definitions and some related technologies, we discuss their relationship to text mining and the benefits they bring to it. Some examples and applications of text mining are also given. The motivation is to develop a new approach to effective and efficient textual information analysis and thereby expand the scope of Business Intelligence using text mining.

The classification of protein structure is commonly performed not for the whole protein but for structural domains, i.e., compact functional units preserved during evolution. Hence, a first step toward protein structure classification is the separation of the protein into its domains. We approach the problem of protein domain identification by proposing a novel graph-theoretical algorithm. We represent the protein structure as an undirected, unweighted and unlabeled graph whose nodes correspond to the secondary structure elements of the protein. This graph is called the protein graph. The domains are then identified as partitions of the graph corresponding to vertex sets obtained by maximizing an objective function, which mutually maximizes the cycle distributions found in the partitions of the graph. Our algorithm does not utilize any information besides the cycle distribution to find the partitions. If a partition is found, the algorithm is applied iteratively to each of the resulting subgraphs. As a stopping criterion, we numerically calculate a significance level which indicates the stability of the predicted partition against a random rewiring of the protein graph; hence, our algorithm terminates its iterative application automatically. We present results for one- and two-domain proteins, compare our results with the domains manually assigned in the SCOP database, and discuss the differences.
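One concrete quantity behind a cycle-distribution criterion is the cycle rank (first Betti number) of a graph, computable as |E| - |V| + (number of connected components); the sketch below evaluates it for a toy "protein graph" and a candidate partition. The graphs are invented, not real secondary-structure data, and the paper's objective function is richer than this single number.

```python
def cycle_rank(edges):
    # First Betti number b1 = |E| - |V| + #components of an undirected graph.
    nodes = {n for e in edges for n in e}
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    comps = len(nodes)
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return len(edges) - len(nodes) + comps

# toy protein graph: two triangles of secondary-structure elements plus a bridge
whole = [("a","b"), ("b","c"), ("c","a"), ("d","e"), ("e","f"), ("f","d"), ("c","d")]
part1 = [("a","b"), ("b","c"), ("c","a")]
part2 = [("d","e"), ("e","f"), ("f","d")]
```

Cutting the bridge edge splits the two independent cycles of the whole graph evenly between the two candidate domains.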

It has been established that microRNAs (miRNAs) play an important role in gene expression through post-transcriptional regulation of messenger RNAs (mRNAs). However, the precise relationships between miRNAs and their target genes, in terms of numbers, types and biological relevance, remain largely unclear. Dissecting the miRNA-target relationships will yield more insights for miRNA target identification and validation and therefore promote the understanding of miRNA function. In miRBase, miRanda is the key algorithm used for target prediction for zebrafish. This algorithm is high-throughput but produces many false positives (noise). Since validating a large set of targets through laboratory experiments is very time consuming, computational methods for miRNA target validation need to be developed. In this paper, we present an integrative method to investigate several aspects of the relationships between miRNAs and their targets, with the final purpose of extracting high-confidence targets from the pool of miRanda-predicted targets. This is achieved using techniques ranging from statistical tests to clustering and association rules. Our research focuses on zebrafish. We found that validated targets do not necessarily show the highest sequence matching. Moreover, for some miRNA families, the frequency of their predicted targets is significantly higher in the genomic region near their own physical location. Finally, in a case study of dre-miR-10 and dre-miR-196, we found that the predicted target genes hoxd13a, hoxd11a, hoxd10a and hoxc4a of dre-miR-10, as well as hoxa9a, hoxc8a and hoxa13a of dre-miR-196, have characteristics similar to those of validated target genes and therefore represent high-confidence target candidates.
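One statistical test that fits the enrichment claim (predicted targets concentrated near the miRNA's own locus) is the hypergeometric tail probability; the sketch below implements it with exact binomial coefficients. All counts are illustrative placeholders, not zebrafish data, and the paper's own tests may differ.

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    # P(X >= k) when n genes are drawn from N, of which K lie near the miRNA locus.
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# toy numbers: 10 of 1000 genes are near the locus; 8 of 50 predicted targets hit them
p_value = hypergeom_sf(8, 1000, 10, 50)
```

A tiny p-value would indicate that the predicted targets cluster near the miRNA locus far more often than chance allows.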

In image processing and visualization, comparing two bitmapped images requires matching them pixel by pixel, which takes considerable computational time, whereas the comparison of two vector-based images is significantly faster. Raster graphics images can often be approximately converted into vector-based images by various techniques. After conversion, the problem of comparing two raster graphics images reduces to the problem of comparing vector graphics images, and hence pixel-by-pixel comparison reduces to polynomial comparison. In computer aided geometric design (CAGD), vector graphics images are compositions of curves and surfaces, where curves are defined by a sequence of control points and their polynomials. In this paper, the control points are used to compare curves. Curves that have been translated or rotated are treated as equivalent, while curves that differ only in scale are considered similar. This paper proposes an algorithm for comparing polynomial curves for equivalence and similarity by using their control points. In addition, the geometric object-oriented database used to keep the curve information has been defined in XML format for further use in curve comparisons.
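A simple realization of the control-point comparison is to compare the lists of pairwise distances between control points, which are invariant under translation and rotation and scale uniformly under scaling. This sketch is not the paper's algorithm; it assumes both curves list distinct control points in corresponding order.

```python
def pairwise(pts):
    # all pairwise Euclidean distances between control points, in a fixed order
    return [((pts[i][0]-pts[j][0])**2 + (pts[i][1]-pts[j][1])**2) ** 0.5
            for i in range(len(pts)) for j in range(i + 1, len(pts))]

def equivalent(p, q, tol=1e-9):
    # same shape up to translation/rotation: distance lists agree
    dp, dq = pairwise(p), pairwise(q)
    return len(dp) == len(dq) and all(abs(a - b) < tol for a, b in zip(dp, dq))

def similar(p, q, tol=1e-9):
    # same shape up to a uniform scale: distance lists agree after rescaling
    dp, dq = pairwise(p), pairwise(q)
    if len(dp) != len(dq):
        return False
    s = dq[0] / dp[0]
    return all(abs(a*s - b) < tol for a, b in zip(dp, dq))

ctrl = [(0, 0), (1, 0), (2, 1), (3, 3)]                 # invented control polygon
rotated = [(-y + 1, x + 2) for x, y in ctrl]            # rotate 90 degrees, translate
scaled = [(2*x, 2*y) for x, y in ctrl]                  # uniform scale by 2
```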

In this paper, we propose a direct method for converting a Finite Impulse Response (FIR) filter with few nonzero taps into an Infinite Impulse Response (IIR) filter using a pre-determined table. The Prony method is used in ghost cancellation to approximate an FIR filter by an IIR filter, which gives better performance than a plain IIR design but at a much larger computational cost. The direct method is described for the many ghost combinations with few nonzero taps that arise in the NTSC (National Television System Committee) TV signal in Korea, and the proposed method is illustrated with an example.
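The essence of the Prony step can be shown for a second-order IIR model: past the numerator order, the impulse response obeys a linear recursion whose coefficients give the denominator, after which the numerator follows by convolution. The filter below is an invented example, not a measured ghost channel.

```python
def prony2(h):
    # Fit h ~ B(z)/A(z) with A of order 2 and B of order 1 from impulse response h.
    # h[n] = c1*h[n-1] + c2*h[n-2] for n >= 2; solve the 2x2 system from n = 2, 3.
    det = h[1]*h[1] - h[0]*h[2]
    c1 = (h[2]*h[1] - h[0]*h[3]) / det
    c2 = (h[1]*h[3] - h[2]*h[2]) / det
    a = [1.0, -c1, -c2]
    b = [h[0], h[1] + a[1]*h[0]]      # numerator by convolution with a
    return b, a

# generate the impulse response of a known IIR filter, then recover it
b_true, a_true = [1.0, 0.5], [1.0, -0.9, 0.2]
h = []
for n in range(8):
    v = b_true[n] if n < 2 else 0.0
    if n >= 1: v -= a_true[1] * h[n-1]
    if n >= 2: v -= a_true[2] * h[n-2]
    h.append(v)
b_est, a_est = prony2(h)
```

For an exactly rational impulse response the recovery is exact, which is what makes a pre-determined table of ghost combinations feasible.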

We present a Large-Eddy Simulation of a vortex cell with circular shape. The results show that the flow field can be subdivided into four important zones: the shear layer above the cavity, the stagnation zone, the vortex core in the cavity and the boundary layer along the wall of the cavity. It is shown that the vortex core consists of solid-body rotation without much turbulence activity. The vortex is mainly driven by high-energy packets that are carried into the cavity from the stagnation-point region and by entrainment of fluid from the cavity into the shear layer. The physics of the boundary layer along the cavity's wall seems to be far from that of a canonical boundary layer, which might be a crucial point for modelling this flow.

The game of Maundy Block is the three-player variant of Maundy Cake, a classical combinatorial game. Even though determining the solution of Maundy Cake is trivial, solving Maundy Block is challenging because of the identification of queer games, i.e., games where no player has a winning strategy.

In this paper, we propose a geometric modeling of the illumination on a patterned image containing etched transistors. Such an image is captured by a commercial camera during the inspection of a TFT-LCD panel. Defect inspection is an important process in the production of LCD panels, but regional differences in brightness, caused by the uneven illumination environment, have a negative effect on the inspection. To solve this problem, we present a geometric model of the illumination consisting of an interpolation using the least squares method and 3D modeling using a Bézier surface. By using a sampling method, our computational time is shorter than that of previous methods. Moreover, the model can be further used to correct the brightness in every patterned image.
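For the 3D-modeling step, a Bézier surface over a grid of brightness values can be evaluated with de Casteljau's algorithm applied first along rows and then along the resulting column. The control net below is a made-up brightness grid, standing in for values a least-squares fit would produce from sampled pixels.

```python
def bez1(ctrl, t):
    # de Casteljau evaluation of a 1-D Bezier curve of scalar control values
    c = list(ctrl)
    while len(c) > 1:
        c = [(1 - t)*a + t*b for a, b in zip(c, c[1:])]
    return c[0]

def bez_surface(net, u, v):
    # evaluate each row at u, then the resulting column at v
    return bez1([bez1(row, u) for row in net], v)

net = [[0.0, 1.0],        # hypothetical 2x2 net of fitted brightness values
       [1.0, 2.0]]
center = bez_surface(net, 0.5, 0.5)
```

Dividing a captured image by such a smooth illumination surface flattens the regional brightness differences before inspection.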

The paper presents the statement of the automatic speech recognition problem, the assignment of speech recognition and its application fields. The principles of establishing a speech recognition system and the problems arising in such a system are investigated with reference to Azerbaijani speech. The algorithms for computing speech features, the main part of a speech recognition system, are analyzed. From this point of view, algorithms for determining the Mel Frequency Cepstral Coefficients (MFCC) and Linear Predictive Coding (LPC) coefficients, which express the basic speech features, are developed. The combined use of MFCC and LPC cepstra in the speech recognition system is suggested to improve its reliability. To this end, the recognition system is divided into MFCC-based and LPC-based recognition subsystems. The training and recognition processes are realized in both subsystems separately, and the recognition system accepts a decision only when the results of the two subsystems agree; this decreases the error rate during recognition. The training and recognition processes are realized by artificial neural networks, which are trained by the conjugate gradient method. The paper investigates the problems caused by the number of speech features when training the neural networks of the MFCC- and LPC-based subsystems. The variety of results obtained from neural networks trained from different initial points is analyzed. A methodology for the combined use of neural networks trained from different initial points is suggested to improve the reliability of the recognition system and increase the recognition quality, and practical results are shown.
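The LPC side of the feature pipeline can be sketched via the Levinson-Durbin recursion, which converts autocorrelation values into prediction coefficients. The autocorrelation sequence below is derived analytically from an invented AR(2) model so the recursion's output can be checked exactly; real features would use autocorrelations estimated from windowed speech frames.

```python
def levinson_durbin(r, order):
    # Solve the Yule-Walker equations; returns prediction polynomial [1, a1, ..., ap]
    # and the final prediction error energy.
    a, e = [1.0] + [0.0]*order, r[0]
    for i in range(1, order + 1):
        k = -sum(a[j]*r[i-j] for j in range(i)) / e   # reflection coefficient
        a = [a[j] + (k*a[i-j] if 1 <= j <= i else 0.0) for j in range(order + 1)]
        e *= 1 - k*k
    return a, e

# exact autocorrelation of a hypothetical AR(2) model x[n] = 0.5 x[n-1] + 0.2 x[n-2] + w[n]
phi1, phi2 = 0.5, 0.2
r = [1.0, phi1 / (1 - phi2)]
r.append(phi1*r[1] + phi2*r[0])
a, err = levinson_durbin(r, 2)
```

The recursion recovers the model coefficients (with flipped sign, as prediction-polynomial convention dictates) from the autocorrelations alone.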

Repetitive systems are systems that perform a simple task repetitively on a fixed pattern; they are widespread in industrial fields. Hence, many researchers have been interested in such systems, especially in the field of iterative learning control (ILC). In this paper, we propose a finite-horizon tracking control scheme for linear time-varying repetitive systems with uncertain initial conditions. The scheme is derived both analytically and numerically for state-feedback systems and only numerically for output-feedback systems. It is then extended to stable systems with input constraints. All numerical schemes are developed in the form of linear matrix inequalities (LMIs). A distinguishing feature of the proposed scheme over existing iterative learning control is that it guarantees the tracking performance exactly even under uncertain initial conditions. Simulation results demonstrate the good performance of the proposed scheme.
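The flavor of iterative learning control can be seen in a minimal P-type ILC loop on a toy first-order plant (all gains and the reference are invented; the paper's LMI-based scheme is far more general): the same finite-horizon task is repeated, and the input profile is corrected trial after trial by the previous trial's tracking error.

```python
def run_trial(u, a=0.2):
    # One repetition of the plant y[t+1] = a*y[t] + u[t], starting from rest.
    y = [0.0]
    for ut in u:
        y.append(a*y[-1] + ut)
    return y

T, L = 20, 0.9                    # horizon and learning gain (hypothetical values)
ref = [1.0] * (T + 1)             # constant reference trajectory
u = [0.0] * T
for trial in range(25):
    y = run_trial(u)
    err = [ref[t+1] - y[t+1] for t in range(T)]
    u = [u[t] + L*err[t] for t in range(T)]       # P-type ILC update
final_err = max(abs(e) for e in err)
```

Because the error map contracts from trial to trial, the tracking error over the whole horizon shrinks geometrically.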

It is an important task in Korean-English machine translation to classify the gender of names correctly. When a sentence is composed of two or more clauses and only one subject is given as a proper noun, it is important to find the gender of the proper noun for correct translation of the sentence, because a singular pronoun has a gender in English while it does not in Korean. Thus, in Korean-English machine translation, the gender of a proper noun must be determined. More generally, this task can be expanded into the classification of general Korean names. This paper proposes a statistical method for this problem. By considering a name as just a sequence of syllables, it is possible to obtain statistics for each name from a collection of names. An evaluation of the proposed method shows an improvement in accuracy over simple lookup in the collection: while the accuracy of the lookup method is 64.11%, that of the proposed method is 81.49%. This implies that the proposed method is better suited to the gender classification of Korean names.
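The syllable-statistics idea can be sketched as a naive Bayes classifier over syllable counts; the romanized names below are an invented toy corpus, not the paper's data, and the paper's exact statistical model may differ.

```python
from collections import Counter
from math import log

train = [("min-ji", "F"), ("eun-ji", "F"), ("su-mi", "F"),
         ("jun-ho", "M"), ("min-ho", "M"), ("sung-chul", "M")]   # toy corpus

counts = {"F": Counter(), "M": Counter()}
totals = Counter(g for _, g in train)
for name, g in train:
    counts[g].update(name.split("-"))
vocab = {s for name, _ in train for s in name.split("-")}

def classify(name):
    def score(g):
        n = sum(counts[g].values())
        s = log(totals[g] / len(train))                           # class prior
        for syl in name.split("-"):
            s += log((counts[g][syl] + 1) / (n + len(vocab)))     # Laplace smoothing
        return s
    return max("FM", key=score)
```

Unseen names are classified by the gender statistics of their individual syllables, which is exactly what plain lookup of whole names cannot do.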

This paper describes a prototype aircraft that can fly slowly and safely and transmit wireless video for tasks like reconnaissance, surveillance and target acquisition. The aircraft is designed to fly in close quarters like forests, buildings, caves and tunnels, which are often spacious but where GPS reception is poor. The vision is that a small, safe and slow-flying vehicle can assist in performing dull, dangerous and dirty tasks like disaster mitigation, search-and-rescue and structural damage assessment.

This paper describes a system in which various methods of text summarization can be adapted to Polish. The structure of the system is presented; its modular construction and access to the system via the Internet are outlined.

In the paper a method of text modeling for Polish is discussed. The method is aimed at transforming continuous input text into a text consisting of sentences in so-called canonical form, characterized by, among other things, a complete structure and the absence of anaphora and ellipses. The transformation is lossless with respect to the content of the text being transformed. The modeling method has been worked out for the needs of the Thetos system, which translates written Polish texts into Polish sign language. We believe that the method can also be used in various applications that deal with natural language, e.g. in a text summary generator for Polish.

Bagging and boosting are among the most popular resampling ensemble methods that generate and combine a diversity of classifiers using the same learning algorithm for the base classifiers. Boosting algorithms are considered stronger than bagging on noise-free data; however, there are strong empirical indications that bagging is much more robust than boosting in noisy settings. For this reason, in this work we built an ensemble using a voting methodology over bagging and boosting ensembles with 10 sub-classifiers each. We performed a comparison with simple bagging and boosting ensembles with 25 sub-classifiers, as well as with other well-known combining methods, on standard benchmark datasets, and the proposed technique was the most accurate.
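The voting step that combines the bagging and boosting sub-ensembles reduces to a per-instance majority vote over predicted labels, as in this toy sketch (the predictions are fabricated to show how diverse errors cancel; real sub-ensembles would be trained models).

```python
from collections import Counter

def majority_vote(all_preds):
    # Combine per-classifier label lists by per-instance majority vote.
    return [Counter(p[i] for p in all_preds).most_common(1)[0][0]
            for i in range(len(all_preds[0]))]

truth = [1, 1, 1, 1, 0]
preds = [[1, 1, 0, 1, 0],   # e.g. predictions of a bagging sub-ensemble
         [1, 0, 1, 1, 0],   # e.g. predictions of a boosting sub-ensemble
         [0, 1, 1, 1, 0]]   # another sub-ensemble
combined = majority_vote(preds)
```

Each individual classifier errs on a different instance (80% accuracy), yet the vote is perfect on this toy set, which is the intuition behind combining diverse ensembles.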

This paper discusses an artificial mind model and its applications. The mind model is based on theories which assert that emotion is an important function in human decision making. An artificial mind model with emotion is built, and the model is applied to action selection of autonomous agents. In three examples, the agents interact with humans and their environments. The examples show that the proposed model works effectively in both virtual agents and real robots.

A genetic algorithm (GA) based feature subset selection algorithm is proposed in which the correlation structure of the features is exploited. The subset of features is validated according to the classification performance. Features derived from the continuous wavelet transform are potentially strongly correlated, and GAs that do not take the correlation structure of the features into account are inefficient. The proposed algorithm forms clusters of correlated features and searches for a good candidate set of clusters; a search within the clusters is then performed. Simulations of the algorithm on a real-world data set with strong correlations between features show the increased classification performance. A comparison is made with a standard GA that does not use the correlation structure.
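A stripped-down version of the cluster-level search is sketched below: a GA evolves a bitmask over feature clusters, with a synthetic fitness function standing in for the cross-validated classification accuracy the paper would use (the set of "relevant" clusters is invented purely so the sketch can be checked).

```python
import random
random.seed(2)

CLUSTERS = 8
RELEVANT = {1, 4, 6}     # stand-in "useful" clusters; real fitness = CV accuracy

def fitness(mask):
    chosen = {i for i, bit in enumerate(mask) if bit}
    # reward covering the relevant clusters, penalize superfluous ones
    return len(chosen & RELEVANT) - 0.3 * len(chosen - RELEVANT)

pop = [[random.randint(0, 1) for _ in range(CLUSTERS)] for _ in range(20)]
for gen in range(40):
    pop.sort(key=fitness, reverse=True)
    parents, children = pop[:10], []          # elitist selection
    while len(children) < 10:
        p, q = random.sample(parents, 2)
        cut = random.randrange(1, CLUSTERS)
        child = p[:cut] + q[cut:]             # one-point crossover
        if random.random() < 0.2:
            child[random.randrange(CLUSTERS)] ^= 1   # bit-flip mutation
        children.append(child)
    pop = parents + children
best = max(pop, key=fitness)
```

Searching over a handful of clusters instead of hundreds of raw correlated features is where the claimed efficiency gain comes from; a second, within-cluster GA would then pick individual features.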

PARIS (Personal Archiving and Retrieving Image System) is an experimental personal photograph library which includes more than 80,000 consumer photographs accumulated over approximately five years, metadata based on our proposed MPEG-7 annotation architecture, Dozen Dimensional Digital Content (DDDC), and a relational database structure. The DDDC architecture is specially designed to facilitate the managing, browsing and retrieving of personal digital photograph collections. In the annotation process, we also utilize a proposed Spatial and Temporal Ontology (STO) designed on the basis of the general characteristics of personal photograph collections. This paper explains the PARIS system.

This paper provides a flexible way of controlling the Variable Bit Rate (VBR) of compressed digital video, applicable to the new H.264 video compression standard. The entire video sequence is assessed in advance and the quantisation level is then set such that the bit rate (and thus the frame rate) remains within predetermined limits compatible with the bandwidth of the transmission system and the capabilities of the remote end, while at the same time providing constant quality similar to VBR encoding. A process for avoiding buffer starvation by selectively eliminating frames from the encoded output at times when the frame rate is slow (large number of bits per frame) is also described. Finally, the problem of buffer overflow is solved by selectively eliminating frames from the input received by the decoder. The decoder detects the omission of frames and resynchronizes the transmission by monitoring time stamps and repeating frames if necessary.
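The decoder-side overflow handling can be pictured with a toy buffer model: each frame interval the channel drains a fixed number of bits, and any frame that would overflow the buffer is eliminated. All sizes below are invented, and a real decoder would also apply the time-stamp resynchronization described above.

```python
def drop_overflowing(frame_bits, drain_per_frame, buf_max):
    # Count frames eliminated because they would overflow the decoder buffer.
    buf, dropped = 0, 0
    for bits in frame_bits:
        buf = max(0, buf - drain_per_frame)   # channel drains one frame interval
        if buf + bits > buf_max:
            dropped += 1                      # eliminate this frame
        else:
            buf += bits
    return dropped
```

A steady stream within capacity loses nothing, while a burst of oversized frames forces selective elimination.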

An important structuring mechanism for knowledge bases is building clusters based on the content of their knowledge objects. The objects are clustered on the principle of maximizing intraclass similarity and minimizing interclass similarity. Clustering can also facilitate taxonomy formation, that is, the organization of observations into a hierarchy of classes that group similar events together. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rules (HPRs) system. An HPR, a standard production rule augmented with generality and specificity information, is of the form: Decision If <condition>, together with Generality and Specificity information. HPRs systems are capable of handling the taxonomical structures inherent in knowledge about the real world. In this paper, a set of related HPRs is called a cluster and is represented by an HPR-tree. The paper discusses an algorithm based on a cumulative learning scenario for the dynamic structuring of clusters. The proposed scheme incrementally incorporates new knowledge into the set of clusters from previous episodes and also maintains a summary of the clusters, a Synopsis, to be used in future episodes. Examples are given to demonstrate the behaviour of the proposed scheme. The suggested incremental structuring of clusters would be useful in mining data streams.

Research in quantum computation investigates the consequences of having information encoding, processing and communication exploit the laws of quantum physics, i.e. the laws which govern our most fundamental current understanding of the world of elementary particles, as described by quantum mechanics. This paper starts with a short survey of the principles which underlie quantum computing and of some of the major breakthroughs brought by the first ten to fifteen years of research in this domain; quantum algorithms and quantum teleportation are very briefly presented. The next sections are devoted to one among the many directions of current research in the quantum computation paradigm, namely quantum programming languages and their semantics. A few other hot topics and open problems in quantum information processing and communication are mentioned briefly in the concluding remarks, the most difficult of them being the physical implementation of a quantum computer. The interested reader will find a list of useful references at the end of the paper.

When architecting an application, key nonfunctional requirements such as performance, scalability, availability and security, which influence the architecture of the system, are sometimes not adequately addressed. The performance of the application may not be considered until there is a concern. There are several problems with this reactive approach: if the system does not meet its performance objectives, the application is unlikely to be accepted by the stakeholders. This paper suggests an approach to performance modeling for web-based J2EE and .NET applications that addresses performance issues early in the development life cycle. It also includes a performance modeling case study, with a Proof-of-Concept (PoC) and implementation details for the .NET and J2EE platforms.

Breast cancer detection techniques have been reported
to aid radiologists in analyzing mammograms. We note that most
techniques are performed on uncompressed digital mammograms.
Mammogram images are huge in size necessitating the use of
compression to reduce storage/transmission requirements. In this
paper, we present an algorithm for the detection of
microcalcifications in the JPEG2000 domain. The algorithm is based
on the statistical properties of the wavelet transform that the
JPEG2000 coder employs. Simulations were carried out at different compression ratios. The sensitivity of the algorithm ranges from 92% with a false positive rate of 4.7 under lossless compression down to 66% with a false positive rate of 2.1 under lossy compression at a ratio of 100:1.

Data mining aims at discovering knowledge from data and presenting it in a form that is easily comprehensible to humans. One useful application in Egypt is cancer management, especially the management of Acute Lymphoblastic Leukemia (ALL), the most common type of cancer in children.
This paper discusses the process of designing a prototype that can help in the management of childhood ALL, which has great significance in the health care field. Besides its social impact on decreasing the rate of the disease in children in Egypt, it also provides valuable information about the distribution and segmentation of ALL in Egypt, which may be linked to possible risk factors.
Undirected knowledge discovery is used since, in this research project, there is no target field and the data provided are mainly subjective; the aim is to quantify these subjective variables. The computer is therefore asked to identify significant patterns in the provided medical data about ALL. This is achieved by collecting the data necessary for the system, determining the data mining technique to be used, and choosing the most suitable implementation tool for the domain.
The research makes use of a data mining tool, Clementine, to apply the decision tree technique. We feed it with data extracted from real-life cases taken from specialized cancer institutes. Relevant details of the medical cases, such as patient medical history and diagnosis, are analyzed, classified, and clustered in order to improve disease management.

Blind signatures enable users to obtain valid signatures for a message without revealing its content to the signer. This paper presents a new blind signature scheme: an identity-based blind signature scheme with message recovery. Owing to the message recovery property, the new scheme requires less bandwidth than identity-based blind signatures with similar constructions. The scheme is based on modified Weil/Tate pairings over elliptic curves and thus requires smaller key sizes for the same level of security than previous approaches not utilizing bilinear pairings. A security and efficiency analysis of the scheme is provided.

P2P networks are highly dynamic structures, since their nodes (peer users) keep joining and leaving continuously. In this paper, we study the effects of network change rates on query routing efficiency. First we describe some background and an abstract system model. The chosen routing technique makes use of cached metadata from previous answer messages and also employs a mechanism for broken path detection and metadata maintenance. Several metrics are used to show that the protocol behaves quite well even with a high rate of node departures, but that above a certain threshold it literally breaks down and exhibits considerable efficiency degradation.

This work describes the information technologies created and successfully maintained at the Institute of Information Technologies of the National Academy of Sciences (NAS) of Azerbaijan. By decision of the board of the Supreme Certifying Commission under the President of the Azerbaijan Republic and of the Presidium of the National Academy of Sciences of the Azerbaijan Republic, the Institute was entrusted with organizing training courses in computer science for all post-graduate students and dissertators of the republic and with administering the examinations for the candidate minimum. Accordingly, the Educational Center teaches computer science to post-graduate students and dissertators, provides a scientific-methodological manual on the effective application of new information technologies in their research work, and administers the candidate minimum examinations.
Information and communication technologies offer new opportunities and prospects for teaching and training. The new level of literacy demands the creation of essentially new technology for obtaining scientific knowledge. Methods of training and development, social and professional requirements, and the globalization of the communicative, economic and political projects connected with the construction of a new society all depend on the level of application of information and communication technologies in the educational process. Computer technologies develop the ideas of programmed training and open completely new, as yet uninvestigated technological ways of training connected to the unique opportunities of modern computers and telecommunications. Computer technologies of training are processes of preparation and transfer of information to the trainee by means of the computer. Scientific and technical progress, as well as the global spread of the technologies created in the most developed countries of the world, is the main proof of the leading role of education in the 21st century. The information society needs individuals with modern knowledge. In practice, all technologies using special technical information means (computer, audio, video) are called information technologies of education.

The paper discusses the complexity of component-based development (CBD) of embedded systems. Although CBD has its merits, it must be augmented with methods to control the complexities that arise from resource constraints, timeliness, and run-time deployment of components in embedded system development. Software component specification, system-level testing, and run-time reliability measurement are some ways to control this complexity.

Much research into handwritten Thai character recognition has been proposed, using approaches such as comparing heads of characters, fuzzy logic and structure trees. This paper presents a system for handwritten Thai character recognition based on the Ant-miner algorithm (data mining based on ant colony optimization). Zoning is initially used to partition each character. Then three distinct features (also called attributes) of each character in each zone are extracted. The attributes are head zone, end point, and feature code. All attributes are used to construct the classification rules with the Ant-miner algorithm in order to classify 112 Thai characters. For this experiment, the Ant-miner algorithm is adapted with a small change to increase the recognition rate. The result of this experiment is a 97% recognition rate on the training set (11,200 characters) and an 82.7% recognition rate on unseen test data (22,400 characters).
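The zoning step can be illustrated by splitting a binary character bitmap into a 3x3 grid and computing the ink density of each zone (the bitmap below is a made-up toy glyph; the paper's actual attributes of head zone, end point and feature code are richer than plain densities).

```python
def zoning(bitmap, zones=3):
    # Per-zone ink density of a binary bitmap, scanned zone row by zone row.
    h, w = len(bitmap), len(bitmap[0])
    feats = []
    for zr in range(zones):
        for zc in range(zones):
            r0, r1 = zr*h//zones, (zr+1)*h//zones
            c0, c1 = zc*w//zones, (zc+1)*w//zones
            cells = [(r, c) for r in range(r0, r1) for c in range(c0, c1)]
            ink = sum(bitmap[r][c] for r, c in cells)
            feats.append(ink / len(cells))
    return feats

glyph = [[1, 1, 0, 0, 0, 0],    # hypothetical 6x6 character image
         [1, 1, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0]]
feats = zoning(glyph)
```

Each character thus becomes a short fixed-length vector to which rule induction such as Ant-miner can be applied.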

This paper proposes a new method for image searching and image indexing in databases based on a color temperature histogram. The color temperature histogram can be used to improve the performance of content-based image retrieval by combining color temperature with a histogram. The color temperature histogram is represented by a range of 46 colors, which is more than the color histogram and the dominant color temperature use. Moreover, with our method, colors that have the same color temperature can be separated, while with the dominant color temperature they cannot. The results showed that the color temperature histogram retrieved an accurate image more often than the dominant color temperature method or the color histogram method, and also took less time, so the color temperature can be used for indexing and searching for images.
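One way to realize such a histogram is to map each pixel to a correlated color temperature (linear RGB to CIE XYZ to chromaticity, then McCamy's approximation) and bin the results. The bin edges below are arbitrary, gamma correction is deliberately omitted, and the paper's 46-color binning is not reproduced here.

```python
def cct_of_rgb(r, g, b):
    # linear RGB -> CIE XYZ (sRGB primaries, D65) -> chromaticity -> McCamy CCT.
    # Assumes a non-black pixel so the chromaticity is defined.
    X = 0.4124*r + 0.3576*g + 0.1805*b
    Y = 0.2126*r + 0.7152*g + 0.0722*b
    Z = 0.0193*r + 0.1192*g + 0.9505*b
    s = X + Y + Z
    x, y = X/s, Y/s
    n = (x - 0.3320) / (0.1858 - y)
    return 449*n**3 + 3525*n**2 + 6823.3*n + 5520.33

def cct_histogram(pixels, edges):
    # Bin each pixel's color temperature into [edges[i], edges[i+1]) ranges.
    hist = [0] * (len(edges) - 1)
    for px in pixels:
        t = cct_of_rgb(*px)
        for i in range(len(hist)):
            if edges[i] <= t < edges[i+1]:
                hist[i] += 1
                break
    return hist
```

A neutral white pixel lands near 6500 K, matching the D65 white point, which is a handy sanity check for the conversion chain.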

In this paper, we introduce the GODYS-PC software package for modeling, simulating and analyzing dynamic systems. To illustrate the use of GODYS-PC we present a few examples which concern the modeling and simulation of engineering systems. In order to compare GODYS-PC with Simulink®, which is widely used in academia and industry, the same examples are provided both in GODYS-PC and in Simulink®.

Increasing detection rates and reducing false positive rates are
important problems in Intrusion Detection Systems (IDS).
Although preventative techniques such as access control and
authentication attempt to keep intruders out, these can fail, so
intrusion detection has been introduced as a second line of defence.
Rare events are events that occur very infrequently, and detecting
them is a common problem in many domains. In this paper we
propose an intrusion detection method that combines rough set theory
and fuzzy clustering. Rough set theory is used to reduce the amount of
data and eliminate redundancy. Fuzzy c-means clustering allows objects
to belong to several clusters simultaneously, with different degrees of
membership. Our approach allows us not only to recognize known
attacks but also to detect suspicious activity that may be the result of
a new, unknown attack. The experimental results on Knowledge
Discovery and Data Mining-(KDDCup 1999) Dataset show that the
method is efficient and practical for intrusion detection systems.
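The fuzzy c-means step described above can be sketched as follows. This is a minimal, pure-Python illustration on one-dimensional toy data; the deterministic initialization and the toy values are our own assumptions, not the paper's setup:

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """Fuzzy c-means on 1-D data: each point belongs to every
    cluster simultaneously, with a membership degree in [0, 1]."""
    pts = sorted(points)
    # Deterministic initialization: spread the centers over the range.
    centers = [pts[i * (len(pts) - 1) // (c - 1)] for i in range(c)]
    u = []
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in points:
            d = [abs(x - v) + 1e-9 for v in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1))
                                for j in range(c)) for i in range(c)])
        # Center update: mean of all points weighted by u_ik^m.
        centers = [sum(u[k][i] ** m * x for k, x in enumerate(points)) /
                   sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(c)]
    return centers, u

centers, u = fuzzy_c_means([1.0, 1.2, 0.9, 8.0, 8.3, 7.9])
```

Each row of `u` sums to one, so a borderline observation (for example, activity on the boundary between normal and suspicious) receives split memberships instead of a hard label.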

In this paper, a genetic algorithm (GA) is proposed for
the design of an optimization algorithm for bandwidth
allocation in ATM networks. In Broadband ISDN, ATM is a
high-bandwidth, fast packet switching and multiplexing technique.
ATM makes it possible to flexibly reconfigure the network and reassign
bandwidth to meet the requirements of all types of services. By
dynamically routing the traffic and adjusting the bandwidth
assignment, the average packet delay of the whole network can be
reduced to a minimum. An M/M/1 model can be used to analyze the
performance.
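The M/M/1 delay that such a GA would minimise is easy to state in code; the link parameters below are illustrative:

```python
def mm1_delay(arrival_rate, service_rate):
    """Average packet delay (waiting + service) of an M/M/1 queue:
    T = 1 / (mu - lambda), valid only for lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: lambda must be < mu")
    return 1.0 / (service_rate - arrival_rate)

def network_average_delay(links):
    """Flow-weighted average delay over a set of (lambda, mu) links,
    the quantity a bandwidth-allocation GA would try to minimise."""
    total_flow = sum(lam for lam, _ in links)
    return sum(lam * mm1_delay(lam, mu) for lam, mu in links) / total_flow

# Two illustrative links: one lightly and one heavily loaded.
avg = network_average_delay([(2.0, 10.0), (8.0, 10.0)])
```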

In this paper, our concern is the management of mobile transactions in the area shared among many servers, when the mobile user moves from one cell to another in an online, partially replicated distributed mobile database environment. We define the concept of a transaction and classify the different types of transactions. Based on this analysis, we propose an algorithm that handles the disconnections caused by moving among sites.

A new digital transceiver circuit for asynchronous frame detection is proposed where both the transmitter and receiver contain all digital components, thereby avoiding possible use of conventional devices like monostable multivibrators with unstable external components such as resistors and capacitors. The proposed receiver circuit, in particular, uses a combinational logic block yielding an output which changes its state as soon as the start bit of a new frame is detected. This, in turn, helps in generating an efficient receiver sampling clock. A data latching circuit is also used in the receiver to latch the recovered data bits in any new frame. The proposed receiver structure is also extended from 4-bit information to any general n data bits within a frame with a common expression for the output of the combinational logic block. Performance of the proposed hardware design is evaluated in terms of time delay, reliability and robustness in comparison with the standard schemes using monostable multivibrators. It is observed from hardware implementation that the proposed circuit achieves a speed-up of almost 33 percent over conventional circuits.

Self-efficacy, self-reliance, and motivation were
examined in a quasi-experimental study with 178 sophomore
university students. Participants used an interactive cardiovascular
anatomy and physiology CD-ROM, and completed a 15-item
questionnaire. Reliability of the questionnaire was established using
Cronbach's alpha. Post-tests and course grades were examined using
a t-test, which showed no significant differences. Results of an item-by-item
analysis of the questionnaire showed overall satisfaction with the
teaching methodology and varied results for self-efficacy, self-reliance,
and motivation. Kendall's tau was calculated for all items
in the questionnaire.

In this presentation, we discuss the use of information technologies in the area of special education for teaching individuals with learning disabilities. Application software which was developed for this purpose is used to demonstrate the applicability of a database integrated information processing system to alleviate the burden of educators. The software allows the preparation of individualized education programs based on the predefined objectives, goals and behaviors.

Assessment of IEP (Individual Education Plan) is an
important stage in the area of special education. This paper deals
with this problem by introducing computer software that processes
the data gathered from the application of an IEP. The software is intended
to be used by special education institutions in Turkey and allows
assessment of school and family trainings. The software has a user
friendly interface and its design includes graphical developer tools.

Process measurement is the task of empirically and objectively assigning numbers to the properties of business processes in such a way as to describe them. Desirable attributes to study and measure include complexity, cost, maintainability, and reliability. In our work we focus on investigating process complexity. We define process complexity as the degree to which a business process is difficult to analyze, understand or explain. One way to analyze a process's complexity is to use a process control-flow complexity measure. In this paper, an attempt has been made to evaluate the control-flow complexity measure in terms of Weyuker's properties, which must be satisfied by any complexity measure to qualify as a good and comprehensive one.
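As one concrete instance of such a measure, a Cardoso-style control-flow complexity (CFC) weights each split connector by the number of control states it can introduce. The sketch below is our own simplified reading of that idea, on an illustrative process:

```python
def control_flow_complexity(splits):
    """Cardoso-style control-flow complexity: sum, over the split
    connectors of a process, the number of states each split can
    induce. An XOR-split of fan-out n contributes n, an OR-split
    contributes 2**n - 1, an AND-split contributes 1."""
    weight = {
        "xor": lambda n: n,
        "or": lambda n: 2 ** n - 1,
        "and": lambda n: 1,
    }
    return sum(weight[kind](fan_out) for kind, fan_out in splits)

# A toy process with one XOR-split (3 branches), one OR-split
# (2 branches) and one AND-split (parallel fork of 2 branches):
cfc = control_flow_complexity([("xor", 3), ("or", 2), ("and", 2)])
# 3 + (2**2 - 1) + 1
```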

A suitable e-learning management system needs to
implement a web information system in order to allow integrated
use of data and metadata concerning the activities typical of an
e-learning environment. The definition of a "web information system"
for e-learning takes advantage of the potential of Web
technologies, both for access to the metadata present on the several
platforms and for the implementation of the courseware which makes
up the relative didactic environment. What information systems have
in common is the technological environment on which they are
generally implemented and the use of metadata in order to structure
information at all cognitive and organizational levels. In this work we
define a methodology for the implementation of a
specific web information system for an e-learning environment.

The various types of frequent pattern discovery
problems, namely the frequent itemset, sequence and graph mining
problems, are solved in different ways which are, however, in certain
aspects similar. The main approaches to discovering such patterns can
be classified into two main classes: level-wise
methods and database projection-based methods.
The level-wise algorithms generally use clever indexing structures
for discovering the patterns. In this paper a new approach is proposed
for discovering frequent sequences and tree-like patterns efficiently
that is based on the level-wise idea. Because level-wise
algorithms spend a lot of time on the subpattern testing problem, the
new approach introduces the idea of using automaton theory to solve
this problem.
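The level-wise idea is shown below for the simplest case, frequent itemsets: level k+1 candidates are generated only from frequent level-k patterns. This is a sketch of the classical Apriori scheme (without the subset-pruning step) on toy transactions:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise (Apriori) frequent itemset mining: candidates at
    level k+1 are built only from the frequent level-k itemsets."""
    items = sorted({i for t in transactions for i in t})
    level = [frozenset([i]) for i in items]
    frequent = {}
    while level:
        # Support counting: the costly subpattern-testing step.
        counts = {cand: sum(1 for t in transactions if cand <= t)
                  for cand in level}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # Join step: merge pairs of survivors into (k+1)-candidates.
        keys = sorted(survivors, key=sorted)
        level = sorted({a | b for a, b in combinations(keys, 2)
                        if len(a | b) == len(a) + 1}, key=sorted)
    return frequent

txs = [frozenset("abc"), frozenset("abd"), frozenset("ab"), frozenset("cd")]
freq = apriori(txs, min_support=2)
```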

The most important subtype of non-Hodgkin's
lymphoma is Diffuse Large B-Cell Lymphoma. Approximately
40% of the patients suffering from it respond well to therapy,
whereas the remainder need more aggressive treatment in order to
improve their chances of survival. Data Mining techniques have helped
to identify the class of the lymphoma in an efficient manner. Despite
that, thousands of genes must be processed to obtain the results.
This paper presents a comparison of the use of various attribute
selection methods aiming to reduce the number of genes to be
searched, looking for a more effective procedure as a whole.

Due to their high power-to-weight ratio and low cost,
pneumatic actuators are attractive for robotics and automation
applications; however, achieving fast and accurate position control
has long been known to be a complex control problem. A
methodology for obtaining high position accuracy with a linear
pneumatic actuator is presented. During experimentation with a
number of PID classical control approaches over many operations of
the pneumatic system, the need for frequent manual re-tuning of the
controller could not be eliminated. The reason for this problem is
thermal and energy losses inside the cylinder body due to the
complex friction forces developed by the piston displacements.
Although PD controllers performed very well over short periods, it
was necessary in our research project to introduce some form of
automatic gain-scheduling to achieve good long-term performance.
We chose a fuzzy logic system to do this, which proved to be an
easily designed and robust approach. Since the PD approach showed
very good behaviour in terms of position accuracy and settling time,
it was incorporated into a modified form of the first-order Takagi-Sugeno
fuzzy method to build an overall controller. This fuzzy gain-scheduler
uses an input variable which automatically changes the PD
gain values of the controller according to the frequency of repeated
system operations. Performance of the new controller was
significantly improved and the need for manual re-tuning was
eliminated without a decrease in performance. The performance of
the controller operating with the above method is going to be tested
through a high-speed web network (GRID) for research purposes.
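A gain scheduler of the kind described can be sketched as below. For brevity the rule consequents are constants (zero-order Takagi-Sugeno rather than first-order), and the membership functions and gain values are illustrative, not the paper's tuned parameters:

```python
def ts_pd_gains(freq, rules):
    """Takagi-Sugeno style gain scheduler (simplified sketch):
    each rule maps a membership over the operating frequency to a
    pair of PD gains; the output is the membership-weighted
    average of those gains."""
    def tri(x, a, b, c):
        # Triangular membership function with corners (a, b, c).
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    w = [tri(freq, *r["mf"]) for r in rules]
    total = sum(w) or 1.0
    kp = sum(wi * r["kp"] for wi, r in zip(w, rules)) / total
    kd = sum(wi * r["kd"] for wi, r in zip(w, rules)) / total
    return kp, kd

rules = [  # illustrative rule base, not the paper's tuned values
    {"mf": (0.0, 0.0, 5.0), "kp": 4.0, "kd": 0.5},   # low repetition rate
    {"mf": (0.0, 5.0, 10.0), "kp": 2.5, "kd": 0.8},  # medium
    {"mf": (5.0, 10.0, 10.0), "kp": 1.5, "kd": 1.2}, # high
]
kp, kd = ts_pd_gains(2.5, rules)
```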

This paper deals with automatic sentence modality
recognition in French. In this work, only prosodic features are
considered. The sentences are recognized according to the three
following modalities: declarative, interrogative and exclamatory
sentences. This information will be used to animate a talking head for
deaf and hearing-impaired children. We first statistically study a real
radio corpus in order to assess the feasibility of the automatic
modeling of sentence types. Then, we test two sets of prosodic
features as well as two different classifiers and their combination. We
further focus our attention on question recognition, as this modality
is certainly the most important one for the target application.

This paper presents a novel genetic algorithm, termed
the Optimum Individual Monogenetic Algorithm (OIMGA) and
describes its hardware implementation. As the monogenetic strategy
retains only the optimum individual, the memory requirement is
dramatically reduced and no crossover circuitry is needed, thereby
ensuring the requisite silicon area is kept to a minimum.
Consequently, depending on application requirements, OIMGA
allows the investigation of solutions that warrant either larger GA
populations or individuals of greater length. The results given in this
paper demonstrate that both the performance of OIMGA and its
convergence time are superior to those of existing hardware GA
implementations. Local convergence is achieved in OIMGA by
retaining elite individuals, while population diversity is ensured by
continually searching for the best individuals in fresh regions of the
search space.
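In software, the monogenetic strategy reduces to keeping a single elite individual, mutating it, and jumping to a fresh random region when progress stalls. The sketch below shows the idea on the one-max problem; the mutation rate and restart threshold are illustrative, and no claim is made to match the hardware design:

```python
import random

def oimga_sketch(fitness, length=16, gens=300, mut_rate=0.1, seed=1):
    """Monogenetic search in the spirit of OIMGA: retain only the
    optimum individual, mutate it (no crossover), and restart in a
    fresh random region of the search space when progress stalls."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(length)]
    best_fit, stall = fitness(best), 0
    for _ in range(gens):
        if stall > 20:  # diversity: search a fresh region
            cand = [rng.randint(0, 1) for _ in range(length)]
            stall = 0
        else:           # exploitation: mutate the elite individual
            cand = [b ^ (rng.random() < mut_rate) for b in best]
        f = fitness(cand)
        if f > best_fit:
            best, best_fit, stall = cand, f, 0
        else:
            stall += 1
    return best, best_fit

# One-max benchmark: fitness is simply the number of 1-bits.
best, fit = oimga_sketch(sum)
```

Because only the single best individual is ever stored, the memory footprint is one chromosome, which mirrors the abstract's point about reduced memory and the absence of crossover circuitry.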

This paper presents the visual control flow support of Visual Modeling and Transformation System (VMTS), which facilitates composing complex model transformations out of simple transformation steps and executing them. The VMTS Visual Control Flow Language (VCFL) uses stereotyped activity diagrams to specify control flow structures and OCL constraints to choose between different control flow branches. This work discusses the termination properties of VCFL and provides an algorithm to support the termination analysis of VCFL transformations.

We propose a simple watermarking method
based on visual cryptography. The method is based on the selection of
specific pixels from the original image instead of the random selection of
pixels used in Hwang's scheme [1]. Verification information is generated
which will be used to verify the ownership of the image without the
need to embed the watermark pattern into the original digital data.
Experimental results show the proposed method can recover the
watermark pattern from the marked data even if some changes are
made to the original digital data.

We present a novel scheme to recognize isolated speech
signals using certain statistical parameters derived from those signals.
The determination of the statistical estimates is based on extracted
signal information rather than the original signal information in
order to reduce the computational complexity. Subtle details of
these estimates, after extracting the speech signal from ambience
noise, are first exploited to segregate the polysyllabic words from
the monosyllabic ones. Precise recognition of each distinct word is
then carried out by analyzing the histogram obtained from this
information.

Recently, much research has been conducted on
security for wireless sensor networks and ubiquitous computing.
Security issues such as authentication and data integrity are major
requirements for constructing sensor network systems. The Advanced
Encryption Standard (AES) is considered one of the candidate
algorithms for data encryption in wireless sensor networks. In this
paper, we present a hardware architecture for implementing a low
power AES crypto module. Our low power AES crypto module has
an optimized architecture for the data encryption unit and key schedule
unit which is applicable to wireless sensor networks. We also detail the
low power design methods used to design our low power AES crypto
module.

EPC Class-1 Generation-2 UHF tags, one type of Radio
Frequency Identification (RFID) tag, are expected to be adopted by
most companies in the supply chain in the short term
and in consumer packaging in the long term due to their low
cost. Because of this very low cost, however, their resources are
extremely scarce and it is hard to include any substantial security
algorithms in them. This causes security vulnerabilities, in particular
the cloning of tags for counterfeits. In this paper, we propose a product
authentication solution for anti-counterfeiting at the application level
in the supply chain and mobile RFID environment. It aims to detect the
distribution of spurious products with fake RFID tags and to provide a
product authentication service to general consumers with mobile
RFID devices such as mobile phones or PDAs that have a mobile RFID
reader. We discuss the anti-counterfeiting mechanisms
required by our proposed solution and address the requirements that
these mechanisms should satisfy.

In this paper, we design an integrated security system
that provides an authentication service, an authorization service, and
a management service for security data, together with a unified
interface for the management service. The interface originates from
the XKMS protocol and is used to manage security data such as
XACML policies, SAML assertions and other authentication security
data, including public keys. The system includes security services
such as authentication, authorization and delegation of authentication,
employing SAML and XACML based on security data such as
authentication data, attribute information, assertions and policies
managed through the interface. It also has a SAML producer that
issues assertions related to the results of the authentication and
authorization services.

In this paper we describe an authentication scheme for DHCP
(Dynamic Host Configuration Protocol) messages which provides
efficient key management and reduces the danger of replay attacks
without requiring an additional packet for replay protection. The
scheme supports mutual authentication and provides both
entity authentication and message authentication. We applied the
scheme to home network environments
and tested it through a home gateway.
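The combination of message authentication and replay protection can be illustrated with a counter-plus-HMAC construction in the spirit of RFC 3118; the field layout below is hypothetical and does not reproduce the paper's or the RFC's exact format:

```python
import hashlib
import hmac

def sign(shared_key, counter, payload):
    """Attach a monotonically increasing counter and an HMAC so the
    receiver can check both message integrity and freshness."""
    mac = hmac.new(shared_key, counter.to_bytes(8, "big") + payload,
                   hashlib.sha256).digest()
    return counter, payload, mac

def verify(shared_key, last_counter, message):
    """Reject stale counters (replay) and bad MACs (forgery)."""
    counter, payload, mac = message
    if counter <= last_counter:          # replay protection
        return False, last_counter
    expected = hmac.new(shared_key, counter.to_bytes(8, "big") + payload,
                        hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected), counter

key = b"home-gateway-shared-key"        # hypothetical shared secret
msg = sign(key, 1, b"DHCPDISCOVER")
ok, last = verify(key, 0, msg)          # fresh message: accepted
replayed, _ = verify(key, last, msg)    # same counter again: rejected
```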

We introduce a new interactive 3D simulator of ocular motion and expressions suitable for: (1) character animation applications to game design, film production, HCI (Human Computer Interface), conversational animated agents, and virtual reality; (2) medical applications (ophthalmic neurological and muscular pathologies: research and education); and (3) real-time simulation of unconscious cognitive and emotional responses (for use, e.g., in psychological research). Using state-of-the-art computer animation technology we have modeled and rigged a physiologically accurate 3D model of the eyes, eyelids, and eyebrow regions and we have 'optimized' it for use with an interactive and web-deliverable platform. In addition, we have realized a prototype device for real-time control of eye motions and expressions, including unconsciously produced expressions, for application as in (1), (2), and (3) above. The 3D simulator of eye motion and ocular expression is, to our knowledge, the most advanced/realistic available so far for applications in character animation and medical pedagogy.

The highly nonlinear nature of generator and system
behaviour following a severe disturbance precludes the use of
classical linear control techniques. In this paper, a new
nonlinear control approach is proposed for transient and steady state
stability analysis of a synchronous generator. The control law for the
generator excitation is derived on the basis of the Lyapunov stability
criterion. The overall stability of the system is shown using the
Lyapunov technique. The application of the proposed controller to
simulated generator excitation control under a large sudden fault and
a wide range of operating conditions demonstrates that the new
control strategy is superior to the conventional automatic voltage
regulator (AVR) and shows very promising results.

In this study, the performance of a high-frequency arc
welding machine including a two-switch inverter is analyzed. The
system is controlled using two different techniques: (i) fuzzy logic
control (FLC) and (ii) state-space-averaging-based sliding control.
Fuzzy logic control does not need an accurate
mathematical model of the plant and can be used in nonlinear
applications. The second method needs the mathematical model of
the system. In this method the state space equations of the system are
derived for the two different "on" and "off" states of the switches. The
derived state equations are combined with the sliding control rule,
considering the duty cycle of the converter. The performance of the
system is analyzed by simulating it in the SIMULINK toolbox
of MATLAB. The simulation results show that the fuzzy logic
controller is more robust and less sensitive to parameter variations.

The home these days has not one computer connected to the Internet but rather a network of many devices within the home, and that network might be connected to the Internet. In such an environment, the potential for attacks is greatly increased. General security technologies cannot be applied directly because digital home environments use various wired and wireless networks, middleware and protocols, and home information appliances have restricted system resources. To offer secure home services, home network environments need access control for the various home devices and information that users want to access. Home network access control for user authorization is therefore a very important issue. In this paper we propose an access control model using RBAC in home network environments to provide home users with secure home services.
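A minimal RBAC permission check for such an environment might look like the following; the roles, devices and permissions are hypothetical examples, not the paper's model:

```python
# Hypothetical role-permission and user-role assignments for a home
# network. Permissions are (device, action) pairs.
ROLE_PERMS = {
    "parent": {("door_lock", "control"), ("camera", "view"),
               ("thermostat", "control")},
    "child":  {("thermostat", "view")},
    "guest":  {("camera", "view")},
}
USER_ROLES = {"alice": {"parent"}, "bob": {"child", "guest"}}

def check_access(user, device, action):
    """RBAC check: the decision goes user -> roles -> permissions,
    so policy is administered per role rather than per user."""
    return any((device, action) in ROLE_PERMS[role]
               for role in USER_ROLES.get(user, ()))
```

The indirection through roles is the point of RBAC: granting a new family member access means assigning a role, not editing per-device permission lists.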

Clustering techniques have been used by many intelligent software agents to group similar access patterns of Web users into high-level themes which express users' intentions and interests. However, such techniques have mostly focused on one salient feature of the Web documents visited by the user, namely the extracted keywords. The major aim of these techniques is to come up with an optimal threshold for the number of keywords needed to produce more focused themes. In this paper we focus on both keyword and similarity thresholds to generate more concentrated themes, and hence build a sounder model of the user's behavior. The purpose of this paper is twofold: to use distance-based clustering methods to recognize overall themes from the proxy log file, and to suggest efficient cut-off levels for the keyword and similarity thresholds which tend to produce clusters with better focus and efficient size.

In this paper we present a method for gene ranking
from DNA microarray data. More precisely, we calculate correlation
networks, which are unweighted and undirected graphs, from
microarray data of cervical cancer, where each network represents
a tissue of a certain tumor stage and each node in the network
represents a gene. From these networks we extract one tree for
each gene by a local decomposition of the correlation network. The
interpretation of a tree is that it represents the n-nearest neighbor
genes on the n-th level of a tree, measured by the Dijkstra distance,
and, hence, gives the local embedding of a gene within the correlation
network. For the obtained trees we measure the pairwise similarity
between trees rooted by the same gene from normal to cancerous
tissues. This evaluates the modification of the tree topology due to
progression of the tumor. Finally, we rank the obtained similarity
values from all tissue comparisons and select the top ranked genes.
For these genes the local neighborhood in the correlation networks
changes most between normal and cancerous tissues. As a result
we find that the top ranked genes are candidates suspected to be
involved in tumor growth, which indicates that our method
captures essential information from the underlying DNA microarray
data of cervical cancer.
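The per-gene trees and their comparison can be sketched as follows. BFS levels on a small unweighted toy graph stand in for the Dijkstra distance, and the gene names and edges are hypothetical:

```python
def bfs_levels(adj, root, depth=2):
    """Local 'tree' of a gene: nodes grouped by shortest-path
    distance from the root (on this unweighted toy graph, BFS
    levels coincide with the Dijkstra distance)."""
    seen, levels, frontier = {root}, [], [root]
    for _ in range(depth):
        nxt = set()
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    nxt.add(v)
        levels.append(frozenset(nxt))
        frontier = sorted(nxt)
    return levels

def tree_similarity(levels_a, levels_b):
    """Mean per-level Jaccard overlap of two local trees rooted at
    the same gene in two different tissue networks."""
    scores = []
    for a, b in zip(levels_a, levels_b):
        union = a | b
        scores.append(len(a & b) / len(union) if union else 1.0)
    return sum(scores) / len(scores) if scores else 1.0

# Toy correlation networks (hypothetical genes): the neighbourhood
# of "g1" changes between the normal and the tumor tissue.
normal = {"g1": ["g2", "g3"], "g2": ["g1", "g4"], "g3": ["g1"], "g4": ["g2"]}
tumor = {"g1": ["g2", "g5"], "g2": ["g1"], "g5": ["g1", "g6"], "g6": ["g5"]}
sim = tree_similarity(bfs_levels(normal, "g1"), bfs_levels(tumor, "g1"))
# A low score flags g1 as a gene whose local embedding changed.
```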

Due to the tremendous amount of information provided
by the World Wide Web (WWW) developing methods for mining
the structure of web-based documents is of considerable interest. In
this paper we present a similarity measure for graphs representing
web-based hypertext structures. Our similarity measure is mainly
based on a novel representation of a graph as linear integer strings,
whose components represent structural properties of the graph. The
similarity of two graphs is then defined as the optimal alignment of
the underlying property strings. In this paper we apply the well known
technique of sequence alignments for solving a novel and challenging
problem: Measuring the structural similarity of generalized trees.
In other words, we first transform our graphs, considered as high
dimensional objects, into linear structures. Then we derive similarity
values from the alignments of the property strings in order to
measure the structural similarity of generalized trees. Hence, we
transform a graph similarity problem into a string similarity problem
for developing an efficient graph similarity measure. We demonstrate that
our similarity measure captures important structural information by
applying it to two different test sets consisting of graphs representing
web-based document structures.
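The alignment step can be sketched with the standard Needleman-Wunsch recurrence over two property strings; the scoring values and the example strings (per-node out-degrees in level order) are illustrative assumptions:

```python
def align_score(a, b, match=1, mismatch=-1, gap=-1):
    """Global (Needleman-Wunsch) alignment score of two property
    strings, e.g. integer sequences derived from graph structure."""
    n, m = len(a), len(b)
    # dp[i][j]: best score aligning a[:i] with b[:j].
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            hit = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + hit,   # (mis)match
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[n][m]

# Toy property strings: out-degree of each node in level order.
s = align_score([2, 3, 1, 0], [2, 3, 0])
```

The similarity of two graphs is then a function of such scores over their property strings, exactly as a string similarity would be.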

In this paper we propose a new method for
simultaneously generating multiple quantiles corresponding to given
probability levels from data streams and massive data sets. This
method provides a basis for development of single-pass low-storage
quantile estimation algorithms, which differ in complexity, storage
requirement and accuracy. We demonstrate that such algorithms may
perform well even for heavy-tailed data.
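One simple member of this family of single-pass, low-storage estimators is a stochastic-approximation scheme; this is our illustration of the problem setting, not necessarily the paper's algorithm:

```python
import random

def stream_quantiles(stream, probs, step=0.05):
    """Single-pass, O(len(probs))-storage quantile estimation by
    stochastic approximation: each estimate is nudged up by step*p
    when a sample lands above it and down by step*(1-p) when below,
    so it settles where a fraction p of the data lies beneath it."""
    it = iter(stream)
    q = dict.fromkeys(probs, next(it))   # all estimates start at x1
    for x in it:
        for p in probs:
            if x > q[p]:
                q[p] += step * p
            elif x < q[p]:
                q[p] -= step * (1 - p)
    return q

# Several quantiles of a uniform(0, 1) stream, estimated at once.
rng = random.Random(42)
est = stream_quantiles((rng.random() for _ in range(20000)),
                       [0.25, 0.5, 0.9])
```

With a fixed step the estimates keep fluctuating by roughly the step size; a decreasing step schedule trades adaptivity for accuracy, one of the complexity/accuracy trade-offs the abstract mentions.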

There is a general feeling that Internet crime is an
advanced type of crime that has not yet infiltrated developing
countries like Uganda. The carefree nature of the Internet in which
anybody publishes anything at anytime poses a serious security threat
for any nation. Unfortunately, there are no formal records about this
type of crime for Uganda. Could this mean that it does not exist
there? The author conducted independent research to ascertain
whether cyber crimes have affected people in Uganda and, if so, to
discover where they are reported. This paper highlights the findings.

Internet Access Technologies (IAT) provide a means
through which the Internet can be accessed. The choice of a suitable
Internet technology is increasingly becoming an important issue for
ISP clients. Currently, the choice of IAT is based on the discretion and
intuition of the concerned managers and reliance on ISPs. In this
paper we propose a model and design algorithms that are used in
Internet access technology specification. In the proposed model, three
ranking approaches are introduced: concurrent ranking, stepwise
ranking and weighted ranking. The model ranks the IAT based on
distance measures computed in ascending order while the global
ranking system assigns weights to each IAT according to the position
held in each ranking technique, determines the total weight of a
particular IAT and ranks them in descending order. The final output
is an objective ranking of IAT in descending order.
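The global ranking step can be sketched as follows; the position-based weights, the equal technique weights, and the technology names are illustrative assumptions:

```python
def global_rank(rankings, weights=None):
    """Combine several per-technique rankings (each a list, best
    first) into one global ranking: an IAT earns more weight the
    higher it places in each technique, and the totals are sorted
    in descending order."""
    n = len(rankings[0])
    weights = weights or [1.0] * len(rankings)
    totals = {}
    for ranking, w in zip(rankings, weights):
        for pos, iat in enumerate(ranking):
            # Position weight: n points for first place, 1 for last.
            totals[iat] = totals.get(iat, 0.0) + w * (n - pos)
    return sorted(totals, key=lambda t: -totals[t])

# Three illustrative ranking techniques over three technologies.
concurrent = ["ADSL", "WiMAX", "Dial-up"]
stepwise = ["WiMAX", "ADSL", "Dial-up"]
weighted = ["ADSL", "Dial-up", "WiMAX"]
final = global_rank([concurrent, stepwise, weighted])
```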

The goal of admission control is to support the Quality
of Service demands of real-time applications via resource reservation
in IP networks. In this paper we introduce a novel Dynamic
Admission Control (DAC) mechanism for IP networks. The DAC
dynamically allocates network resources using the previous network
pattern for each path and uses the dynamic admission algorithm to
improve bandwidth utilization using bandwidth brokers. We evaluate
the performance of the proposed mechanism through trace-driven
simulation experiments in terms of blocking probability,
throughput and normalized utilization.

In communication networks where communication nodes are connected with finite capacity transmission links, the packet inter-arrival times are strongly correlated with the packet length and the link capacity (or the packet service time). Such correlation affects the system performance significantly, but little attention has been paid to this issue. In this paper, we propose a mathematical framework to study the impact of the correlation between the packet service times and the packet inter-arrival times on system performance. With our mathematical model, we analyze the system performance, e.g., the unfinished work of the system, and show that the correlation affects the system performance significantly. Some numerical examples are also provided.

A distributed wireless sensor network consists of several
nodes scattered over an area of interest. Those sensors have as their
only power supply a pair of batteries that must let them live up to five
years without replacement. That is why it is necessary to develop
power-aware algorithms that can save battery lifetime as
much as possible. In this document, a review of power-aware
design for sensor nodes is presented. As examples of implementations,
some resource and task management, communication, topology
control and routing protocols are described.

In the world of Peer-to-Peer (P2P) networking,
different protocols have been developed to make resource sharing
and information retrieval more efficient. The SemPeer protocol is a
new layer on Gnutella that transforms the connections of the nodes
based on semantic information to make information retrieval more
efficient. However, this transformation causes high clustering in the
network, which decreases the number of nodes reached and therefore
also the probability of finding a document. In this paper we
describe a mathematical model for the Gnutella and SemPeer
protocols that captures clustering-related issues, followed by a
proposition to modify the SemPeer protocol to achieve moderate
clustering. This modification is a sort of link management for the
individual nodes that allows the SemPeer protocol to be more
efficient, because the probability of a successful query in the P2P
network is considerably increased. For the validation of the models, we
ran a series of simulations that support our results.

Many electronic voting systems, classified mainly as homomorphic cryptography based, mix-net based and blind signature based, appeared after the eighties when zero knowledge proofs were introduced. The common ground of all three systems is that none of them works without real-time cryptologic calculations that must be performed on a server. As far as we know, the agent-based approach has not been used in a secure electronic voting system. In this study, an agent-based electronic voting schema, which does not require real-time calculations on the server side, is proposed. Conventional cryptologic methods are used in the proposed schema and some of the requirements of an electronic voting system are satisfied within the schema. The schema appears quite secure provided that the cryptologic methods and agents used are secure. In this paper, the proposed schema is explained and compared with known electronic voting systems.

For future Broadband ISDN, Asynchronous Transfer
Mode (ATM) is designed not only to support a wide range of traffic
classes with diverse flow characteristics, but also to guarantee
different quality of service (QoS) requirements. The QoS may be
measured in terms of cell loss probability and maximum cell delay.
In this paper, ATM networks in which the virtual path (VP)
concept is implemented are considered. By applying the Markov
deterministic process method, an efficient algorithm is derived to
compute the minimum capacity required to satisfy the QoS
requirements when multiple classes of on-off sources are multiplexed
onto a single VP. Using this result, we then propose a simple
algorithm to determine different combinations of VPs that achieve the
optimum total capacity required to satisfy the individual QoS
requirements (loss and delay).

Discovery schools in Jordan are connected in one flat
ATM bridge network. All schools connected to the network will hear
broadcast traffic. A high percentage of unwanted traffic, such as
broadcasts, consumes the bandwidth between the schools and the QRC.
Routers in the QRC have high CPU utilization. The number of
connections on a router is very high and may exceed the recommended
manufacturer specifications. One way to minimize the number of
connections to the routers in the QRC, and to minimize broadcast traffic,
is to use PPPoE. In this study, a PPPoE solution has been presented
which shows high performance for the clients when accessing the
school server resources. Despite the large number of the discovery
schools at MoE, the experimental results show that the PPPoE
solution is able to yield a satisfactory performance for each client at
the school and noticeably reduce the traffic broadcast to the QRC.

Semantic Web services will enable the semiautomatic
and automatic annotation, advertisement, discovery,
selection, composition, and execution of inter-organization business
logic, making the Internet become a common global platform where
organizations and individuals communicate with each other to carry
out various commercial activities and to provide value-added
services. There is a growing consensus that Web services alone will
not be sufficient to develop valuable solutions, due to the degree of
heterogeneity, autonomy, and distribution of the Web. This paper
deals with two of the hottest R&D and technology areas currently
associated with the Web: Web services and the Semantic Web. It
presents the synergies that can be created between Web services and
Semantic Web technologies to provide a new generation of e-services.

Most fuzzy clustering algorithms have shortcomings,
e.g. they are not able to detect clusters with non-convex
shapes, the number of clusters must be known a priori, and they
suffer from numerical problems such as sensitivity to
initialization. This paper studies the synergistic combination of
the hierarchical and graph theoretic minimal spanning tree based
clustering algorithm with the partitional Gath-Geva fuzzy clustering
algorithm. The aim of this hybridization is to increase the robustness
and consistency of the clustering results and to decrease the number
of the heuristically defined parameters of these algorithms to
decrease the influence of the user on the clustering results. For the
analysis of the resulting fuzzy clusters, a new fuzzy similarity measure
based tool is presented. The calculated similarities of the
clusters can be used for hierarchical clustering of the resulting
fuzzy clusters, and this information is useful for cluster merging and
for the visualization of the clustering results. As the examples used
to illustrate the operation of the new algorithm show,
the proposed algorithm can detect clusters from data with arbitrary
shapes and does not suffer from the numerical problems of the
classical Gath-Geva fuzzy clustering algorithm.

During the last decade, some long-lasting changes and
developments have been shaping the global society. The world is
entering a new society, often called the information or knowledge
society. In this paper, the information/knowledge society is
elaborated first. Starting in the year 2000, the European Union has
initiated special projects such as eEurope and eEurope+ and
activities such as the Bologna Process and the Socrates/Erasmus
Program. The paper reviews these activities in relation to the
information or knowledge society. Before the paper ends with a
conclusion, some views relevant to the topic are also presented.

The security of computer networks plays a strategic
role in modern computer systems. Intrusion Detection Systems (IDS)
act as the 'second line of defense', placed inside a protected
network and looking for known or potential threats in network traffic
and/or audit data recorded by hosts. We developed an Intrusion
Detection System using a LAMSTAR neural network to learn patterns
of normal and intrusive activities and to classify observed system
activities, and compared the performance of the LAMSTAR IDS with
other classification techniques using 5 classes of KDDCup99 data.
The LAMSTAR IDS gives better performance, at the cost of high
computational complexity and long training and testing times, when
compared to other classification techniques (Binary Tree classifier,
RBF classifier, Gaussian Mixture classifier). We further reduced the
computational complexity of the LAMSTAR IDS by reducing the
dimension of the data using principal component analysis, which in
turn reduces the training and testing time with almost the same
performance.
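
The dimension-reduction step above can be sketched in plain Python.
This is a hedged illustration only: it extracts just the first
principal component by power iteration, whereas the paper applies
full PCA before the LAMSTAR network; function names are ours.

```python
# Sketch: first principal component via power iteration on the
# covariance matrix, then projection of each sample onto it.
def first_principal_component(data, iters=100):
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Covariance matrix C = X^T X / n over the centered samples.
    cov = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / n
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]    # dominant eigenvector of C
    return v, means

def project(data, v, means):
    """Reduce each sample to its coordinate along v."""
    return [sum((row[j] - means[j]) * v[j] for j in range(len(v)))
            for row in data]
```

Samples projected this way keep most of the variance while shrinking
the input dimension, which is what reduces training and testing time.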

This paper deals with an application of content-based
image retrieval that extracts color features from natural images
stored in an image database by segmenting the images through
clustering. We employ a class of nonparametric techniques in which
the data points are regarded as samples from an unknown probability
density. Explicit computation of the density is avoided by using the
mean shift procedure, a robust clustering technique which requires
no prior knowledge of the number of clusters and does not constrain
their shape. A nonparametric technique for the recovery of
significant image features is presented, and a segmentation module
is developed using the mean shift algorithm to segment each image.
In these algorithms, the only user-set parameter is the resolution
of the analysis, and either gray-level or color images are accepted
as inputs. Extensive experimental results illustrate excellent
performance.
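
The mean shift idea can be sketched in one dimension. This is a
hedged, minimal version with a flat kernel: the real system operates
on joint spatial-color feature vectors, and the bandwidth h plays the
role of the single user-set "resolution" parameter mentioned above.

```python
# Minimal 1-D mean shift sketch (flat kernel): each point climbs to
# the mean of its h-neighbourhood until it settles on a density mode.
def mean_shift_1d(points, h, iters=50, tol=1e-6):
    modes = []
    for x in points:
        for _ in range(iters):
            window = [p for p in points if abs(p - x) <= h]
            new_x = sum(window) / len(window)
            if abs(new_x - x) < tol:
                break
            x = new_x
        modes.append(x)
    # Merge modes closer than h/2; each group is one cluster center.
    clusters = []
    for m in sorted(modes):
        if clusters and abs(m - clusters[-1][-1]) < h / 2:
            clusters[-1].append(m)
        else:
            clusters.append([m])
    return [sum(c) / len(c) for c in clusters]
```

Note that the number of clusters is never specified: it emerges from
the density modes, which is why mean shift needs no cluster count.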

Structural representation and technology mapping of
a Boolean function is an important problem in the design of
non-regenerative digital logic circuits (also called combinational
logic circuits). Library-aware function manipulation offers a
solution to this problem. Compact multi-level representations of
binary networks based on simple circuit structures, such as
AND-Inverter Graphs (AIG) [1] [5], NAND Graphs, OR-Inverter Graphs
(OIG), AND-OR Graphs (AOG), AND-OR-Inverter Graphs (AOIG),
AND-XOR-Inverter Graphs, and Reduced Boolean Circuits [8], exist in
the literature. In this work, we discuss a novel and efficient graph
realization for combinational logic circuits, represented using a
NAND-NOR-Inverter Graph (NNIG), which is composed of only
two-input NAND (NAND2), NOR (NOR2) and inverter (INV) cells.
The networks are constructed on the basis of irredundant disjunctive
and conjunctive normal forms, after factoring, comprising terms with
minimum support. Construction of an NNIG for a non-regenerative
function in normal form is straightforward, whereas for the
complementary phase it is developed by considering a virtual
instance of the function. The choice of the best NNIG for a given
function is based on the literal count, cell count and DAG node
count of the implementation at the technology-independent stage; in
case of a tie, the final decision is made after extracting the
physical design parameters.
We have considered the AIG representation for the reduced
disjunctive normal form and the best of OIG/AOG/AOIG for the
minimized conjunctive normal forms. This is necessitated by the
nature of certain functions, such as Achilles-heel functions. NNIGs
are found to exhibit 3.97% lower node count than AIGs and
OIG/AOG/AOIGs, and consume 23.74% and 10.79% fewer library cells
than AIGs and OIG/AOG/AOIGs, respectively, for the various samples
considered. We compare the power efficiency and delay improvement
achieved by optimal NNIGs over minimal AIGs and OIG/AOG/AOIGs for
various case studies. In comparison with functionally equivalent,
irredundant and compact AIGs, NNIGs report mean savings in power
and delay of 43.71% and 25.85%, respectively, after technology
mapping with a 0.35 micron TSMC CMOS process. In comparison with
OIG/AOG/AOIGs, NNIGs demonstrate average savings in power and delay
of 47.51% and 24.83%. With respect to the device count needed for
implementation in static CMOS logic style, NNIGs utilize 37.85% and
33.95% fewer transistors than their AIG and OIG/AOG/AOIG
counterparts.

This paper discusses a new, systematic approach to
the synthesis of an NP-hard class of non-regenerative Boolean
networks, described by FON[FOFF]={mi}[{Mi}], where for every
mj[Mj]∈{mi}[{Mi}] there exists another mk[Mk]∈{mi}[{Mi}] such
that their Hamming distance HD(mj, mk)=HD(Mj, Mk)=O(n), where n
represents the number of distinct primary inputs. The method
automatically ensures exact minimization for certain important
self-dual functions with 2^(n-1) points in their one-set. The
elements meant for grouping are determined from a newly proposed
weighted incidence matrix. The binary value corresponding to each
candidate pair is then correlated with the proposed binary value
matrix to enable direct synthesis. We recommend algebraic
factorization operations as a post-processing step to reduce the
literal count. The algorithm can be implemented in any high-level
language and achieves the best cost optimization for the problem
dealt with, irrespective of the number of inputs. For other cases,
the method is iterated to subsequently reduce the problem to one of
O(n-1), O(n-2), ... and then solved. In addition, it leads to
optimal results for problems exhibiting a higher degree of
adjacency, with a different interpretation of the heuristic, and the
results are comparable with other methods.
In terms of literal cost at the technology-independent stage, the
circuits synthesized using our algorithm enabled net savings over
AOI (AND-OR-Invert) logic, AND-EXOR logic (EXOR Sum-of-Products or
ESOP forms) and AND-OR-EXOR logic of 45.57%, 41.78% and 41.78%,
respectively, for the various problems.
Circuit-level simulations were performed for a wide variety of
case studies at 3.3V and 2.5V supply to validate the performance of
the proposed method and the quality of the resulting synthesized
circuits at two different voltage corners. Power estimation was
carried out for a 0.35 micron TSMC CMOS process technology. In
comparison with AOI logic, the proposed method enabled mean power
savings of 42.46%. With respect to AND-EXOR logic, the proposed
method yielded power savings of 31.88%, while in comparison with
AND-OR-EXOR networks, average power savings of 33.23% were obtained.
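
The pairing criterion above can be illustrated for the special case
HD = n, i.e. complementary minterms. This is a hedged sketch of the
distance test only; the paper's weighted incidence matrix and binary
value matrix are not reproduced, and the function names are ours.

```python
# Hamming distance between two minterm codes over n inputs.
def hamming(a, b, n):
    return bin((a ^ b) & ((1 << n) - 1)).count("1")

# Collect minterm pairs (mj, mk) whose codes differ in all n bit
# positions, the maximal-distance instance of the HD(mj, mk) = O(n)
# condition in the class of functions described above.
def complementary_pairs(minterms, n):
    pairs = []
    for i, mj in enumerate(minterms):
        for mk in minterms[i + 1:]:
            if hamming(mj, mk, n) == n:
                pairs.append((mj, mk))
    return pairs
```

For n = 3, the one-set {0, 7, 1, 6} pairs up as (0, 7) and (1, 6),
since 000/111 and 001/110 each differ in all three positions.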

To analyze the behavior of Petri nets, the accessibility
graph and Model Checking are widely used. However, if the analyzed
Petri net is unbounded, then the accessibility graph becomes
infinite and Model Checking cannot be used, even for small Petri
nets. ECATNets [2] are a category of algebraic Petri nets. The main
feature of ECATNets is their sound and complete semantics based on
rewriting logic [8] and its language Maude [9]. ECATNets may be
analyzed using the accessibility analysis and Model Checking
techniques defined in Maude. But these two techniques supported by
Maude do not work with infinite-state systems either. As a category
of Petri nets, ECATNets can be unbounded and hence infinite-state
systems. In order to know whether we can apply Maude's accessibility
analysis and Model Checking to an ECATNet, we propose in this paper
an algorithm that detects whether the ECATNet is bounded or not.
Moreover, we propose a rewriting-logic-based tool implementing this
algorithm. We show that the development of this tool using the
Maude system is facilitated by the reflectivity of rewriting logic.
Indeed, the self-interpretation of this logic allows us both to
model an ECATNet and to act on it.

The interaction model plays an important role in the
Model-based Intelligent Interface Agent Architecture for developing
intelligent user interfaces. In this paper we present some
improvements in the algorithms for developing the interaction model
of an interface agent, including the action segmentation algorithm,
the action pair selection algorithm, the final action pair selection
algorithm, the interaction graph construction algorithm and the
probability calculation algorithm. An analysis of the algorithms is
also presented. At the end of this paper, we introduce an
experimental program called "Personal Transfer System".

The paper describes a self-supervised parallel self-organizing neural network (PSONN) architecture for true color image segmentation. The proposed architecture is a parallel extension of the standard single self-organizing neural network (SONN) architecture and comprises an input (source) layer of image information, three single self-organizing neural network architectures for segmentation of the different primary color components in a color image scene, and one final output (sink) layer for fusion of the segmented color component images. Responses to the different shades of the color components are induced in each of the three single network architectures (meant for component-level processing) by applying a multilevel version of the characteristic activation function, which maps the input color information into different shades of color components, thereby yielding a processed component color image segmented on the basis of the different shades of component colors. The number of target classes in the segmented image corresponds to the number of levels in the multilevel activation function. Since the multilevel version of the activation function exhibits several subnormal responses to the input color image scene information, the system errors of the three component network architectures are computed from a subnormal linear index of fuzziness of the component color image scenes at the individual level. Several multilevel activation functions are employed for segmentation of the input color image scene using the proposed network architecture. Results of applying the multilevel activation functions to the PSONN architecture are reported on three real-life true color images. The results are substantiated empirically with the correlation coefficients between the segmented images and the original images.

This paper presents data annotation models at five levels of granularity (database, relation, column, tuple, and cell) of relational data, to address the unsuitability of most relational databases for expressing annotations. These models do not require any structural or schematic changes to the underlying database. They are also flexible, extensible, customizable, database-neutral, and platform-independent. This paper also presents an SQL-like query language, named Annotation Query Language (AnQL), for querying annotation documents. AnQL is simple to understand and exploits the broad existing knowledge and skill set of SQL.

A modularized design approach can facilitate the
modeling of complex systems and support behavior analysis and
simulation in an iterative, and thus complex, engineering process by
using encapsulated submodels of components and of their interfaces.
It can therefore improve design efficiency and simplify the solution
of complicated problems. A multi-driver off-road vehicle is
comparatively complicated. The driving-line is an important core
part of a vehicle and contributes significantly to its performance.
Multi-driver off-road vehicles have a complex driving-line, so their
performance is heavily dependent on it. A typical off-road vehicle's
driving-line system consists of a torque converter, transmission,
transfer case and driving axles, which transfer the power generated
by the engine and distribute it effectively to the driving wheels
according to the road condition. Accordingly, this paper puts
forward a modularized approach for the design and evaluation of a
vehicle's driving-line. It can be used to effectively estimate the
performance of the driving-line during the concept design stage.
Through appropriate analysis and assessment methods, an optimal
design can be reached. This method has been applied to practical
vehicle design; it can improve design efficiency and is convenient
for assessing and validating the performance of a vehicle,
especially a multi-driver off-road vehicle.

This paper presents a protocol aiming at proving that an encryption system contains structural weaknesses without disclosing any information about those weaknesses. A verifier can check in polynomial time that a given property of the cipher system's output has effectively been realized. This property has been chosen by the prover in such a way that it cannot be achieved by known attacks or exhaustive search, but only if the prover indeed knows some undisclosed weaknesses that may effectively endanger the cryptosystem's security. This protocol has been termed a zero-knowledge-like proof of cryptanalysis. In this paper, we apply this protocol to the Bluetooth core encryption algorithm E0, used in many mobile environments, and thus suggest that its security can seriously be put into question.

This paper examines the implementation of the RC5 block cipher for digital images, along with a detailed security analysis. A complete specification of the method of applying the RC5 block cipher to digital images is given. The security of the RC5 block cipher for digital images against entropy, brute-force, statistical, and differential attacks is explored from a strict cryptographic viewpoint. Thorough experimental tests are carried out with detailed analysis, verifying that RC5 is highly secure for real-time image encryption from a cryptographic viewpoint.

A Mobile Ad hoc Network is an autonomous system of
mobile nodes connected by multi-hop wireless links without
centralized infrastructure support. As mobile communication gains
popularity, the need for suitable ad hoc routing protocols will
continue to grow. Efficient dynamic routing is an important research
challenge in such networks. Bandwidth-constrained mobile devices
use on-demand routing protocols because of their effectiveness and
efficiency. Many researchers have conducted numerous simulations
comparing the performance of these protocols under varying
conditions and constraints, but most do not account for the MAC
protocol, which impacts the relative performance of the routing
protocols considered in different network scenarios. In this paper
we investigate how the choice of MAC protocol affects the relative
performance of ad hoc routing protocols under different scenarios.
We have evaluated the performance of these protocols using NS2
simulations. Our results show that the performance of ad hoc routing
protocols suffers when they are run over different MAC layer
protocols.

Optical Burst Switching (OBS) is a relatively new
optical switching paradigm. Contention and burst loss in OBS
networks are major concerns. To resolve contentions, an interesting
alternative to discarding the entire data burst is to drop the burst
partially. Partial burst dropping is based on the burst segmentation
concept, whose implementation is constrained by technical
challenges, besides the complexity added to the algorithms and
protocols on both edge and core nodes. In this paper, the burst
segmentation concept is investigated, and an implementation scheme
is proposed and evaluated. An appropriate dropping policy that
effectively manages the size of the segmented data bursts is
presented. The dropping policy is further supported by a new control
packet format that provides constant transmission overhead.

A new algorithm called Character-Comparison to Character-Access (CCCA) is developed to test the effect of two factors on the performance of the checking operation in string searching: 1) converting character comparisons and number comparisons into character accesses, and 2) the starting point of checking. An experiment is performed using both English text and DNA text of different sizes. The results are compared with five algorithms, namely Naive, BM, Inf_Suf_Pref, Raita, and Cycle. With the CCCA algorithm, the results suggest that the average number of total comparisons is improved by up to 35%. Furthermore, the results suggest that the clock time required by the other algorithms is improved by between 22.13% and 42.33% by the new CCCA algorithm.
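
The comparison-count metric used above can be made concrete with the
Naive baseline, one of the five algorithms compared. This is a
hedged sketch of the baseline and its instrumentation only; CCCA
itself is not specified here in enough detail to reproduce.

```python
# Naive string search instrumented to count character comparisons,
# the quantity CCCA is reported to improve by up to 35%.
def naive_search(text, pattern):
    """Return (match positions, number of character comparisons)."""
    positions, comparisons = [], 0
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        j = 0
        while j < m:
            comparisons += 1          # one character comparison
            if text[i + j] != pattern[j]:
                break
            j += 1
        if j == m:                    # full match at shift i
            positions.append(i)
    return positions, comparisons
```

Running the same counter inside each competing algorithm gives the
per-algorithm totals that the percentages above are computed from.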

Automatic reusability appraisal can be helpful in
evaluating the quality of developed or developing reusable software
components and in identifying reusable components in existing
legacy systems, which can save the cost of developing software from
scratch. But the issue of how to identify reusable components in
existing systems has remained relatively unexplored. In this paper,
we present a two-tier approach that studies both the structural
attributes and the usability or relevancy of a component to a
particular domain. Latent semantic analysis is used for the
feature-vector representation of various software domains. It
exploits the fact that feature-vector codes can be seen as documents
containing terms (the identifiers present in the components), and so
text modeling methods that capture co-occurrence information in
low-dimensional spaces can be used. Further, we devised a Neuro-
Fuzzy hybrid Inference System, which takes structural metric values
as input and calculates the reusability of the software component. A
decision tree algorithm is used to decide the initial set of fuzzy
rules for the Neuro-Fuzzy system. The results obtained are
convincing enough to propose the system for economical
identification and retrieval of reusable software components.

The mechanical properties of granular solids are
dependent on the flow of stresses from one particle to another
through inter-particle contact. Although some experimental methods
have been used to study the inter-particle contacts in the past,
preliminary work with these techniques indicated that they do not
have the necessary resolution to distinguish between those contacts
that transmit the load and those that do not, especially for systems
with a wide distribution of particle sizes. In this research, computer
simulations are used to study the nature and distribution of contacts
in a compact with wide particle size distribution, representative of
aggregate size distribution used in asphalt pavement construction.
The packing fraction, the mean number of contacts and the
distribution of contacts were studied for different scenarios. A
methodology to distinguish and compute the fraction of load-bearing
particles and the fraction of space-filling particles (particles that do
not transmit any force) is needed for further investigation.

Document image processing has become an
increasingly important technology in the automation of office
documentation tasks. During document scanning, skew is inevitably
introduced into the incoming document image. Since algorithms for
layout analysis and character recognition are generally very
sensitive to page skew, skew detection and correction in document
images are critical steps before layout analysis. In this paper, a
novel skew detection method is presented for binary document
images. The method considers selected characters of the text, which
are subjected to thinning and the Hough transform to estimate the
skew angle accurately. Several experiments have been conducted on
various types of documents, such as English documents, journals,
textbooks, documents in different languages, documents with
different fonts, and documents with different resolutions, to reveal
the robustness of the proposed method. The experimental results show
that the proposed method is accurate compared with well-known
existing methods.
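
The Hough step can be sketched as follows. This is a hedged, minimal
version: reference points stand in for the thinned character pixels
of the paper, the angle window and discretization are assumptions,
and for horizontal text lines the skew is theta - 90 degrees.

```python
import math

# Each point votes in a (theta, rho) accumulator for the lines it
# could lie on; the dominant bin gives the text-line angle, hence
# the page skew.
def estimate_skew(points, theta_range=15, rho_step=1.0):
    acc = {}
    for x, y in points:
        # Scan near-horizontal lines in 0.1-degree steps.
        for t10 in range(-theta_range * 10, theta_range * 10 + 1):
            theta = math.radians(90 + t10 / 10.0)
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t10, round(rho / rho_step))
            acc[key] = acc.get(key, 0) + 1
    best = max(acc, key=acc.get)
    return best[0] / 10.0   # skew angle in degrees
```

Collinear character points concentrate their votes in a single bin,
so the estimate stays stable even when isolated noise pixels vote.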

This paper investigates spreading sequence and
receiver code synchronization techniques for satellite-based CDMA
communication systems. The performance of a CDMA system depends on
the autocorrelation and cross-correlation properties of the
spreading sequences used. In this paper we propose the use of the
chaotic Lu system to generate binary sequences for spreading codes
in a direct-sequence spread CDMA system. To minimize multiple access
interference (MAI), we propose the use of a genetic algorithm for
optimal selection of chaotic spreading sequences. To solve the
problem of transmitter-receiver synchronization, we use
passivity-based control. The concept of semipassivity is defined to
find simple conditions which ensure boundedness of the solutions of
coupled Lu systems. Numerical results are presented to show the
effectiveness of the proposed approach.

In this paper, we present the information life cycle and analyze the importance of managing the corporate application portfolio across this life cycle. The approach presented here does not correspond merely to an extension of the traditional information system development life cycle; it is based on the generic life cycle employed in other contexts, such as manufacturing or marketing. This paper proposes a model of the information system life cycle, based on the assumption that a system has a limited life which may nevertheless be extended. The model is also applied in several cases; two examples of the framework's application, in a construction enterprise and in a manufacturing enterprise, are reported here.

In IETF RFC 2002, Mobile IP was developed to
enable laptops to maintain Internet connectivity while moving
between subnets. However, packet loss arises when switching subnets
because network connectivity is lost while the mobile host registers
with the foreign agent, which incurs large end-to-end packet delays.
The criterion for initiating a simple and fast full-duplex
connection between the home agent and foreign agent, to reduce the
roaming duration, is the key issue considered in this paper.
State-transition Petri nets of the modeling scenario-based CIA
(communication inter-agents) procedure, an extension to the basic
Mobile IP registration process, were designed and manipulated to
describe the system in discrete events. A heuristic configuration
file for the registration parameters was created during a practical
setup session on a Cisco Router-1760 platform using IOS 12.3(15)T
and TFTP server software. Finally, stand-alone performance
simulations in Simulink (Matlab), within each subnet and also
between subnets, are presented, reporting better end-to-end packet
delays. The results verified the effectiveness of our Mathcad
analytical manipulation and experimental implementation, showing
lower end-to-end packet delay for Mobile IP using the
CIA-procedure-based early registration. Furthermore, packet flow
between subnets was improved, reducing inter-subnet losses.

In this study, a K-Means-like clustering technique with a hierarchical initial set (HKM) has been implemented. The goal of this study is to show that clustering document sets enhances precision in information retrieval systems, as was proved by Bellot & El-Beze for the French language. A comparison is made between the traditional information retrieval system and the clustered one, and the effect of increasing the number of clusters on precision is also studied. The indexing technique is Term Frequency * Inverse Document Frequency (TF * IDF). It has been found that Hierarchical K-Means-like clustering (HKM) with 3 clusters over 242 Arabic abstract documents from the Saudi Arabian National Computer Conference gives significant results compared with a traditional information retrieval system without clustering. Additionally, it has been found that increasing the number of clusters is not necessary to improve precision further.
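
The TF * IDF indexing used above can be sketched over tokenized
documents; this is a minimal illustration of the weighting only (the
HKM clustering itself is omitted), using one common tf-idf variant.

```python
import math

# TF * IDF: term frequency within a document, scaled down by how
# many documents the term appears in across the collection.
def tf_idf(docs):
    """docs: list of token lists -> list of {term: weight} vectors."""
    n = len(docs)
    df = {}                                # document frequency
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        tf = {}                            # raw term counts
        for term in doc:
            tf[term] = tf.get(term, 0) + 1
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors
```

Terms occurring in every document get weight zero, so the index
emphasizes the terms that discriminate between documents, which is
what makes the subsequent clustering meaningful.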

Thermally insulating ceramic coatings, also known as
thermal barrier coatings (TBCs), have been essential technologies
for improving the performance and efficiency of advanced gas
turbines in service at extremely high temperatures. The damage
mechanisms of air-plasma-sprayed YSZ thermal barrier coatings with
various microstructures were studied by microscopic techniques after
thermal cycling. The typical degradation of plasma-sprayed TBCs that
occurs during cyclic furnace testing of YSZ and alumina coatings on
a titanium alloy is analyzed. In the present investigation, the
effects of topcoat thickness, bond coat oxidation, thermal cycle
length and test temperature are investigated using thermal cycling.
These results were correlated with stresses measured by a
spectroscopic technique in order to understand the specific damage
mechanisms. The failure mechanism of the former bond coats was found
to involve fracture initiation at the thermally grown oxide (TGO)
interface and at the TGO-bond coat interface. The failure mechanism
of the YSZ was found to involve a combination of fracture along the
interface between the TGO and the bond coat.

A Geographic Information System (GIS) is a
computer-based tool used extensively to solve various engineering
problems related to spatial data. In spite of the growing popularity
of GIS, its full potential for the construction industry has not
been realized. In this paper, a summary of up-to-date work on
spatial applications of GIS technologies in the construction
industry is presented. GIS technologies have the potential to solve
space-related problems of the construction industry involving
complex visualization, integration of information, route planning,
e-commerce, cost estimation, etc. A GIS-based methodology to handle
the time and space issues of construction project scheduling is
developed and discussed in this paper.

Wheeled Mobile Robots (WMRs) are driven by motors
coupled to their wheels. Depending on the desired design of the WMR,
technicians use DC motors for motion control. In this paper, the
author analyzes how to choose a DC motor matched to its application,
especially for a WMR. The specification of a DC motor suitable for
the desired WMR is determined using a MATLAB Simulink model.
Therefore, this paper mainly focuses on software application of
MATLAB and control technology. As the driving system for the DC
motors, a Peripheral Interface Controller (PIC) based control system
is designed, including the assembly software and an H-bridge control
circuit. This driving system is used to drive two DC gear motors,
which control the motion of the WMR using the differential drive
technique. For the design analysis of the motor driving system, a
PIC16F84A is used, and five sensor inputs are tested with five
ON/OFF switches. The outputs of the PIC are the commands that drive
the two DC gear motors via the inputs of the H-bridge circuit. In
this paper, the control techniques of the PIC microcontroller and
the H-bridge circuit and the mechanism assignments of the WMR are
combined and analyzed, focusing mainly on the modeling and
simulation of a DC motor using MATLAB.

This paper presents experimental results on the
leakage current waveforms that appear on a porcelain insulator
surface due to artificial pollutants. The tests were performed using
the chemical compounds NaCl, Na2SiO3, H2SO4, CaO, Na2SO4, KCl,
Al2SO4, MgSO4, FeCl3, and TiO2. The insulator surface was coated
with these compounds and dried, and then tested in a chamber where
high voltage was applied. Using correspondence analysis, the results
indicated that the fundamental harmonic of the leakage current was
very close to the applied voltage, and the third harmonic of the
leakage current was close to the resulting leakage current
amplitude. The first harmonic power was correlated with the first
harmonic amplitude of the leakage current, and the third harmonic
power was close to the third harmonic amplitude. The chemical
compounds H2SO4 and Na2SiO3 affected the power factor by around 70%;
both are the most conductive, as they increase the power factor most
drastically among the chemical compounds.

Female breast cancer is the second most frequent cancer after cervical cancer. Surgery is the most common treatment for breast cancer, followed by chemotherapy as the treatment of choice. Although effective, chemotherapy causes serious side effects. Controlled-release drug delivery is an alternative method to improve the efficacy and safety of the treatment. It can keep the drug dosage between the minimum effective concentration (MEC) and minimum toxic concentration (MTC) within tumor tissue and reduce the damage to normal tissue and the side effects. Because an in vivo experiment on this system can be time-consuming and labor-intensive, a mathematical model is desired to study the effects of important parameters before the experiments are performed. Here, we describe a 3D mathematical model to predict the release of doxorubicin from pluronic gel to treat human breast cancer. This model can, ultimately, be used to effectively design the in vivo experiments.

In this paper, we propose a method to extract road
signs. First, the grabbed image is converted into the HSV color
space to detect the road signs. Second, morphological operations are
used to reduce noise. Finally, the road sign is extracted using its
geometric properties. The feature extraction of the road sign is
done using color information. The proposed method has been tested in
real situations. The experimental results show that the proposed
method can extract road sign features effectively.
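
The color-detection step can be sketched with a per-pixel HSV test.
This is a hedged illustration only: the hue window below targets red
signs and is our assumption, not a threshold taken from the paper.

```python
import colorsys

# Convert an RGB pixel to HSV and flag it as a likely road-sign
# pixel when its hue falls in a (wrap-around) red window and its
# saturation and value are high enough to exclude gray background.
def is_sign_pixel(r, g, b, hue_window=(0.95, 0.05),
                  min_sat=0.5, min_val=0.2):
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    lo, hi = hue_window
    red_hue = h >= lo or h <= hi     # hue wraps around at 1.0
    return red_hue and s >= min_sat and v >= min_val
```

The resulting binary mask is what the morphological noise-reduction
and geometric extraction stages described above would operate on.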

Turbulence modeling of large-scale flow over a vegetated surface is complex. Such problems involve large computational domains, while the characteristics of the flow near the surface are also involved. In modeling large-scale flow, surface roughness, including vegetation, is generally taken into account by means of roughness parameters in the modified law of the wall. However, the turbulence structure within the canopy region cannot be captured with this method; instead, a method that applies source/sink terms to model plant drag can be used. These models have been developed and tested intensively, but only with simple surface geometries. This paper compares the use of roughness parameters and of additional source/sink terms in modeling the effect of plant drag on wind flow over a complex vegetated surface. The RNG k-ε turbulence model with the non-equilibrium wall function was tested in both cases. In addition, the k-ω turbulence model, which is claimed to be computationally stable, was also investigated with the source/sink terms. All numerical results were compared to the experimental results obtained at the study site, Mason Bay, Stewart Island, New Zealand. In the near-surface region, the results obtained using the source/sink terms are more accurate than those using roughness parameters. The k-ω turbulence model with source/sink terms is more appropriate, as it is more accurate and more computationally stable than the RNG k-ε turbulence model. In the higher region, there is no significant difference among the results obtained from all simulations.

The two-dimensional gel electrophoresis method (2-DE) is widely used in proteomics to separate thousands of proteins in a sample. By comparing the protein expression levels in a normal sample with those in a diseased one, it is possible to identify a meaningful set of marker proteins for the targeted disease. The major shortcomings of this approach are the inherent noise and the irregular geometric distortions of spots observed in 2-DE images, largely caused by varying experimental conditions. In the protein analysis of samples, these problems eventually lead to incorrect conclusions. In order to minimize their influence, this paper proposes a partition-based pair extension method that performs spot-matching on a set of gel images multiple times and segregates the more reliable mapping results, which can improve the accuracy of gel image analysis. The improved accuracy of the proposed method is demonstrated through various experiments on real 2-DE images of human liver tissues.

The paper discusses the results obtained in predicting the reinforcement of singly reinforced beams using Neural Networks (NN), Support Vector Machines (SVMs), and tree-based models. The major advantage of SVMs over NNs is that they minimize a bound on the generalization error of the model rather than a bound on the mean square error over the data set, as is done in NNs. The tree-based approach divides the problem into a small number of subproblems to reach a conclusion. A data set was created for different beam parameters, with the reinforcement calculated using the limit state method, for model creation and validation. The results from this study suggest remarkably good performance of the tree-based and SVM models. Further, this study found that these two techniques work well, and even better than Neural Network methods. A comparison of predicted values with actual values suggests a very good correlation coefficient with all four techniques.

This paper discusses the characteristics of the Urdu script and Urdu Nastaleeq, and a simple yet novel and robust technique to recognize printed Urdu script without a lexicon. Urdu, belonging to the Arabic script family, is cursive and complex in nature; the main complexity of Urdu compound/connected text is not its connections but the forms/shapes the characters take when placed at the initial, middle, or final position of a word. The character recognition technique presented here uses the inherent complexity of the Urdu script to solve the problem. A word is scanned and analyzed for its level of complexity; the point where the level of complexity changes is marked as a character boundary, segmented, and fed to neural networks. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on average.

An Early Intervention Program (EIP) is required to improve the overall development of children with Trisomy 21 (Down syndrome). In order to help trainers and parents in the implementation of the EIP, a support system has been developed. The support system is able to screen data automatically, store and analyze data, generate an individual EIP (curriculum) with an optimal training duration, and generate training automatically. The system consists of hardware and software, where the software has been implemented using the Java language on Linux Fedora. The software has been tested to ensure its functionality and reliability, and the prototype has also been tested in Down syndrome centers. Test results show that the system can reliably generate an individual curriculum, including the training program, to improve the motor, cognitive, and combined abilities of children with Down syndrome under 6 years of age.

This paper considers the Zlin region in terms of its demographic conditions, in particular the residential structure and the educational background of the inhabitants. The paper also considers migration of the population within the Zlin region, which is important for the conservation of the region's working potential.

This study investigated the morphology of the Spanner Barb (Puntius lateristriga Valenciennes, 1842) and the water quality at Thepchana Waterfall. The study was conducted at Thepchana Waterfall, Khao Nan National Park, from March to May 2007. Forty Spanner Barb were collected, 20 males and 20 females. Males averaged 5.57 cm in standard length, 6.62 cm in total length, and 5.18 g in total body weight. Females averaged 7.25 cm in standard length, 8.24 cm in total length, and 10.96 g in total body weight. The length (L)-weight (W) relationships for combined sexes, males, and females were log W = -2.137 + 3.355 log L, log W = -0.068 + 3.297 log L, and log W = -2.068 + 3.297 log L, respectively. The Spanner Barb is a small fish with a compressed body form; a terminal mouth; villiform teeth; ctenoid scales; a concave tail; and a general body color of yellowish olive with a slight reddish tint to the fins, a vertical band beginning below the dorsal fin, and a horizontal stripe from the base of the tail almost to the vertical band. They also had a vertical band midway between the eye and the first vertical band, and a black spot above the anal fin. The bladder was J-shaped, and small insects and insect larvae were found inside it. The ratio of body length to gut length was 1:1. The water temperature ranged from 25.00 to 27.00 °C, which was appropriate for their habitat. The pH ranged from 6.65 to 6.90. Dissolved oxygen ranged from 4.55 to 4.70 mg/l, water hardness from 31.00 to 48.00 mg/l, and the amount of ammonia was about 0.25 mg/l.
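
The reported length-weight relationships are log-linear, so weight predictions follow directly. A small sketch using the combined-sexes coefficients; whether L denotes standard or total length is not stated in the abstract, so the printed values are purely illustrative.

```python
import math

def predicted_weight(length_cm, a=-2.137, b=3.355):
    """Evaluate the combined-sexes model log W = a + b log L (base 10)."""
    return 10 ** (a + b * math.log10(length_cm))

# A slope b > 3 indicates positive allometry: weight increases
# slightly faster than the cube of length.
for L in (5.57, 7.25):
    print(f"L = {L} cm -> predicted W = {predicted_weight(L):.2f} g")
```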

This study aimed at developing a forecasting model for the number of Dengue Haemorrhagic Fever (DHF) cases in Northern Thailand using time series analysis. We developed Seasonal Autoregressive Integrated Moving Average (SARIMA) models on the data collected between 2003 and 2006 and then validated the models using the data collected between January and September 2007. The results showed that the forecast curves were consistent with the pattern of the actual values. The most suitable model was the SARIMA(2,0,1)(0,2,0)12 model, with an Akaike Information Criterion (AIC) of 12.2931 and a Mean Absolute Percent Error (MAPE) of 8.91713. The fit of the SARIMA(2,0,1)(0,2,0)12 model was adequate for the data, with a Portmanteau statistic Q20 = 8.98644 (χ²20,0.95 = 27.5871, P > 0.05), indicating no significant autocorrelation between residuals at different lag times.
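
For readers unfamiliar with the seasonal notation, the (0,2,0)12 factor of the selected model means the monthly series is seasonally differenced twice at lag 12 before the non-seasonal ARMA(2,1) part is fitted. A minimal numpy sketch of that differencing step; the synthetic monthly series is an assumption for illustration.

```python
import numpy as np

def seasonal_diff(x, period=12, order=2):
    """Apply order-D seasonal differencing at the given period:
    (1 - B^12)^2 x_t for the (0,2,0)12 part of the SARIMA model."""
    for _ in range(order):
        x = x[period:] - x[:-period]
    return x

# A purely seasonal monthly pattern with a linear-in-year trend is
# reduced to (near) zero by (1 - B^12)^2, which is exactly what the
# model's D = 2 seasonal differencing removes before fitting ARMA(2,1).
months = np.arange(72)
series = 100 + 5 * (months // 12) + 20 * np.sin(2 * np.pi * months / 12)
print(np.allclose(seasonal_diff(series), 0))  # True
```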

This study investigated the pattern and seasonal index of influenza cases in Thailand. Our results showed that southern Thailand had the highest influenza incidence among the four regions of Thailand (i.e. north, northeast, central and southern Thailand). The influenza pattern in southern Thailand was similar to that of northeastern Thailand. Seasonal index values of influenza cases in Thailand were higher in the hot season than in the wet season. Influenza cases started to increase at the beginning of the hot season (April), reached a maximum in August, rapidly declined in the middle of the wet season, and reached the lowest value in December. Seasonal index values for northern Thailand differed from those of the other regions of Thailand.

This work investigated the phenology of the Parah tree (Elateriospermum tapos) using the General Purpose Atmosphere Plant Soil Simulator (GAPS model) to determine the amount of Plant Available Water (PAW) in the soil. We found a correlation between PAW and the timing of budburst and flower burst at Khao Nan National Park, Nakhon Si Thammarat, Thailand. PAW from the GAPS model can be used as an indicator of soil water stress; a low amount of PAW may lead to leaf shedding in Parah trees.

This study aims at using multi-source data to monitor coral biodiversity and coral bleaching. We used the coral reef at Racha Islands, Phuket, as the study area. There were three sources of data: coral diversity, sensor-based data, and satellite data.

The climatic, vegetation, soil, and hydrological characteristics of a tropical montane cloud forest at Khao Nan, Nakhon Si Thammarat, were studied during 18-21 April 2007 to gain a better understanding of cloud forests. The results showed that as the air temperature at the Sanyen cloud forest increased, the percent relative humidity decreased. The amount of solar radiation at the Sanyen cloud forest had a positive association with that at the Parah forest, but was very low, ranging from 0 to 19 W/m2, whereas the amount at the Parah forest was high, ranging from 0 to 1000 W/m2. Leaf width, leaf length, leaf thickness, and leaf area did not differ with elevation. As the elevation increased, bush height and tree height decreased; there was no association between bush width or bush ratio and elevation. As the elevation increased, the percent epiphyte cover and the percent soil moisture increased, but water temperature, conductivity, and dissolved oxygen decreased. The percent soil moisture and organic content were higher at elevations above 900 m than at elevations below.

This study aimed at developing visualization tools for integrating CloudSat images and water vapor satellite images. KML was used to integrate data from the CloudSat satellite and the GMS-6 water vapor satellite. CloudSat 2D images were transformed into 3D polygons in order to obtain 3D images. Before overlaying the images on Google Earth, the GMS-6 water vapor satellite images had to be rescaled into linear images. A web service was developed using webMathematica. For evaluation, the shoreline from the GMS-6 images was compared with the shoreline from LandSat images on Google Earth; the results showed that they matched closely. The CloudSat visualizations were likewise compared with GMS-6 images on Google Earth, and the results showed that the CloudSat and GMS-6 images were highly correlated.

This work develops a process for extracting pixel values from satellite remote sensing image data over Thailand, an important and effective aid to rainfall forecasting. The paper presents an approach for forecasting possible rainfall areas based on pixel values from remote sensing satellite images. First, the method automatically extracts pixel value data from the satellite image sequence. Then, a data process is designed to enable the inference of correlations between pixel values and possible rainfall occurrences. The results show that days with a high average pixel value in the daily water vapor data also have a high amount of daily rainfall, suggesting that the average pixel value can be used as an indicator of rain events. There are positive associations between the pixel values of daily water vapor images and the amount of daily rainfall at each rain-gauge station throughout Thailand. The proposed approach provides meteorologists with a helpful tool for rainfall forecasting by automating the analysis and interpretation of meteorological remote sensing data.
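
The association step can be illustrated with a Pearson correlation coefficient between averaged pixel values and gauge rainfall; the two arrays here are synthetic stand-ins, not the study's data.

```python
import numpy as np

# Hypothetical daily series: averaged water-vapor pixel value per day
# and rainfall (mm) recorded at one rain-gauge station.
pixel_mean = np.array([110., 95., 130., 150., 90., 125., 160., 100.])
rainfall = np.array([12., 5., 20., 28., 3., 18., 33., 7.])

# Pearson correlation; a value near +1 supports using the averaged
# pixel value as an indicator of rain events.
r = np.corrcoef(pixel_mean, rainfall)[0, 1]
print(round(r, 3))
```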

Nowadays, the importance of energy saving is clear to everyone. Given the increasing price of fuels and the problems of environmental pollution, great efforts are being made everywhere to use less fuel and to use it more efficiently. This paper studies the optimization of gas consumption in gas-burner space heaters. In the oven of each gas-burner space heater there are two baffles that prevent the hot air (the product of the combustion of natural gas) from leaving the oven directly without delivering its heat to the target environment, such as a room. These baffles cause extra circulation that helps the hot air deliver its heat to the target environment, which means the exhaust air temperature is lower than when there are no baffles. The aim of this paper is to use the maximum potential energy of the natural gas to produce heat. In this study, combustion in the gas-burner space heater is simulated and optimized with the help of finite volume software (FLUENT). Finally, the validity of this method is demonstrated by comparing the software results with experimental results.

This paper presents Reliability-Based Topology Optimization (RBTO) based on Evolutionary Structural Optimization (ESO). An actual design involves uncertain conditions such as material properties, operational loads, and dimensional variation. Deterministic Topology Optimization (DTO) is obtained without considering these uncertainties. RBTO, however, involves the evaluation of probabilistic constraints, which can be done in two different ways: the reliability index approach (RIA) and the performance measure approach (PMA). The limit state function is approximated using Monte Carlo simulation and Central Composite Design for the reliability analysis. ESO, one of the topology optimization techniques, is adopted for the topology optimization. Numerical examples are presented to compare DTO with RBTO.

In this paper the authors present the framework of a
system for assisting users through counseling on personal health, the
Personal Health Assistance Service Expert System (PHASES).
Personal health assistance systems need Personal Health Records
(PHR), which support wellness activities, improve the understanding
of personal health issues, enable access to data from providers of
health services, strengthen health promotion, and in the end improve
the health of the population. This is especially important in societies
where the health costs increase at a higher rate than the overall
economy. The most important elements of a healthy lifestyle are
related to food (such as balanced nutrition and diets), activities for
body fitness (such as walking, sports, fitness programs), and other
medical treatments (such as massage, prescriptions of drugs). The
PHASES framework uses an ontology of food, which includes
nutritional facts, an expert system keeping track of personal health
data that are matched with medical treatments, and a comprehensive
data transfer between patients and the system.

The most influential programming paradigm today is object-oriented (OO) programming, and it is widely used in education and industry. Recognizing the importance of equipping students with OO knowledge and skills, it is not surprising that most Computer Science degree programs offer OO-related courses. How do we assess whether students have acquired the right object-oriented skills after they have completed their OO courses? What are object-oriented skills? Currently, no assessment technique can provide this answer. Traditional forms of OO programming assessment provide a way of assigning numerical scores to determine letter grades, but this rarely reveals information about how students actually understand OO concepts. A better understanding of how to define and assess OO skills is therefore needed, which we pursue by developing a criterion-referenced model. This is especially critical in the context of Malaysia, where there is currently a growing concern over the level of competency of Malaysian IT graduates in object-oriented programming. This paper discusses the approach used to develop the criterion-referenced assessment model, which can serve as a guideline when conducting OO programming assessment. The proposed model is derived using the Goal Question Metric methodology, which helps formulate the metrics of interest. The paper concludes with a few suggestions for further study.

Checkpointing is one of the commonly used techniques to provide fault tolerance in distributed systems, so that the system can operate even if one or more components have failed. However, mobile computing systems are constrained by low bandwidth, mobility, lack of stable storage, frequent disconnections, and limited battery life. Hence, checkpointing protocols with fewer synchronization messages and fewer checkpoints are preferred in mobile environments. There are two different, though not orthogonal, approaches to checkpointing mobile computing systems: time-based and index-based. Our protocol is a fusion of these two approaches, though not the first of its kind. In the present exposition, an index-based checkpointing protocol has been developed which uses time to indirectly coordinate the creation of consistent global checkpoints for mobile computing systems. The proposed algorithm is non-blocking, adaptive, and does not use any control messages. Compared to other contemporary checkpointing algorithms, it is computationally more efficient because it takes fewer checkpoints and does not need to compute dependency relationships. A brief account of important and relevant works in both fields, time-based and index-based, has also been included in the presentation.

The group mutual exclusion (GME) problem is an interesting generalization of the mutual exclusion problem. Several solutions to the GME problem have been proposed for message-passing distributed systems. However, none of these solutions is suitable for real-time distributed systems. In this paper, we propose a token-based distributed algorithm for the GME problem in soft real-time distributed systems. The algorithm uses the concepts of a priority queue, a dynamic request set, and the process state. It uses a first-come-first-served approach in selecting the next session type between the same priority levels and satisfies the concurrent occupancy property. The algorithm allows all n processors to be inside their critical section (CS) provided they request the same session. A performance analysis and correctness proof of the algorithm have also been included in the paper.

To realize the vision of ubiquitous computing, it is
important to develop a context-aware infrastructure which can help
ubiquitous agents, services, and devices become aware of their
contexts because such computational entities need to adapt themselves
to changing situations. A context-aware infrastructure manages the
context model representing contextual information and provides
appropriate information. In this paper, we introduce Context-Aware
Middleware for URC System (hereafter CAMUS) as a context-aware
infrastructure for a network-based intelligent robot system and discuss
the ontology-based context modeling and reasoning approach which is
used in that infrastructure.

Nowadays, a passenger car suspension must meet high performance criteria with light weight, low cost, and low energy consumption. A pilot-controlled proportional valve is designed and analyzed to obtain a small pressure change rate after blow-off, and a reverse damping mechanism is adopted to obtain a fast damper response. The reverse continuous variable damper is designed as an HS-SH damper, which offers good body control with a reduced input force transferred from the tire compared with any other type of suspension system. The damper structure is designed so that the rebound and compression damping forces can be tuned independently, with the variable valve placed externally. The rate of pressure change with respect to the flow rate after blow-off becomes smooth as the fixed orifice size increases, which means that the blow-off slope is controllable via the fixed orifice size. Damping forces are measured while varying the solenoid current at different piston velocities to confirm a maximum hysteresis of 20 N, linearity, and the variance of the damping force. The damping force variance is wide and continuous, and is controlled by the spool opening, a scheme usually adopted in proportional valves. The reverse continuous variable damper developed in this study is expected to be utilized in semi-active suspension systems in passenger cars after its performance and the simplicity of its design are confirmed through a real car test.

For broadband wireless mobile communication systems, orthogonal frequency division multiplexing (OFDM) is a suitable modulation scheme. The frequency offset between the transmitter and receiver local oscillators is the main drawback of OFDM systems, as it causes intercarrier interference (ICI) in the subcarriers of the OFDM system, which degrades the bit error rate (BER) performance. In this paper, an improved self-ICI cancellation scheme is proposed to improve system performance. The proposed scheme is based on the discrete Fourier transform-inverse discrete Fourier transform (DFT-IDFT). The simulation results show a satisfactory improvement in the BER performance of the proposed scheme.

A four-lobe pressure dam bearing, produced by cutting two pressure dams on the upper two lobes and two relief tracks on the lower two lobes of an ordinary four-lobe bearing, is found to be more stable than a conventional four-lobe bearing. In this paper, a four-lobe pressure dam bearing supporting rigid and flexible rotors is analytically investigated to determine its performance when the L/D ratio is varied in the range 0.75 to 1.5. The static and dynamic characteristics are studied at various L/D ratios. The results show that the stability of a four-lobe pressure dam bearing increases with a decrease in L/D ratio for both rigid and flexible rotors.

The most common forensic activity is searching a hard disk for strings of data. Nowadays, investigators and analysts increasingly encounter large, even terabyte-sized data sets when conducting digital investigations, so sequential searching can take weeks to complete. There are two primary search methods: index-based search and bitwise search. Index-based searching is very fast after the initial indexing, but the initial indexing takes a long time. In this paper, we discuss a high-speed bitwise search model for large-scale digital forensic investigations. We used a pattern-matching board, of the kind generally used for network security, to search for strings and complex regular expressions. Our results indicate that in many cases the use of a pattern-matching board can substantially increase the performance of digital forensic search tools.

The search for factors that influence user behavior has remained an important theme for both the academic and practitioner Information Systems communities. In this paper, we examine relevant user behaviors in the phase after adoption and investigate two factors that are expected to influence such behaviors, namely User Involvement (UI) and Personal Innovativeness in IT (PIIT). We conduct a field study to examine how these factors influence post-adoption behavior and how they are interrelated. Building on theoretical premises and prior empirical findings, we propose and test two alternative models of the relationship between these factors. Our results reveal that the best explanation of post-adoption behavior is provided by the model in which UI and PIIT independently influence post-adoption behavior. Our findings have important implications for research and practice. To that end, we offer directions for future research.

Air bending is one of the important metal forming processes because of its simplicity and wide field of application. The accuracy of the analytical and empirical models reported for the analysis of bending processes is governed by simplifying assumptions, and these models do not consider the effect of dynamic parameters. A number of studies report finite element analysis (FEA) of V-bending, U-bending, and air V-bending processes. FEA of bending is found to be very sensitive to many physical and numerical parameters, and FE models must be computationally efficient for practical use. The reported work shows 3D FEA of the air bending process using HyperForm LS-DYNA and its comparison with published 3D FEA results of air bending in ANSYS LS-DYNA and with experimental results. Observing the planar symmetry and based on the assumption of a plane strain condition, the air bending problem was modeled in 2D with a symmetric boundary condition in the width direction. Stress-strain results of the 2D FEA were compared with the 3D FEA results and experiments. Simplifying the air bending problem from 3D to 2D resulted in a tremendous reduction in solution time with only a marginal effect on the stress-strain results. FE model simplification by studying the problem symmetry is a more efficient and practical approach for the solution of more complex, large-dimension, slow forming processes.

Bendability is constrained by the maximum top roller load imparting capacity of the machine; the maximum load is encountered during the edge pre-bending stage of roller bending. The capacity of a 3-roller plate bending machine is specified by the maximum thickness and minimum shell diameter combinations that can be pre-bent for a given plate material of maximum width. The commercially available plate width, or the width of plate that can be accommodated on the machine, decides the maximum rolling width. Original equipment manufacturers (OEMs) provide the machine capacity chart based on a reference material, considering a perfectly plastic material model. The reported work shows the bendability analysis of a heavy-duty 3-roller plate bending machine. The input variables for industry are plate thickness, shell diameter, and material property parameters, as the rest is fixed by the design. Analytical models of equivalent thickness, equivalent width, and maximum width based on a power law material model were derived to study the bendability. The equation for maximum width provides the bendability for a designed configuration, i.e., combinations of material properties, shell diameter, and thickness within the machine limitations. Equivalent thicknesses based on the perfectly plastic and power law material models were compared for four different material grades of C-Mn steel in order to predict the bendability. The effect of top roller offset on the bendability at the maximum top roller load imparting capacity is also reported.

Text Mining is an important step of the Knowledge Discovery process. It is used to extract hidden information from non-structured or semi-structured data. This aspect is fundamental because much of the Web's information is semi-structured due to the nested structure of HTML code, much of it is linked, and much of it is redundant. Web Text Mining supports the whole knowledge mining process in the mining, extraction, and integration of useful data, information, and knowledge from Web page contents.
In this paper, we present a Web Text Mining process able to discover knowledge in a distributed and heterogeneous multi-organization environment. The Web Text Mining process is based on a flexible architecture and is implemented in four steps able to examine web content and to extract useful hidden information through mining techniques. Our Web Text Mining prototype starts from the recovery of Web job offers from which, through a Text Mining process, useful information for their fast classification is extracted; this information essentially consists of the job location and the required skills.

In this study, the stability boundary of a Functionally Graded (FG) panel under heating and supersonic airflow is investigated. Material properties are assumed to be temperature dependent, and a simple power law distribution is taken. First-order shear deformation theory (FSDT) of plates is applied to model the panel, and the von Karman strain-displacement relations are adopted to account for the geometric nonlinearity due to large deformation. Further, first-order piston theory is used to model the supersonic aerodynamic load acting on the panel, and a Rayleigh damping coefficient is used to represent the structural damping. In order to find the critical speed, a linear flutter analysis of FG panels is performed. Numerical results are compared with previous works, and the present results for the temperature-dependent material are discussed in detail for the stability boundary of the panel with various volume fractions and aerodynamic pressures.

In this paper, a tuning-fork-type structure of an Ultra Wideband (UWB) antenna is proposed. The antenna offers excellent performance for UWB systems, ranging from 3.7 GHz to 13.8 GHz, and exhibits a 10 dB return loss bandwidth over the entire frequency band. The rectangular patch antenna is designed on an FR4 substrate and fed with a 50-ohm microstrip line, with the width of the partial ground and the width and position of the feedline optimized for UWB operation. The rectangular patch is then modified into the tuning fork structure while maintaining the UWB frequency range.

In this paper, we employ a linear programming approach to propose a new interactive broadcast method. In our method, a film S is divided into n equal parts and broadcast over k channels. The user simultaneously downloads these segments from the k channels into the user's set-top box (STB) and plays them in order. Our method assumes that the initial p segments will not have fast-forwarding capabilities. Every time the user wants to initiate d-times fast-forwarding, according to our broadcasting strategy, the necessary segments are either already saved in the user's STB or are downloaded just in time for playing. The proposed broadcasting strategy allows the user not only to pause and rewind, but also to fast-forward.

People have a habitual pitch level that is generally used when speaking. However, this pitch changes irregularly in the presence of noise, so it is useful to estimate the SNR of a speech signal from its pitch. In this paper, we obtain the energy of the input speech signal and detect a stationary region of voiced speech. We then obtain the pitch period using the NAMDF for the stationary region, where the pitch does not vary rapidly. After obtaining the pitch, each frame is divided by the pitch period and the likelihood of the closed pitch is estimated. We propose a new parameter, the NLF, to estimate the SNR of the received speech signal. The NLF is derived from the correlation of neighboring pitch periods and is obtained for each stationary region of voiced speech. Finally, we confirm the good performance of the SNR estimation of the received input speech in the presence of noise.
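
As a rough illustration of the AMDF family of pitch detectors referred to above (the paper's exact NAMDF normalization is not given, so the per-lag averaging used here, and the sampling rate and lag search range, are assumptions):

```python
import numpy as np

def amdf_pitch_period(frame, min_lag=20, max_lag=60):
    """Estimate the pitch period (in samples) of a voiced frame with
    the Average Magnitude Difference Function; dividing by the number
    of terms (the mean) keeps all lags comparable, as in NAMDF.
    The lag range would normally be set from the expected pitch range."""
    n = len(frame)
    best_lag, best_d = min_lag, np.inf
    for lag in range(min_lag, max_lag + 1):
        d = np.mean(np.abs(frame[:n - lag] - frame[lag:]))
        if d < best_d:
            best_lag, best_d = lag, d
    return best_lag

# Synthetic voiced frame: a 200 Hz tone at 8 kHz -> 40-sample period.
fs, f0 = 8000, 200
t = np.arange(1024) / fs
frame = np.sin(2 * np.pi * f0 * t)
print(amdf_pitch_period(frame))  # 40
```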

As many scientific applications require large-scale data processing, the importance of parallel I/O has been increasingly recognized. Collective I/O is one of the notable features of parallel I/O and enables application programmers to easily handle their large data volumes. In this paper, we measured and analyzed the performance of original collective I/O and of the subgroup method, a way of using the collective I/O of MPI effectively. From the experimental results, we found that the subgroup method showed good performance for small data sizes.

Electronic voting (e-voting) over the Internet has recently been carried out in some nations and regions. It removes the spatial restriction that a voter must visit the polling place in person, but Internet e-voting requires a computer with an Internet connection, as well as an access code obtained through prior voter registration. To minimize these disadvantages, we propose a method in which a voter who holds a wireless certificate issued in advance uses his or her own cellular phone for e-voting, without any special registration for the vote. Our proposal allows a voter to cast a vote in a simple and convenient way without limits of time and location, thereby increasing the voting rate, while also ensuring confidentiality and anonymity.

Cell phone forensics, which acquires and analyzes the data in a cellular phone, is nowadays used by national investigation organizations and private companies. There are two methods for collecting cellular phone flash memory data. The first is a logical method, which acquires files and directories from the file system of the cell phone flash memory. The second obtains all data from a bit-by-bit copy of the entire physical memory using a low-level access method. In this paper, we describe a forensic tool that acquires cell phone flash memory data using a logical-level approach. With our tool, we can acquire the EFS file system and peek at memory data in an arbitrary region of a Korean CDMA cell phone.
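As a generic illustration of region-wise inspection of acquired data (the tool's actual commands and phone protocol are not described above), the sketch below reads an arbitrary byte region from a dump file:

```python
import os
import tempfile

def peek_region(image_path, offset, length):
    """Read `length` bytes starting at `offset` from an acquired flash
    memory image on disk. A generic illustration of inspecting an
    arbitrary region of a dump; not the tool's actual interface."""
    with open(image_path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Demo on a scratch file standing in for an acquired image.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(bytes(range(256)))
region = peek_region(path, 16, 8)   # 8 bytes starting at offset 16
os.remove(path)
```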

Eigenvector methods are gaining increasing acceptance in the area of spectrum estimation. This paper presents a successful attempt at testing and evaluating the performance of two of the most popular subspace techniques in determining the parameters of multiexponential signals with real decay constants buried in noise. In particular, the MUSIC (Multiple Signal Classification) and minimum-norm techniques are examined. It is shown that these methods perform almost equally well on multiexponential signals, with MUSIC displaying better-defined peaks.

In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies a perceptual weighting to the wavelet coefficients before SPIHT encoding, in order to reach a targeted bit rate with improved perceptual quality around a given fixation point, which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS); this metric plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception. The coder weights the wavelet coefficients according to this model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce high frequencies in peripheral regions; 2) luminance and contrast masking; and 3) the contrast sensitivity function (CSF), to obtain the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique, and the experimental results show that our coder performs very well in terms of quality measurement.
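A minimal sketch of foveation weighting, assuming a simple exponential falloff of sensitivity with eccentricity (the paper's actual foveation-masking, luminance/contrast-masking, and CSF models are not reproduced here):

```python
import numpy as np

def foveation_weights(shape, fixation, viewing_distance_px, alpha=0.1):
    """Build a per-pixel foveation weight map that decays with eccentricity
    from the fixation point.

    Eccentricity is the visual angle (in degrees) of each pixel as seen
    from `viewing_distance_px` (viewing distance expressed in pixel units).
    The exponential falloff with rate `alpha` is an illustrative stand-in
    for a full foveation-masking model."""
    rows, cols = np.indices(shape)
    fy, fx = fixation
    dist = np.hypot(rows - fy, cols - fx)                    # pixels from fixation
    ecc = np.degrees(np.arctan(dist / viewing_distance_px))  # eccentricity in degrees
    return np.exp(-alpha * ecc)

# Weight map for a 64x64 image fixated at its center.
w = foveation_weights((64, 64), fixation=(32, 32), viewing_distance_px=256)
```

Such a map, sampled at each coefficient's spatial location per subband, could scale the wavelet coefficients before SPIHT ordering so that bits are spent preferentially inside the ROI.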

To model the human visual system (HVS) in the region of interest, we propose a new objective metric adapted to quality measurement of wavelet foveation-based image compression, which exploits a foveation filter implemented in the DWT domain and is based on the point and region of fixation of the human eye. This model is used to predict the visible differences between an original and a compressed image with respect to this fixation region, and yields an adapted, local error measure by removing all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images, all of which have the same local peak signal-to-noise ratio (PSNR) but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.
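As a toy stand-in for a fixation-aware quality measure (not the FWVDP model itself), a foveation-weighted PSNR down-weights peripheral errors so that two images with equal global error can score differently depending on where the error falls:

```python
import numpy as np

def foveated_psnr(original, compressed, weights, peak=255.0):
    """PSNR with per-pixel squared errors weighted by a foveation map
    (1 at the fixation point, decaying toward the periphery), so that
    peripheral errors count less. An illustrative simplification."""
    err = (np.asarray(original, float) - np.asarray(compressed, float)) ** 2
    wmse = np.sum(weights * err) / np.sum(weights)
    return 10.0 * np.log10(peak ** 2 / wmse)

# Demo: an equal-magnitude error is penalized more at the fixation point
# (image center) than in the periphery.
rows, cols = np.indices((32, 32))
w = np.exp(-np.hypot(rows - 16, cols - 16) / 8.0)  # toy foveation map
orig = np.zeros((32, 32))
at_center, at_corner = orig.copy(), orig.copy()
at_center[16, 16] = 50.0   # error at fixation
at_corner[0, 0] = 50.0     # same error, peripheral
```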

Traditional wind tunnel models are meticulously machined from metal in a process that can take several months. While very precise, the manufacturing process is too slow to assess a new design's feasibility quickly. Rapid prototyping technology makes the concurrent study of air vehicle concepts via computer simulation and in the wind tunnel possible. This paper describes the effect of the layer thickness of models produced with rapid prototyping on the aerodynamic coefficients of constructed wind tunnel testing models. Three models were evaluated: the first with a 0.05 mm layer thickness and a horizontal-plane surface roughness of 0.1 μm (Ra); the second with a 0.125 mm layer thickness and 0.22 μm (Ra); and the third with a 0.15 mm layer thickness and 4.6 μm (Ra). These models were fabricated from Somos 18420 by stereolithography (SLA). A wing-body-tail configuration was chosen for the study. Testing covered the Mach range of 0.3 to 0.9 at an angle-of-attack range of -2° to +12° at zero sideslip. Coefficients of normal force, axial force, pitching moment, and lift over drag are shown at each of these Mach numbers. Results from this study show that layer thickness does have an effect on the aerodynamic characteristics in general, although the data differ between the three models by less than 5%. Layer thickness has a greater effect on the aerodynamic characteristics as the Mach number decreases, and the greatest effect on the axial force coefficient and its derivative coefficients.

Traditionally, wind tunnel models are made of metal and are very expensive. In recent years, everyone has been looking for ways to do more with less. Under the right test conditions, a rapid prototype part can be tested in a wind tunnel. Using rapid prototype manufacturing techniques and materials in this way significantly reduces the time and cost of producing wind tunnel models. This study examined fused deposition modeling (FDM) and its ability to make components for wind tunnel models in a timely and cost-effective manner. This paper discusses the application of a wind tunnel model configuration constructed using FDM for transonic wind tunnel testing. A study was undertaken comparing a rapid prototyping model constructed with FDM technologies using polycarbonate to a standard machined steel model. Testing covered the Mach range of 0.3 to 0.75 at an angle-of-attack range of -2° to +12°. Results from this study show relatively good agreement between the two models, and the rapid prototyping method reduces the time and cost of producing wind tunnel models. It can be concluded from this study that wind tunnel models constructed with rapid prototyping methods and materials can be used in wind tunnel testing for initial baseline aerodynamic database development.