Data caching is an important technique for improving data availability and access latency in
mobile computing environments, particularly because these environments are characterized by
narrow-bandwidth wireless links and frequent disconnections. The cache replacement policy plays
a vital role in improving performance in a cached mobile environment, since the amount of data
that can be stored in a client cache is small. In this paper we review some of the well-known
cache replacement policies proposed for mobile data caches. We compare these policies after
classifying them based on the criteria used for evicting documents. In addition, this paper
suggests some alternative techniques for cache replacement.
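As a concrete illustration of the kind of policy such a survey covers, the classic Least Recently Used (LRU) eviction rule can be sketched as follows. This is a minimal Python model for illustration, not any specific policy from the surveyed papers:

```python
from collections import OrderedDict

class LRUCache:
    """Least-Recently-Used eviction: a classic baseline among cache
    replacement policies for small mobile client caches."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # key -> document, oldest first

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, doc):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = doc
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```

Real mobile policies typically extend this by also weighing document size, retrieval cost over the wireless link, and expected disconnection periods.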

This paper proposes a content based image retrieval (CBIR) system using the local colour and texture features of selected image sub-blocks and the global colour and shape features of the image. The image sub-blocks are roughly identified by segmenting the image into partitions of different configurations, finding the edge density in each partition using edge thresholding and morphological dilation, and finding the corner density in each partition. The colour and texture features of the identified regions are computed from the histograms of the quantized HSV colour space and the Gray Level Co-occurrence Matrix (GLCM) respectively. A combined colour and texture feature vector is computed for each region. The shape features are computed from the Edge Histogram Descriptor (EHD). The Euclidean distance measure is used for computing the distance between the features of the query and target images. Experimental results show that the proposed method provides better retrieval results than some of the existing methods.
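The final ranking step described above, comparing combined feature vectors with the Euclidean distance, can be sketched as follows. Feature extraction itself is omitted; the vectors are assumed to be precomputed:

```python
import math

def euclidean_distance(query_features, target_features):
    """Distance between two equal-length feature vectors, as used to
    compare the combined colour/texture/shape descriptors."""
    return math.sqrt(sum((q - t) ** 2
                         for q, t in zip(query_features, target_features)))

def rank_images(query, database):
    """Return (image_id, distance) pairs sorted nearest-first.
    `database` maps image ids to precomputed feature vectors."""
    return sorted(((img_id, euclidean_distance(query, feats))
                   for img_id, feats in database.items()),
                  key=lambda pair: pair[1])
```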

In Wireless Sensor Networks (WSN), neglecting the effects of varying channel quality can lead to
unnecessary wastage of precious battery resources, which in turn can result in the rapid
depletion of sensor energy and the partitioning of the network. Fairness is a critical issue
when accessing a shared wireless channel, and fair scheduling must be employed to provide the
proper flow of information in a WSN. In this paper, we develop a channel-adaptive MAC protocol
with a traffic-aware dynamic power management algorithm for efficient packet scheduling and
queuing in a sensor network, with the time-varying characteristics of the wireless channel also
taken into consideration. The proposed protocol calculates a combined weight value based on the
channel state and link quality. Transmission is then allowed only for those nodes with weights
greater than a minimum quality threshold; nodes attempting to access the wireless medium with a
low weight are allowed to transmit only when their weight becomes high. This results in many
poor-quality nodes being deprived of transmission for a considerable amount of time. To avoid
buffer overflow and to achieve fairness for the poor-quality nodes, we design a load prediction
algorithm. We also design a traffic-aware dynamic power management scheme to minimize energy
consumption by turning off the radio interface of unnecessary nodes that are not included in the
routing path. Simulation results show that our proposed protocol achieves higher throughput and
fairness while reducing delay.
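The weight-threshold admission decision can be sketched as below. The linear blend of channel state and link quality, the `alpha` parameter and the threshold value are illustrative assumptions, since the abstract does not give the exact weight formula:

```python
def combined_weight(channel_state, link_quality, alpha=0.5):
    """Combine channel state and link quality (both normalized to [0, 1])
    into a single weight. The linear blend is an assumed formulation."""
    return alpha * channel_state + (1 - alpha) * link_quality

def may_transmit(node, threshold=0.4):
    """A node transmits only when its combined weight meets the minimum
    quality threshold; low-weight nodes defer until conditions improve."""
    w = combined_weight(node["channel_state"], node["link_quality"])
    return w >= threshold
```

Deferred nodes are the ones the load prediction algorithm must protect from buffer overflow.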

Description:

IJCSNS International Journal of Computer Science and Network Security, VOL.10 No.7, July 2010

Suffix separation plays a vital role in improving the quality of
training in Statistical Machine Translation from English into Malayalam.
The morphological richness and the agglutinative nature of Malayalam make it
necessary to retrieve the root word from its inflected form in the training
process. The suffix separation process accomplishes this task by scrutinizing the
Malayalam words and applying sandhi rules. In this paper, various
handcrafted rules designed for the suffix separation process in English-Malayalam
SMT are presented. A classification of these rules is made based on
the Malayalam syllable preceding the suffix in the inflected form of the word
(check_letter). The suffixes beginning with vowel sounds, like ആല, ഉെെ, ഇല
etc., are mainly considered in this process. By examining the check_letter in a
word, the suffix separation rules can be applied directly to extract the root
words. The quick look-up table provided in this paper can be used as a guideline
for implementing suffix separation in the Malayalam language.
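A minimal sketch of rule-table-driven suffix separation keyed on the check_letter is given below. The rule entries use transliterated placeholders (e.g. "marattil" -> "maram"), not the paper's actual Malayalam rules, so treat the table contents as illustrative:

```python
# Hypothetical transliterated rule table: for a syllable preceding the
# suffix (check_letter), give the suffix to strip and the string that
# restores the root's ending. Entries are illustrative, not the real rules.
RULES = {
    "tt": [("il", "m")],    # assumed locative rule: marattil -> maram
    "y":  [("ude", "")],    # assumed genitive rule: ammayude -> amma
}

def separate_suffix(word):
    """Return (root, suffix) using the first matching sandhi rule,
    or (word, None) if no rule applies."""
    for check_letter, entries in RULES.items():
        for suffix, restore in entries:
            pattern = check_letter + suffix
            if word.endswith(pattern):
                root = word[: -len(pattern)] + restore
                return root, suffix
    return word, None
```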

Code clones are portions of source code that are similar to the
original program code. The presence of code clones is considered a bad
feature of software, as it makes software maintenance difficult.
Methods for code clone detection have gained immense significance in
recent years, as they play a significant role in engineering
applications such as program code analysis, program understanding,
plagiarism detection, error detection, code compaction and many
similar tasks. Despite these facts, several features of code clones,
if properly utilized, can make the software development process
easier. In this work, we point out one such feature of code clones
which highlights the relevance of code clones in test sequence
identification. Here program slicing is used in code clone detection.
In addition, a classification of code clones is presented and the
benefit of using program slicing in code clone detection is also
discussed.
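As a simple stand-in for the detection step (the paper itself uses program slicing), a line-window hashing detector illustrates how near-identical fragments can be located despite formatting differences:

```python
import hashlib

def normalize(line):
    """Crude normalization: drop whitespace and case, so formatting-only
    differences do not hide a clone."""
    return "".join(line.split()).lower()

def find_clones(source_a, source_b, window=3):
    """Report (index_a, index_b) pairs of `window`-line fragments that
    the two sources share after normalization. A hashing-based sketch,
    not the slicing-based detector described above."""
    def windows(src):
        lines = [normalize(l) for l in src.splitlines() if l.strip()]
        return {hashlib.sha1("\n".join(lines[i:i + window]).encode()).hexdigest(): i
                for i in range(len(lines) - window + 1)}
    wa, wb = windows(source_a), windows(source_b)
    return [(wa[h], wb[h]) for h in wa.keys() & wb.keys()]
```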


Information and Communication Technologies (WICT), 2011 World Congress on

Speech processing and consequent recognition are important areas of Digital Signal Processing,
since speech allows people to communicate more naturally and efficiently. In this work, a
speech recognition system is developed for recognizing digits in Malayalam. To recognize
speech, features must be extracted from it, and hence the feature extraction method plays an
important role in speech recognition. Here, front-end processing for extracting the features is
performed using two wavelet-based methods, namely Discrete Wavelet Transforms (DWT) and
Wavelet Packet Decomposition (WPD). A Naive Bayes classifier is used for classification.
After classification using the Naive Bayes classifier, DWT produced a recognition accuracy of
83.5% and WPD produced an accuracy of 80.7%. This paper is intended to devise a new
feature extraction method that improves the recognition accuracy. Hence, a new
method called Discrete Wavelet Packet Decomposition (DWPD) is introduced, which utilizes
the hybrid features of both DWT and WPD. The performance of this new approach is evaluated,
and it produced an improved recognition accuracy of 86.2% with the Naive Bayes classifier.
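The DWT front end can be illustrated with a single level of the Haar wavelet transform, the simplest member of the wavelet family; the actual system may use different wavelets and several decomposition levels:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: split a signal
    into approximation (low-pass) and detail (high-pass) coefficients.
    A minimal stand-in for the DWT feature-extraction stage."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail
```

In WPD, the same split is applied recursively to the detail band as well, which is what the hybrid DWPD scheme exploits.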

Severe local storms, including tornadoes, damaging hail and wind gusts, frequently occur over the eastern and northeastern states of India during the pre-monsoon season (March-May). Forecasting thunderstorms is one of the most difficult tasks in weather prediction, due to their rather small spatial and temporal extent and the inherent non-linearity of their dynamics and physics. In this paper, sensitivity experiments are conducted with the WRF-NMM model to test the impact of convective parameterization schemes on simulating severe thunderstorms that occurred over Kolkata on 20 May 2006 and 21 May 2007, and the model results are validated against observations. In addition, a simulation without a convective parameterization scheme was performed for each case to determine whether the model could simulate the convection explicitly. A statistical analysis based on mean absolute error, root mean square error and correlation coefficient is performed to compare the simulated and observed data across the different convective schemes. This study shows that the prediction of thunderstorm-affected parameters is sensitive to the convective scheme. The Grell-Devenyi cloud ensemble convective scheme simulated the thunderstorm activities well in terms of time, intensity and region of occurrence compared with the other convective schemes and the explicit run.
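The three verification statistics named above can be computed as in this plain-Python sketch:

```python
import math

def verification_stats(simulated, observed):
    """Mean absolute error, root mean square error and Pearson
    correlation coefficient between simulated and observed series."""
    n = len(simulated)
    mae = sum(abs(s - o) for s, o in zip(simulated, observed)) / n
    rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n)
    mean_s = sum(simulated) / n
    mean_o = sum(observed) / n
    cov = sum((s - mean_s) * (o - mean_o) for s, o in zip(simulated, observed))
    var_s = sum((s - mean_s) ** 2 for s in simulated)
    var_o = sum((o - mean_o) ** 2 for o in observed)
    corr = cov / math.sqrt(var_s * var_o)
    return mae, rmse, corr
```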

Speech is a natural mode of communication for people, and speech recognition is an intensive area of research due to its versatile applications. This paper presents a comparative study of various wavelet-based feature extraction methods for recognizing isolated spoken words. Isolated words from Malayalam, one of the four major Dravidian languages of southern India, are chosen for recognition. This work includes two speech recognition methods: the first is a hybrid approach combining Discrete Wavelet Transforms and Artificial Neural Networks, and the second uses a combination of Wavelet Packet Decomposition and Artificial Neural Networks. Features are extracted using Discrete Wavelet Transforms (DWT) and Wavelet Packet Decomposition (WPD). Training, testing and pattern recognition are performed using Artificial Neural Networks (ANN). The proposed method is implemented for 50 speakers uttering 20 isolated words each. The experimental results show the efficiency of these techniques in recognizing speech.

This paper compares statistical techniques for paraphrase
identification with a semantic technique for paraphrase
identification. The statistical techniques used for comparison are
word-set and word-order based methods, whereas the semantic technique
used is the WordNet similarity matrix method described by Stevenson
and Fernando in [3].
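A word-set statistical measure of the kind compared in the paper can be sketched as a Jaccard overlap of word sets; the paper's exact formulation and decision threshold may differ:

```python
def word_set_similarity(sentence_a, sentence_b):
    """Jaccard overlap of the two sentences' word sets: a simple
    word-set statistical measure (an illustrative formulation)."""
    a = set(sentence_a.lower().split())
    b = set(sentence_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def is_paraphrase(sentence_a, sentence_b, threshold=0.5):
    """Classify a sentence pair as paraphrase when the overlap exceeds
    a threshold (the 0.5 default is an assumption)."""
    return word_set_similarity(sentence_a, sentence_b) >= threshold
```

The semantic alternative replaces exact word matches with WordNet-derived similarity scores between word pairs, which is what the similarity matrix method provides.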

This paper proposes a region based image retrieval system using the
local colour and texture features of image sub-regions. The regions of
interest (ROI) are roughly identified by segmenting the image into
fixed partitions, finding the edge map and applying morphological
dilation. The colour and texture features of the ROIs are computed
from the histograms of the quantized HSV colour space and the Gray
Level Co-occurrence Matrix (GLCM) respectively. Each ROI of the query
image is compared with the same number of ROIs of the target image,
arranged in descending order of white pixel density in the regions,
using the Euclidean distance measure for similarity computation.
Preliminary experimental results show that the proposed method
provides better retrieval results than some of the existing methods.


This paper presents a robust Content Based Video Retrieval (CBVR)
system. The system retrieves similar videos based on a local feature
descriptor called SURF (Speeded Up Robust Features). The high
dimensionality of SURF-like feature descriptors causes huge storage
consumption during indexing of video information. To achieve
dimensionality reduction on the SURF feature descriptor, the system
employs a stochastic dimensionality reduction method and thus provides
model data for the videos. On retrieval, the model data of the test
clip is matched to its similar videos using a minimum distance
classifier. The performance of the system is evaluated using two
different minimum distance classifiers during the retrieval stage. The
experimental analyses performed on the system show that it achieves a
retrieval performance of 78%. The system also analyses the performance
efficiency of the low-dimensional SURF descriptor.
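The dimensionality reduction and minimum distance classification stages can be sketched as below, assuming Gaussian random projection as the stochastic reduction method (the system's specific choice is not stated in the abstract):

```python
import math
import random

def random_projection(vector, target_dim, seed=0):
    """Stochastic dimensionality reduction by Gaussian random projection:
    project a high-dimensional descriptor onto `target_dim` random
    directions. A common choice, used here as an illustrative stand-in."""
    rng = random.Random(seed)  # fixed seed -> same projection matrix
    return [sum(rng.gauss(0, 1) * v for v in vector) / math.sqrt(target_dim)
            for _ in range(target_dim)]

def nearest_model(query_model, video_models):
    """Minimum distance classifier: pick the stored video whose model
    data is closest (Euclidean) to the test clip's model."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(video_models, key=lambda vid: dist(query_model, video_models[vid]))
```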


2013 Third International Conference on Advances in Computing and Communications

Due to advances in mobile devices and wireless networks, mobile cloud computing, which
combines mobile computing and cloud computing, has gained momentum since 2009. The
characteristics of mobile devices and wireless networks make the implementation of mobile cloud
computing more complicated than for fixed clouds. This paper lists some of the major issues in
mobile cloud computing. One of the key issues is the end-to-end delay in servicing a request.
Data caching is one of the techniques widely used in wired and wireless networks to improve
data access efficiency. In this paper we explore the possibility of a cooperative caching
approach to enhance data access efficiency in mobile cloud computing. The proposed approach is
based on cloudlets, one of the architectures designed for mobile cloud computing.


Data caching can remarkably improve the efficiency of information
access in a wireless ad hoc network by reducing access latency and
bandwidth usage. The cache placement problem seeks to minimize the
total data access cost in ad hoc networks with multiple data items.
Ad hoc networks are multi-hop networks without a central base station
and are resource constrained in terms of channel bandwidth and battery
power. Data caching can reduce the communication cost in terms of
bandwidth as well as battery energy. As network nodes have limited
memory, cache placement is a vital issue. This paper studies the
existing cooperative caching techniques and their suitability for
mobile ad hoc networks.

In this paper a method of copy detection in short Malayalam text passages is proposed. Given two passages, one as the source text and another as the suspect text, it is determined whether the second passage is a plagiarized version of the source. An algorithm for plagiarism detection using the n-gram model for word retrieval is developed, and trigrams are found to be the best model for comparing Malayalam text. Based on the probability and resemblance measures calculated from the n-gram comparison, the text is categorized using a threshold. Texts are compared by variable-length n-gram (n = {2, 3, 4}) comparisons. The experiments show that the trigram model gives acceptable average performance at an affordable cost in terms of complexity.
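The n-gram resemblance measure can be sketched as the Jaccard overlap of word n-gram sets; the paper's exact probability measure and threshold may differ:

```python
def ngrams(words, n):
    """Set of word n-grams of a token list."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def resemblance(source_text, suspect_text, n=3):
    """Jaccard overlap of the two passages' word n-gram sets
    (trigrams by default, the best-performing model above)."""
    a = ngrams(source_text.split(), n)
    b = ngrams(suspect_text.split(), n)
    return len(a & b) / len(a | b) if a | b else 0.0

def is_copied(source_text, suspect_text, threshold=0.3, n=3):
    """Flag the suspect passage when resemblance exceeds a threshold
    (the 0.3 default is an assumption)."""
    return resemblance(source_text, suspect_text, n) >= threshold
```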

Decimal multiplication is an integral part of financial,
commercial, and internet-based computations. The basic building block
of a decimal multiplier is a single digit multiplier. It accepts two
Binary Coded Decimal (BCD) inputs and gives a product in the range
[0, 81] represented by two BCD digits. A novel design for single digit
decimal multiplication that reduces the critical path delay and area
is proposed in this research. Out of the 256 possible combinations of
the 8-bit input, only one hundred are valid BCD inputs, and of these
only four combinations require a full 4 x 4 multiplication; the
remaining combinations need smaller multiplications. The proposed
design makes use of this property. The design leads to a more regular
VLSI implementation and does not require special registers for storing
easy multiples. It is a fully parallel multiplier utilizing only
combinational logic, and is extended to a Hex/Decimal multiplier that
gives either a decimal or a binary output. The accumulation of partial
products generated by the single digit multipliers is done by an array
of multi-operand BCD adders for an (n-digit x n-digit) multiplication.

The demand for new telecommunication services requiring higher
capacities, data rates and different operating modes has motivated the
development of new-generation multi-standard wireless transceivers. A
multi-standard design often involves extensive system-level analysis
and architectural partitioning, typically requiring extensive
calculations. In this research, a decimation filter design tool
covering the wireless communication standards GSM, WCDMA, WLANa,
WLANb, WLANg and WiMAX is developed in MATLAB® using the GUIDE
environment for visual analysis. The user can select a required
wireless communication standard and obtain the corresponding
multistage decimation filter implementation using this toolbox. The
toolbox helps the user or design engineer perform a quick design and
analysis of decimation filters for multiple standards without
extensive calculation of the underlying methods.
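A single decimation stage (filter, then downsample) can be sketched as below with a simple boxcar filter. Real multistage designs use properly designed CIC and FIR filters per standard; this only illustrates the cascaded filter-then-downsample structure:

```python
def decimate(samples, factor):
    """One decimation stage: average each block of `factor` samples
    (a crude boxcar low-pass filter) and downsample by `factor`."""
    trimmed = samples[: len(samples) - len(samples) % factor]
    return [sum(trimmed[i:i + factor]) / factor
            for i in range(0, len(trimmed), factor)]

def multistage_decimate(samples, factors):
    """Cascade several decimation stages, e.g. factors=[4, 2] for an
    overall rate reduction of 8, as a multistage design would."""
    for f in factors:
        samples = decimate(samples, f)
    return samples
```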

This paper presents the design and development of a frame based
approach for a speech to sign language machine translation system in
the domain of railways and banking. This work aims to utilize the
capability of Artificial Intelligence for the benefit of physically
challenged, deaf-mute people. Our work concentrates on the sign
language used by the deaf community of the Indian subcontinent, which
is called Indian Sign Language (ISL). The input to the system is the
clerk's speech, and the output is a 3D virtual human character playing
the signs for the uttered phrases. The system builds up the 3D
animation from pre-recorded motion capture data. Our work proposes to
build a Malayalam to ISL translation system.

In this paper, moving flock patterns are mined from spatio-temporal
datasets by incorporating a clustering algorithm. A flock is defined
as a set of entities that move together for a certain continuous
amount of time. Finding moving flock patterns using clustering
algorithms is a promising method for discovering frequent movement
patterns in large trajectory datasets. In this approach, the SPatial
clusteRing algoRithm thrOugh sWarm intelligence (SPARROW) is the
clustering algorithm used. The advantage of the SPARROW algorithm is
that it can effectively discover clusters of widely varying sizes and
shapes in large databases. Variations of the proposed method are
addressed, and the experimental results show that the problems of
scalability and duplicate pattern formation are addressed. The method
also reduces the number of patterns produced.
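A brute-force version of flock mining (checking every candidate group of entities directly, rather than clustering with SPARROW) can be sketched as follows; it is exponential in the number of entities, which is exactly the scalability problem a clustering algorithm addresses:

```python
from itertools import combinations

def together(p, q, radius):
    """True when two 2D positions are within `radius` of each other."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2

def find_flocks(trajectories, radius, min_size, min_duration):
    """Naive flock mining: for every candidate group of at least
    `min_size` entities, check that all pairs stay within `radius` for
    at least `min_duration` consecutive timesteps. `trajectories` maps
    entity ids to equal-length lists of (x, y) positions."""
    entities = sorted(trajectories)
    steps = len(trajectories[entities[0]])
    flocks = []
    for size in range(min_size, len(entities) + 1):
        for group in combinations(entities, size):
            run = 0
            for t in range(steps):
                ok = all(together(trajectories[a][t], trajectories[b][t], radius)
                         for a, b in combinations(group, 2))
                run = run + 1 if ok else 0
                if run >= min_duration:
                    flocks.append(group)
                    break
    return flocks
```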

The evolution of wireless sensor network technology has enabled us to develop advanced systems for real-time monitoring. In the present scenario, wireless sensor networks are increasingly being used for precision agriculture. The advantages of using wireless sensor networks in agriculture are distributed data collection and monitoring, and the monitoring and control of climate, irrigation and nutrient supply, thereby decreasing the cost of production and increasing production efficiency. This paper describes the development and deployment of a wireless sensor network for crop monitoring in the paddy fields of Kuttanad, a region of Kerala, the southern state of India.