DHS IEEE 2017 / 18 Projects +91 9845166723

DHS Projects develops the latest IEEE projects for PhD, M.Tech, ME, B.Tech, BE, MCA, and MSc IT / CS / EC students across various domains. We develop projects in NS2, Java, J2EE, J2ME, Android, .NET (C#, ASP.NET), and Embedded System technologies.
As the developers, we assure students of excellent, quality projects that will help them score high marks in their exams.
If you want your base papers developed, please email us.

Thursday, 28 December 2017

Abstract: Nowadays, many people rely on content available in social media when making decisions (e.g. reviews and feedback on a topic or product). The possibility that anybody can leave a review provides a golden opportunity for spammers to write spam reviews about products and services for different interests. Identifying these spammers and the spam content is a hot topic of research, and although a considerable number of studies have been done recently toward this end, the methodologies put forth so far still barely detect spam reviews, and none of them show the importance of each extracted feature type. In this study, we propose a novel framework, named NetSpam, which utilizes spam features to model review datasets as heterogeneous information networks and to map the spam detection procedure into a classification problem in such networks. Using the importance of spam features helps us to obtain better results in terms of different metrics in experiments on real-world review datasets from the Yelp and Amazon websites.
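The abstract's central idea, weighting each spam feature by its estimated importance, can be sketched with a toy scorer. The feature names and weights below are illustrative assumptions, not values from the paper, and the real NetSpam operates on a heterogeneous information network rather than per-review weighted averages.

```python
# Toy feature-weighted spam scoring. Weights stand in for per-feature
# importance; all numbers here are made up for illustration.
FEATURE_WEIGHTS = {
    "burstiness": 0.4,          # many reviews in a short time window
    "rating_deviation": 0.35,   # rating far from the item's average
    "content_similarity": 0.25, # near-duplicate text across reviews
}

def spam_score(review_features):
    """Weighted average of per-feature spam indicators, each in [0, 1]."""
    total = sum(FEATURE_WEIGHTS.values())
    return sum(FEATURE_WEIGHTS[f] * review_features[f]
               for f in FEATURE_WEIGHTS) / total

suspicious = {"burstiness": 0.9, "rating_deviation": 0.8, "content_similarity": 0.95}
normal     = {"burstiness": 0.1, "rating_deviation": 0.2, "content_similarity": 0.05}

assert spam_score(suspicious) > spam_score(normal)
```

A review scoring above some threshold would then be flagged; in the paper, this weighting instead guides classification over the network.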

Friday, 22 December 2017

Abstract:
With the rapid growth in the amount of information, cloud computing servers
need to process and analyze large amounts of high-dimensional and unstructured
data in a timely and accurate manner, which usually requires many query
operations. Due to their simplicity and ease of use, cuckoo hashing schemes
have been widely used in real-world cloud-related applications. However, due to
potential hash collisions, cuckoo hashing suffers from endless loops and high
insertion latency, and even a high risk of reconstruction of the entire hash
table. In order to address these problems, we propose a cost-efficient cuckoo
hashing scheme, called MinCounter. The idea behind MinCounter is to alleviate
the occurrence of endless loops during data insertion by selecting un-busy
kicking-out routes. MinCounter selects the “cold” (infrequently accessed),
rather than random, buckets to handle hash collisions. We further improve the
concurrency of the MinCounter scheme to pursue higher performance and adapt to
concurrent applications. MinCounter has the salient features of offering
efficient insertion and query services and delivering high performance on cloud
servers, as well as enhancing the experience for cloud users. We have
implemented MinCounter in a large-scale cloud testbed and examined its
performance using three real-world traces. Extensive experimental results
demonstrate the efficacy and efficiency of MinCounter.
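The kicking-out idea can be sketched as a two-choice cuckoo table that, on a collision, evicts from the "colder" (less used) candidate bucket instead of a random one. This is a minimal sketch assuming integer keys and two fixed hash functions; the real MinCounter tracks kick-out traffic and also addresses concurrency, which this toy omits.

```python
class MinCounterCuckoo:
    """Toy two-choice cuckoo hash table; integer keys only, for the sketch."""

    def __init__(self, size=8, max_kicks=16):
        self.size = size
        self.max_kicks = max_kicks
        self.slots = [None] * size    # stored keys
        self.counters = [0] * size    # how often each bucket has been touched

    def _h1(self, key):
        return key % self.size

    def _h2(self, key):
        return (key * 5 + 1) % self.size

    def insert(self, key):
        for _ in range(self.max_kicks):
            i, j = self._h1(key), self._h2(key)
            for b in (i, j):
                if self.slots[b] is None:
                    self.slots[b] = key
                    self.counters[b] += 1
                    return True
            # Both candidates occupied: evict from the "colder" bucket,
            # i.e. the one with the smaller usage counter.
            b = i if self.counters[i] <= self.counters[j] else j
            self.slots[b], key = key, self.slots[b]
            self.counters[b] += 1
        return False  # a real scheme would trigger a rehash here

    def lookup(self, key):
        return key in (self.slots[self._h1(key)], self.slots[self._h2(key)])

table = MinCounterCuckoo()
for k in [0, 1, 2, 3, 4, 11]:   # 11 collides and triggers a chain of evictions
    table.insert(k)
assert table.lookup(11) and table.lookup(3)
```

Choosing the colder bucket biases evictions away from hot insertion paths, which is the intuition behind avoiding endless kick-out loops.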

The behaviours of users in LBSNs mainly consist of checking in at POIs, and these check-in behaviours are influenced by a user’s habits and his/her friends. In social networks, social influence is often used to help businesses attract more users. Each target user has a different influence on different POIs in social networks.

This paper selects the list of POIs with the greatest influence to recommend to users. Our goals are to satisfy the target user’s service need and, simultaneously, to promote businesses’ locations (POIs). This paper defines a POI recommendation problem for location promotion.

Additionally, we use submodular properties to solve the optimization problem. Finally, we conduct a comprehensive performance evaluation of our method using two real LBSN datasets. Experimental results show that our proposed method achieves significantly superior POI recommendations compared with other state-of-the-art recommendation approaches in terms of location promotion.

Thursday, 16 November 2017

Abstract: Attribute-based encryption (ABE) has been widely used in
cloud computing where a data provider outsources his/her encrypted data to a
cloud service provider, and can share the data with users possessing specific
credentials (or attributes). However, the standard ABE system does not support
secure deduplication, which is crucial for eliminating duplicate copies of
identical data in order to save storage space and network bandwidth. In this
paper, we present an attribute-based storage system with secure deduplication
in a hybrid cloud setting, where a private cloud is responsible for duplicate
detection and a public cloud manages the storage. Compared with prior data deduplication systems, our system has two advantages. Firstly, it can be used to confidentially share data with users by specifying access policies rather than sharing decryption keys. Secondly, it achieves the standard notion of semantic security for data confidentiality, while existing systems only achieve it by defining a weaker security notion.
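The duplicate-detection role of the private cloud can be illustrated with a content-derived tag: identical uploads map to the same tag, so only one copy needs storing. This is a generic convergent-style sketch assuming SHA-256; the paper's actual scheme combines deduplication with attribute-based encryption and stronger security guarantees than a bare hash.

```python
import hashlib

store = {}  # tag -> stored blob, standing in for the public cloud

def dedup_tag(data: bytes) -> str:
    # Deterministic tag derived from the content, so identical files
    # always produce the same tag.
    return hashlib.sha256(data).hexdigest()

def upload(data: bytes) -> bool:
    """Return True if newly stored, False if detected as a duplicate."""
    tag = dedup_tag(data)
    if tag in store:
        return False
    store[tag] = data
    return True

assert upload(b"report.pdf contents") is True
assert upload(b"report.pdf contents") is False  # duplicate detected
assert upload(b"other file") is True
```

In the hybrid-cloud setting of the paper, the tag check would happen at the private cloud before the (encrypted) blob ever reaches public storage.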

Monday, 6 November 2017

Abstract: This paper discusses the concept of a smart wearable device for young
children. The major advantage of this wearable over others is that it works
with any cell phone: it requires neither an expensive smartphone nor a very
tech-savvy individual to operate. The purpose of this device is to help parents
locate their children with ease. At the moment there are many wearables in the
market which help track the daily activity of children and also help find the
child using the Wi-Fi and Bluetooth services present on the device. But Wi-Fi
and Bluetooth appear to be an unreliable medium of communication between the
parent and child. Therefore, the focus of this paper is an SMS-text-enabled
communication medium between the child's wearable and the parent, as the
environment for GSM mobile communication is present almost everywhere. The
parent can send a text with specific keywords such as "LOCATION",
"TEMPERATURE", "UV", "SOS", "BUZZ", etc., and the wearable device will reply
with a text containing the real-time, accurate location of the child, which
upon tapping will provide directions to the child's location in the Google Maps
app. It will also provide the surrounding temperature and UV radiation index,
so that parents can tell if the temperature or UV radiation is not suitable for
the child. The prime motivation behind this project is that, however important
technology is in our lives, it sometimes can't be trusted, and we always need a
secondary measure at hand. The secondary measure in this project is the people
present in the surroundings of the child, who could instantly react for the
child's safety until the parents arrive, or could contact the parents and help
locate them. This secondary measure was implemented using a bright SOS light
and distress alarm buzzer present on the wearable device which, when activated
by the parents via SMS text, displays the SOS signal brightly and sounds an
alarm which a bystander can easily spot as a sign of distress.
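The keyword protocol described above amounts to a small dispatcher on the wearable. A sketch, with the keywords taken from the abstract; the reply strings and coordinates are purely hypothetical:

```python
def handle_sms(text):
    """Map an incoming SMS keyword to a reply, as the wearable might."""
    handlers = {
        # Hypothetical replies; a real device would read sensors/GPS here.
        "LOCATION": lambda: "https://maps.google.com/?q=12.9716,77.5946",
        "TEMPERATURE": lambda: "Ambient: 24 C",
        "UV": lambda: "UV index: 3 (low)",
        "SOS": lambda: "SOS light activated",
        "BUZZ": lambda: "Distress buzzer sounded",
    }
    keyword = text.strip().upper()
    handler = handlers.get(keyword)
    return handler() if handler else "Unknown command"

assert handle_sms("uv") == "UV index: 3 (low)"
assert handle_sms("ping") == "Unknown command"
```

Normalizing case and whitespace keeps the scheme usable from any basic phone's SMS client.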

Abstract: With the rapid growth of the Internet of Things (IoT), concerns
about the security of IoT devices have become prominent. Several vendors are
producing IP-connected devices for home and small-office networks that often
suffer from flawed security designs and implementations. They also tend to lack
mechanisms for firmware updates or patches that could eliminate security
vulnerabilities. Securing networks where the presence of such vulnerable
devices is a given requires a brownfield approach: applying the necessary
protection measures within the network so that potentially vulnerable devices
can coexist without endangering the security of other devices in the same
network. In this paper, we present IOT SENTINEL, a system capable of
automatically identifying the types of devices being connected to an IoT
network and enabling enforcement of rules for constraining the communications
of vulnerable devices so as to minimize damage resulting from their compromise.
We show that IOT SENTINEL is effective in identifying device types and has
minimal performance overhead.

Abstract: The requirements for new web applications supporting different types
of devices and purposes are continuously growing. This paper first considers
the main advantages of web application development, along with popular
development features covering integration with different technologies. The
integration and application of cloud-based web applications in real scenarios
with different embedded Internet of Things (IoT) devices are then considered
and described. The design and implementation of a cloud-based web application
supporting a vehicle toll payment system using an IoT device is presented,
together with the development framework and the featured, popular technologies
used to realize it. The concept of vehicle toll payment over an online payment
system is also described, as are the processing, monitoring and control of such
payments in the cloud-based web application using IoT devices.

Thursday, 12 January 2017

Abstract: Cloud data owners prefer to outsource documents in an encrypted form
for the purpose of privacy preservation. It is therefore essential to develop
efficient and reliable ciphertext search techniques. One challenge is that the
relationships between documents are normally concealed in the process of
encryption, which leads to significant degradation of search accuracy.
Moreover, the volume of data in data centers has experienced dramatic growth,
making it even more challenging to design ciphertext search schemes that can
provide efficient and reliable online information retrieval over large volumes
of encrypted data. In this paper, a hierarchical clustering method is proposed
to support more search semantics and to meet the demand for fast ciphertext
search within a big data environment. The proposed hierarchical approach
clusters the documents based on a minimum relevance threshold, and then
partitions the resulting clusters into sub-clusters until the constraint on the
maximum cluster size is reached. In the search phase, this approach achieves
linear computational complexity against an exponential increase in the size of
the document collection. In order to verify the authenticity of search results,
a structure called the minimum hash sub-tree is designed in this paper.
Experiments have been conducted using a collection set built from IEEE Xplore.
The results show that with a sharp increase of documents in the dataset, the
search time of the proposed method increases linearly whereas the search time
of the traditional method increases exponentially. Furthermore, the proposed
method has an advantage over the traditional method in the rank privacy and
relevance of retrieved documents.
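The result-verification idea, hashing search results up a tree so a client can check authenticity against a single digest, can be sketched with a plain Merkle-style fold. This is a generic hash tree assuming SHA-256, not the paper's specific minimum hash sub-tree construction.

```python
import hashlib

def _h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Fold a list of result blobs up to a single verification digest."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

docs = [b"doc1", b"doc2", b"doc3"]
root_on_client = merkle_root(docs)
# Untampered results reproduce the root; any substitution changes it.
assert merkle_root(docs) == root_on_client
assert merkle_root([b"doc1", b"docX", b"doc3"]) != root_on_client
```

The client only needs the trusted root digest to detect a server returning altered or dropped results.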

Abstract: Searchable encryption is of increasing interest for protecting data
privacy in secure searchable cloud storage. In this work, we investigate the
security of a well-known cryptographic primitive, namely Public Key Encryption
with Keyword Search (PEKS), which is very useful in many applications of cloud
storage. Unfortunately, it has been shown that the traditional PEKS framework
suffers from an inherent insecurity called the inside Keyword Guessing Attack
(KGA), launched by a malicious server. To address this security vulnerability,
we propose a new PEKS framework named Dual-Server Public Key Encryption with
Keyword Search (DS-PEKS). As another main contribution, we define a new variant
of Smooth Projective Hash Functions (SPHFs), referred to as linear and
homomorphic SPHF (LH-SPHF). We then show a generic construction of secure
DS-PEKS from LH-SPHF. To illustrate the feasibility of our new framework, we
provide an efficient instantiation of the general framework from a DDH-based
LH-SPHF and show that it can achieve strong security against inside KGA.

IEEE 2016: SecRBAC: Secure data in the Clouds

IEEE 2016 Transaction on Cloud Computing

Abstract: Most current security solutions are based on perimeter security.
However, Cloud computing breaks organizational perimeters. When data resides in
the Cloud, it resides outside the organizational bounds. This leads users to a
loss of control over their data and raises reasonable security concerns that
slow down the adoption of Cloud computing. Is the Cloud service provider
accessing the data? Is it legitimately applying the access control policy
defined by the user? This paper presents a data-centric access control solution
with enriched role-based expressiveness in which security is focused on
protecting user data regardless of the Cloud service provider that holds it.
Novel identity-based and proxy re-encryption techniques are used to protect the
authorization model. Data is encrypted and authorization rules are
cryptographically protected to preserve user data against service provider
access or misbehavior. The authorization model provides high expressiveness,
with role hierarchy and resource hierarchy support. The solution takes
advantage of the logic formalism provided by Semantic Web technologies, which
enables advanced rule management such as semantic conflict detection. A
proof-of-concept implementation has been developed and a working prototypical
deployment of the proposal has been integrated with Google services.

Monday, 9 January 2017

Abstract: Keyword-based search in text-rich multi-dimensional datasets
facilitates many novel applications and tools. In this paper, we consider
objects that are tagged with keywords and are embedded in a vector space. For
these datasets, we study queries that ask for the tightest groups of points
satisfying a given set of keywords. We propose a novel method called ProMiSH
(Projection and Multi-Scale Hashing) that uses random projection and hash-based
index structures, and achieves high scalability and speedup. We present an
exact and an approximate version of the algorithm. Our experimental results on
real and synthetic datasets show that ProMiSH achieves up to 60 times speedup
over state-of-the-art tree-based techniques.
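The projection idea behind ProMiSH can be illustrated with sign-based projection hashing: points falling on the same side of a set of hyperplanes share a hash key, so nearby points tend to land in the same bucket. The hyperplanes below are hand-picked rather than random so the example is deterministic; the actual method draws random projections and indexes at multiple scales.

```python
# Hand-picked hyperplanes in 4-D; each contributes one bit of the key.
PLANES = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [1, -1, 0, 0],
    [0, 0, 1, -1],
]

def projection_key(point):
    """Sign pattern of the point against each hyperplane, packed into bits."""
    bits = 0
    for plane in PLANES:
        dot = sum(p * w for p, w in zip(point, plane))
        bits = (bits << 1) | (dot >= 0)
    return bits

a = [1.0, 2.0, 3.0, 4.0]
b = [1.1, 2.1, 2.9, 4.0]   # a nearby point hashes to the same bucket
c = [-3.0, 5.0, 0.0, 0.0]  # a distant point lands elsewhere
assert projection_key(a) == projection_key(b)
assert projection_key(a) != projection_key(c)
```

Candidate groups are then searched only within matching buckets, which is where the scalability comes from.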

Thursday, 5 January 2017

Abstract: In mobile communication, spatial queries pose a
serious threat to user location privacy because the location of a query may
reveal sensitive information about the mobile user. In this paper, we study
approximate k nearest neighbor (kNN) queries where the mobile user queries the
location-based service (LBS) provider about approximate k nearest points of
interest (POIs) on the basis of his current location. We propose a basic
solution and a generic solution for the mobile user to preserve his location
and query privacy in approximate kNN queries. The proposed solutions are mainly
built on the Paillier public-key cryptosystem and can provide both location and
query privacy. To preserve query privacy, our basic solution allows the mobile
user to retrieve one type of POIs, for example, approximate k nearest car
parks, without revealing to the LBS provider what type of points is retrieved.
Our generic solution can be applied to multiple discrete type attributes of
private location-based queries. Compared with existing solutions for kNN
queries with location privacy, our solution is more efficient. Experiments have
shown that our solution is practical for kNN queries.
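The Paillier cryptosystem the solutions build on is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, which is what lets a server compute on encrypted values without seeing them. A textbook-Paillier sketch with deliberately tiny, insecure parameters (real deployments use keys of thousands of bits and random blinding factors):

```python
from math import gcd

# Textbook Paillier with toy primes -- insecure, purely to show the
# additive homomorphism. Requires Python 3.8+ for pow(x, -1, n).
p, q = 17, 19
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)                           # valid since g = n + 1

def encrypt(m, r):
    # r must be coprime with n; fixed here only to keep the demo deterministic
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1 = encrypt(20, 5)
c2 = encrypt(30, 7)
# Multiplying ciphertexts adds plaintexts: E(20) * E(30) decrypts to 50.
assert decrypt((c1 * c2) % n2) == 50
assert decrypt(c1) == 20
```

In the kNN protocol, this property lets distance-related quantities be aggregated under encryption on the LBS side.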

Wednesday, 4 January 2017

Abstract: PassBYOP is a new graphical password scheme for public terminals that
replaces the static digital images typically used in graphical password systems
with personalized physical tokens, herein in the form of digital pictures
displayed on a physical user-owned device such as a mobile phone. Users present
these images to a system camera and then enter their password as a sequence of
selections on live video of the token. Highly distinctive optical features are
extracted from these selections and used as the password. We present three
feasibility studies of PassBYOP examining its reliability, usability, and
security against observation. The reliability study shows that image-feature
based passwords are viable and suggests appropriate system thresholds: password
items should contain a minimum of seven features, 40% of which must
geometrically match the originals stored on an authentication server in order
to be judged equivalent. The usability study measures task completion times and
error rates, revealing these to be 7.5 s and 9%, broadly comparable with prior
graphical password systems that use static digital images. Finally, the
security study highlights PassBYOP’s resistance to observation attack: three
attackers were unable to compromise a password using shoulder surfing,
camera-based observation, or malware. These results indicate that PassBYOP
shows promise for security while maintaining the usability of current graphical
password schemes.
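The acceptance rule from the reliability study (at least seven stored features, of which 40% must geometrically match) reduces to a simple threshold test. Features are abstracted to set members here; real geometric matching of optical image features is of course far more involved.

```python
def features_match(stored, observed, min_features=7, match_ratio=0.40):
    """Accept a password item if enough stored features are matched."""
    if len(stored) < min_features:
        return False            # item too weak to be used at all
    matched = len(stored & observed)
    return matched / len(stored) >= match_ratio

stored = {f"f{i}" for i in range(10)}          # ten extracted features
assert features_match(stored, {"f0", "f1", "f2", "f3"})   # 4/10 = 40%
assert not features_match(stored, {"f0", "f1", "f2"})     # 3/10 < 40%
```

The 40% bar tolerates camera noise and small pose changes while still rejecting unrelated tokens.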

Abstract: Due to its wide applications in practice, face recognition has been
an active research topic. With the availability of adequate training samples,
many machine learning methods can yield high face recognition accuracy.
However, with inadequate training samples, especially in the extreme case of
having only a single training sample, face recognition becomes challenging. How
to deal with the conflicting concerns of small sample size and high
dimensionality in one-sample face recognition is critical for its achievable
recognition accuracy and feasibility in practice. Unlike conventional methods
for global face recognition, which rely on promoting generalization ability,
and for local face recognition, which depend on image segmentation, a
single-sample face recognition algorithm based on Locality Preserving
Projection (LPP) feature transfer is proposed here. First, transfer sources are
screened to obtain the selective sample source using the whitened cosine
similarity metric. Secondly,
we project the vectors of source faces and target faces into feature sub-space
by LPP respectively, and calculate the feature transfer matrix to approximate
the mapping relationship on source faces and target faces in subspace. Then,
the feature transfer matrix is used on training samples to transfer the
original macro characteristics to target macro characteristics. Finally, the
nearest neighbor classifier is used for face recognition. Our results based on
popular databases FERET, ORL and Yale demonstrate the superiority of the
proposed LPP feature transfer based one-sample face recognition algorithm when
compared with popular single-sample face recognition algorithms such as (PC)2A
and Block FLDA.

Abstract: This paper proposes
a novel scheme of reversible data hiding (RDH) in encrypted images using
distributed source coding (DSC). After the original image is encrypted by the
content owner using a stream cipher, the data-hider compresses a series of
selected bits taken from the encrypted image to make room for the secret data.
The selected bit series is Slepian-Wolf encoded using low density parity check
(LDPC) codes. On the receiver side, the secret bits can be extracted if the
image receiver has the embedding key only. In case the receiver has the
encryption key only, he/she can recover the original image approximately with
high quality using an image estimation algorithm. If the receiver has both the
embedding and encryption keys, he/she can extract the secret data and perfectly
recover the original image using the distributed source decoding. The proposed
method outperforms previously published ones.

Abstract: Authentication based on passwords is largely used in applications
for computer security and privacy. However, human actions such as choosing bad
passwords and inputting passwords in an insecure way are regarded as “the
weakest link” in the authentication chain. Rather than arbitrary alphanumeric
strings, users tend to choose passwords that are either short or meaningful for
easy memorization. With web
applications and mobile apps piling up, people can access these applications
anytime and anywhere with various devices. This evolution brings great
convenience but also increases the probability of exposing passwords to
shoulder surfing attacks. Attackers can observe directly or use external
recording devices to collect users’ credentials. To overcome this problem, we
propose a novel authentication system, PassMatrix, based on graphical passwords
to resist shoulder surfing attacks. With a one-time valid login indicator and
circulative horizontal and vertical bars covering the entire scope of
pass-images, PassMatrix offers no hint for attackers to figure out or narrow
down the password even if they conduct multiple camera-based attacks. We also
implemented a PassMatrix prototype on Android and carried out real user
experiments to evaluate its memorability and usability. From the experimental
result, the proposed system achieves better resistance to shoulder surfing
attacks while maintaining usability.

Abstract: Location-based services are quickly becoming immensely popular. In
addition to services based on users' current location, many potential services
rely on users' location history, or their spatial-temporal provenance. Without
a carefully designed security system for users to prove their past locations,
malicious users may lie about their spatial-temporal provenance. In this paper,
we present the
Spatial-Temporal provenance Assurance with Mutual Proofs (STAMP) scheme. STAMP
is designed for ad-hoc mobile users generating location proofs for each other
in a distributed setting. However, it can easily accommodate trusted mobile
users and wireless access points. STAMP ensures the integrity and
non-transferability of the location proofs and protects users' privacy. A
semi-trusted Certification Authority is used to distribute cryptographic keys
as well as guard users against collusion by a light-weight entropy-based trust
evaluation approach. Our prototype implementation on the Android platform shows
that STAMP is low-cost in terms of computational and storage resources.
Extensive simulation experiments show that our entropy-based trust model is
able to achieve high collusion detection accuracy.
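The entropy-based trust idea, that collusion shows up as proof endorsements concentrated on a few parties, can be sketched as Shannon entropy over endorsement counts. This scoring is an illustrative stand-in, not STAMP's actual trust model.

```python
import math

def endorsement_entropy(endorsers):
    """Shannon entropy (bits) over who endorsed a user's location proofs.

    Many independent endorsers give high entropy; a small colluding
    clique endorsing repeatedly gives low entropy.
    """
    total = sum(endorsers.values())
    probs = [c / total for c in endorsers.values()]
    return -sum(p * math.log2(p) for p in probs if p > 0)

diverse   = {"u1": 1, "u2": 1, "u3": 1, "u4": 1}  # four distinct endorsers
colluding = {"u1": 4}                              # one user, four proofs
assert endorsement_entropy(diverse) > endorsement_entropy(colluding)
```

A Certification Authority could flag users whose endorsement entropy stays suspiciously low over time.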

IEEE 2016 : FRAppE: Detecting Malicious Facebook Applications

IEEE 2016
Transaction on Networking

Abstract: With 20 million installs a day [1], third-party apps are a major
reason for the popularity and addictiveness of Facebook. Unfortunately, hackers
have realized the potential of using apps for spreading malware and spam. The
problem is already significant, as we find that at least 13% of the apps in our
dataset are malicious. So far, the research community has focused on detecting
malicious posts and campaigns. In this paper, we ask the question: given a
Facebook application, can we determine if it is malicious? Our key contribution
is in developing FRAppE (Facebook's Rigorous Application Evaluator), arguably
the first tool focused on detecting malicious apps on Facebook. To develop
FRAppE, we use
information gathered by observing the posting behavior of 111K Facebook apps
seen across 2.2 million users on Facebook. First, we identify a set of features
that help us distinguish malicious apps from benign ones. For example, we find
that malicious apps often share names with other apps, and they typically
request fewer permissions than benign apps. Second, leveraging these
distinguishing features, we show that FRAppE can detect malicious apps with
99.5% accuracy, with no false positives and a low false negative rate (4.1%).
Finally, we explore the ecosystem of malicious Facebook apps and identify
mechanisms that these apps use to propagate. Interestingly, we find that many
apps collude and support each other; in our dataset, we find 1,584 apps
enabling the viral propagation of 3,723 other apps through their posts.
Long-term, we see FRAppE as a step towards creating an independent watchdog for
app assessment and ranking, so as to warn Facebook users before installing
apps.

Abstract: Mobile crowdsensing networks have emerged to show elegant data
collection capability in loosely cooperative networks. However, in the sense of
coverage quality, few works have considered designs for mobile crowdsensing
networks that are both efficient (fewer participants) and effective (more
coverage). We investigate the optimal coverage problem in distributed
crowdsensing networks, in which sensing quality and information delivery are
jointly considered. Different from the conventional coverage problem, ours
selects only a subset of mobile users, so as to maximize the crowdsensing
coverage within a limited budget. We formulate our concerns as an optimal
crowdsensing coverage problem, and prove its NP-completeness. In tackling this
difficulty, we also prove the submodular property of our problem. Leveraging
this favorable property in submodular optimization, we present a greedy
algorithm with approximation ratio O(√k), where k is the number of selected
users, such that the information delivery and sensing coverage ratio can be
guaranteed. Finally, we make extensive evaluations of the proposed scheme with
trace-driven tests. Evaluation results show that the proposed scheme
outperforms random selection by 2× with a random walk model, and by over 3×
with real trace data, in terms of crowdsensing coverage. Moreover, the proposed
scheme achieves a near-optimal solution compared with brute-force search
results.
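The greedy algorithm with an approximation guarantee is the classic recipe for budgeted submodular coverage: at each step, pick the user whose sensing range adds the most still-uncovered cells. A sketch with made-up coverage sets (the paper's version also weighs information delivery, which this omits):

```python
def greedy_cover(user_coverage, budget):
    """Pick up to `budget` users maximizing covered cells, greedily."""
    covered, chosen = set(), []
    for _ in range(budget):
        # User contributing the most not-yet-covered cells.
        best = max(user_coverage, key=lambda u: len(user_coverage[u] - covered))
        if not user_coverage[best] - covered:
            break                      # nobody adds new coverage
        chosen.append(best)
        covered |= user_coverage[best]
    return chosen, covered

users = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6, 7},
    "d": {1, 7},
}
chosen, covered = greedy_cover(users, budget=2)
assert chosen == ["c", "a"]
assert covered == {1, 2, 3, 4, 5, 6, 7}
```

Because coverage gain is submodular (each addition helps less as coverage grows), this greedy choice carries a provable approximation bound.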

Tuesday, 3 January 2017

Abstract: With advances in geo-positioning technologies and geo-location
services, there is a rapidly growing amount of spatio-textual objects collected
in many applications such as location-based services and social networks, in
which an object is described by its spatial location and a set of keywords
(terms). Consequently, the study of spatial keyword search, which explores both
the location and textual description of the objects, has attracted great
attention from commercial organizations and research communities. In this
paper, we study two fundamental problems in spatial keyword queries: top-k
spatial keyword search (TOPK-SK), and batch top-k spatial keyword search
(BTOPK-SK). Given a set of spatio-textual objects, a query location and a set
of query keywords, TOPK-SK retrieves the closest k objects each of which
contains all keywords in the query. BTOPK-SK is the batch processing of sets of
TOPK-SK queries. Based on the inverted index and the linear quadtree, we
propose a novel index structure, called the inverted linear quadtree
(IL-Quadtree), which is carefully designed to exploit both spatial and
keyword-based pruning techniques to effectively reduce the search space. An
efficient algorithm is then developed to tackle top-k spatial keyword search.
To further enhance the filtering capability of the signature of the linear
quadtree, we propose a partition-based method. In addition, to deal with
BTOPK-SK, we design a new computing paradigm which partitions the queries into
groups based on both spatial proximity and the textual relevance between
queries. We show that the IL-Quadtree technique can also efficiently support
BTOPK-SK. Comprehensive experiments on real and synthetic data clearly
demonstrate the efficiency of our methods.
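A linear quadtree identifies each cell by a Z-order (Morton) key obtained by interleaving the bits of the cell's x and y coordinates; this linearization is the keying an inverted linear quadtree builds on. A minimal encoder, assuming 4-bit grid coordinates:

```python
def morton_key(x, y, bits=4):
    """Interleave the bits of x and y into a single linear-quadtree key."""
    key = 0
    for i in range(bits - 1, -1, -1):
        # Take bit i of each coordinate, x before y, highest bits first.
        key = (key << 2) | (((x >> i) & 1) << 1) | ((y >> i) & 1)
    return key

# Cells close in space get nearby keys along the Z-order curve.
assert morton_key(0, 0) == 0
assert morton_key(1, 0) == 2
assert morton_key(0, 1) == 1
```

Sorting cells by this key lets 2-D range pruning be done with 1-D interval checks over the inverted lists.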

Abstract: The ubiquity
of smartphones has led to the emergence of mobile crowdsourcing tasks such as
the detection of spatial events when smartphone users move around in their
daily lives. However, the credibility of those detected events can be
negatively impacted by unreliable participants with low-quality data.
Consequently, a major challenge in quality control is to discover true events
from diverse and noisy participants’ reports. This truth discovery problem is uniquely
distinct from its online counterpart in that it involves uncertainties in both
participants’ mobility and reliability. Decoupling these two types of
uncertainties through location tracking will raise severe privacy and energy
issues, whereas simply ignoring missing reports or treating them as negative
reports will significantly degrade the accuracy of the discovered truth. In
this paper, we propose a new method to tackle this truth discovery problem
through principled probabilistic modeling. In particular, we integrate the
modeling of location popularity, location visit indicators, truth of events and
three-way participant reliability in a unified framework. The proposed model is
thus capable of efficiently handling various types of uncertainties and automatically
discovering truth without any supervision or the need of location tracking.
Experimental results demonstrate that our proposed method outperforms existing
state-of-the-art truth discovery approaches in the mobile crowdsourcing
environment.
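The interplay between event truth and participant reliability can be illustrated with a generic iterative truth-discovery loop: estimate truths by reliability-weighted voting, then re-estimate each participant's reliability from agreement with those truths. This is a simple stand-in, not the paper's probabilistic model with location popularity and visit indicators, and the report data below is made up.

```python
def discover_truth(reports, rounds=10):
    """reports: {participant: {event: 0/1 observation}}"""
    weights = {p: 1.0 for p in reports}
    truths = {}
    events = {e for obs in reports.values() for e in obs}
    for _ in range(rounds):
        # 1. Estimate each event's truth by reliability-weighted vote.
        for e in events:
            vote = sum(weights[p] * (1 if obs[e] else -1)
                       for p, obs in reports.items() if e in obs)
            truths[e] = 1 if vote >= 0 else 0
        # 2. Re-estimate weights as smoothed agreement with current truths.
        for p, obs in reports.items():
            agree = sum(obs[e] == truths[e] for e in obs)
            weights[p] = (agree + 1) / (len(obs) + 2)
    return truths, weights

reports = {
    "reliable1": {"fire": 1, "jam": 0, "parade": 1},
    "reliable2": {"fire": 1, "jam": 0, "parade": 1},
    "noisy":     {"fire": 0, "jam": 1, "parade": 1},
}
truths, weights = discover_truth(reports)
assert truths == {"fire": 1, "jam": 0, "parade": 1}
assert weights["noisy"] < weights["reliable1"]
```

Note this toy treats every report as present; handling missing reports without tracking location is exactly the harder problem the paper's model addresses.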

Monday, 2 January 2017

Abstract: With the rapid
development of location-based social networks (LBSNs), spatial item
recommendation has become an important way of helping users discover
interesting locations to increase their engagement with location-based
services. Although human movement exhibits sequential patterns in LBSNs, most
current studies on spatial item recommendations do not consider the sequential
influence of locations. Leveraging sequential patterns in spatial item
recommendation is, however, very challenging, considering 1) users’ check-in
data in LBSNs has a low sampling rate in both space and time, which renders
existing prediction techniques on GPS trajectories ineffective; 2) the
prediction space is extremely large, with millions of distinct locations as the
next prediction target, which impedes the application of classical Markov chain
models; and 3) there is no existing framework that unifies users’ personal
interests and the sequential influence in a principled manner.In light of the
above challenges, we propose a sequential personalized spatial item
recommendation framework (SPORE) which introduces a novel latent variable
topic-region to model and fuse sequential influence with personal interests in
the latent and exponential space. The advantages of modeling the sequential
effect at the topic-region level include a significantly reduced prediction
space, an effective alleviation of data sparsity and a direct expression of the
semantic meaning of users’ spatial activities. Furthermore, we design an
asymmetric Locality Sensitive Hashing (ALSH) technique to speed up the online
top-k recommendation process by extending the traditional LSH. We evaluate the
performance of SPORE on two real datasets and one large-scale synthetic
dataset. The results demonstrate a significant improvement in SPORE’s ability to
recommend spatial items, in terms of both effectiveness and efficiency,
compared with the state-of-the-art methods.

Abstract:—The ubiquity
of smartphones has led to the emergence of mobile crowdsourcing tasks such as
the detection of spatial events when smartphone users move around in their
daily lives. However, the credibility of those detected events can be
negatively impacted by unreliable participants with low-quality data.
Consequently, a major challenge in quality control is to discover true events
from diverse and noisy participants’ reports. This truth discovery problem is
uniquely distinct from its online counterpart in that it involves uncertainties
in both participants’ mobility and reliability. Decoupling these two types of
uncertainties through location tracking will raise severe privacy and energy
issues, whereas simply ignoring missing reports or treating them as negative
reports will significantly degrade the accuracy of the discovered truth. In
this paper, we propose a new method to tackle this truth discovery problem
through principled probabilistic modeling. In particular, we integrate the
modeling of location popularity, location visit indicators, truth of events and
three-way participant reliability in a unified framework. The proposed model is
thus capable of efficiently handling various types of uncertainties and
automatically discovering truth without any supervision or the need of location
tracking. Experimental results demonstrate that our proposed method outperforms
existing state-of-the-art truth discovery approaches in the mobile
crowdsourcing environment.
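The paper's unified model jointly estimates location popularity, visit indicators, event truth, and three-way reliability; that full model is beyond a blog sketch. The classic iteration underlying most truth discovery, though, is easy to show: alternate between (1) inferring each event's truth by a reliability-weighted vote over the reports that were actually received, and (2) re-estimating each participant's reliability from their agreement with the current truth. The sketch below uses hypothetical data and simple Laplace smoothing; it is not the paper's probabilistic model.

```python
# reports[participant][event] = True/False claim; absent key = no report,
# so missing reports are neither counted as negative nor discarded oddly
reports = {
    "alice": {"e1": True,  "e2": True},
    "bob":   {"e1": True,  "e3": False},
    "carol": {"e1": False, "e2": True, "e3": False},
}
events = {"e1", "e2", "e3"}
weights = {p: 1.0 for p in reports}        # start with uniform reliability

for _ in range(10):                        # alternate truth/reliability updates
    # 1) weighted vote over received reports estimates each event's truth
    truth = {}
    for e in events:
        score = sum(w if reports[p].get(e) else -w
                    for p, w in weights.items() if e in reports[p])
        truth[e] = score >= 0
    # 2) reliability = smoothed agreement rate with the current truth
    for p in reports:
        agree = sum(1 for e, claim in reports[p].items() if claim == truth[e])
        weights[p] = (agree + 1) / (len(reports[p]) + 2)
```

On this toy data the iteration converges quickly: the participant who disagrees with the consensus ends up with the lowest weight, so their future votes count less.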

IEEE 2016 : Sentiment Analysis of Top
Colleges in India Using Twitter Data

IEEE 2016
Transactions on Data Mining

Abstract—In today’s
world, opinions and reviews accessible to us are one of the most critical
factors in formulating our views and influencing the success of a brand,
product or service. With the advent and growth of social media in the world,
stakeholders often take to expressing their opinions on popular social media,
namely Twitter. While Twitter data is extremely informative, it presents a
challenge for analysis because of its humongous and disorganized nature. This
paper is a thorough effort to dive into the novel domain of performing
sentiment analysis of people’s opinions regarding top colleges in India.
Besides taking additional preprocessing measures like the expansion of net
lingo and removal of duplicate tweets, a probabilistic model based on Bayes’
theorem was used for spelling correction, which is overlooked in other research
studies. This paper also highlights a comparison between the results obtained
by exploiting the following machine learning algorithms: Naïve Bayes and
Support Vector Machine and an Artificial Neural Network model: Multilayer
Perceptron. Furthermore, a contrast has been presented between four different
kernels of SVM: RBF, linear, polynomial and sigmoid.
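The Bayes'-theorem spelling correction the abstract highlights is commonly implemented Norvig-style: among all known words within one edit of a misspelling, pick the one with the highest corpus probability. A minimal sketch with a tiny hypothetical corpus (a real system would estimate counts from millions of tweets):

```python
from collections import Counter

# tiny hypothetical corpus standing in for a large tweet collection
corpus = ("the college is great the college campus is great "
          "and the faculty is good").split()
counts = Counter(corpus)

def edits1(word):
    """All strings one edit (delete/transpose/replace/insert) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """argmax P(candidate): a known word stays; otherwise the most
    frequent known word within one edit; otherwise unchanged."""
    if word in counts:
        return word
    candidates = [w for w in edits1(word) if w in counts]
    return max(candidates, key=counts.get) if candidates else word
```

Here `counts[w] / len(corpus)` plays the role of the prior P(w) in Bayes' theorem, with the edit-distance-1 candidate set acting as a crude error model.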

IEEE 2016 : FRAppE: Detecting Malicious
Facebook Applications

IEEE 2016 Transactions on Data Mining

Abstract—With 20
million installs a day [1], third-party apps are a major reason for the
popularity and addictiveness of Facebook. Unfortunately, hackers have realized
the potential of using apps for spreading malware and spam. The problem is
already significant, as we find that at least 13% of apps in our dataset are
malicious. So far,the research community has focused on detecting malicious
posts and campaigns. In this paper, we ask the question: given a Facebook
application, can we determine if it is malicious? Our key contribution is in
developing FRAppE—Facebook’s Rigorous Application Evaluator— arguably the first
tool focused on detecting malicious apps on Facebook. To develop FRAppE, we use
information gathered by observing the posting behavior of 111K Facebook apps
seen across 2.2 million users on Facebook. First, we identify a set of features
that help us distinguish malicious apps from benign ones. For example, we
find that malicious apps often share names with other apps, and they typically
request fewer permissions than benign apps. Second, leveraging these
distinguishing features, we show that FRAppE can detect malicious apps with
99.5% accuracy, with no false positives and a low false negative rate (4.1%).
Finally, we explore the ecosystem of malicious Facebook apps and identify
mechanisms that these apps use to propagate. Interestingly, we find that many
apps collude and support each other; in our dataset, we find 1,584 apps
enabling the viral propagation of 3,723 other apps through their posts.
Long-term, we see FRAppE as a step towards creating an independent watchdog for
app assessment and ranking, so as to warn Facebook users before they install apps.
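FRAppE itself trains a classifier over many behavioral features; as a toy illustration of just the two signals the abstract calls out (malicious apps often share names with other apps and request unusually few permissions), here is a hypothetical rule-based scorer. The data, field names, and thresholds are all invented for the sketch.

```python
def suspicion_score(app, all_names):
    """Toy scorer over the two features the abstract highlights."""
    score = 0
    if all_names.count(app["name"]) > 1:    # name shared with another app
        score += 1
    if len(app["permissions"]) <= 1:        # unusually small permission set
        score += 1
    return score

apps = [
    {"name": "Fun Quiz",     "permissions": ["email"]},
    {"name": "Fun Quiz",     "permissions": ["basic"]},
    {"name": "Photo Editor", "permissions": ["basic", "photos", "friends"]},
]
names = [a["name"] for a in apps]
flagged = [a for a in apps if suspicion_score(a, names) == 2]
```

A real detector would feed dozens of such features into a trained classifier rather than hard-coded rules, but the sketch shows how individually weak signals combine into a flag.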

Thursday, 29 December 2016

Abstract—Cloud computing
enables an economically promising paradigm of computation outsourcing. However,
how to protect customers' confidential data processed and generated during the
computation is becoming the major security concern. Focusing on engineering
computing and optimization tasks, this paper investigates secure outsourcing of
widely applicable linear programming (LP) computations. Our mechanism design
explicitly decomposes LP computation outsourcing into public LP solvers running
on the cloud and private LP parameters owned by the customer. The resulting
flexibility allows us to explore appropriate security/efficiency tradeoff via
higher-level abstraction of LP computation than the general circuit
representation. Specifically, by formulating the private LP problem as a set of
matrices/vectors, we develop efficient privacy-preserving problem
transformation techniques, which allow customers to transform the original LP
into some random one while protecting sensitive input/output information. To
validate the computation result, we further explore the fundamental duality
theorem of LP and derive the necessary and sufficient conditions that correct
results must satisfy. Such a result verification mechanism is very efficient and
incurs close-to-zero additional cost on both cloud server and customers.
Extensive security analysis and experiment results show the immediate
practicability of our mechanism design.
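The core masking idea can be sketched concretely: substitute x = M y for a secret invertible matrix M, so the cloud solves min (M^T c)^T y subject to (A M) y = b and learns nothing about the original coefficients, while the customer recovers x = M y locally. The sketch below uses a secret diagonal M for simplicity (the paper's transformations are richer); all values are hypothetical.

```python
import random

def mask_lp(A, b, c, M_diag):
    """Customer-side transform: with x = M y (M diagonal, invertible),
    min c^T x s.t. A x = b becomes min (M c)^T y s.t. (A M) y = b."""
    A_masked = [[a_ij * M_diag[j] for j, a_ij in enumerate(row)] for row in A]
    c_masked = [c_j * M_diag[j] for j, c_j in enumerate(c)]
    return A_masked, b, c_masked

def unmask_solution(y, M_diag):
    """Recover the original solution x = M y."""
    return [m * y_j for m, y_j in zip(M_diag, y)]

random.seed(1)
A = [[1.0, 2.0], [3.0, 1.0]]
b = [5.0, 6.0]
c = [1.0, 1.0]
M_diag = [random.uniform(1.0, 10.0) for _ in range(2)]   # secret key

A_m, b_m, c_m = mask_lp(A, b, c, M_diag)
```

Because A M y = A x and (M c)^T y = c^T x for every y, the cloud's solution to the masked LP unmasks to a solution of the original one, and the duality-based verification the abstract describes can then be run on the recovered solution.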

Abstract—The MapReduce programming model simplifies large-scale data processing on
commodity clusters by exploiting parallel map tasks and reduce tasks. Although
many efforts have been made to improve the performance of MapReduce jobs, they
ignore the network traffic generated in the shuffle phase, which plays a
critical role in performance enhancement. Traditionally, a hash function is used to partition intermediate data among reduce tasks, which, however, is
not traffic-efficient because network topology and data size associated with
each key are not taken into consideration. In this paper, we study to reduce
network traffic cost for a MapReduce job by designing a novel intermediate data
partition scheme. Furthermore, we jointly consider the aggregator placement
problem, where each aggregator can reduce merged traffic from multiple map
tasks. A decomposition-based distributed algorithm is proposed to deal with the
large-scale optimization problem for big data application and an online
algorithm is also designed to adjust data partition and aggregation in a
dynamic manner. Finally, extensive simulation results demonstrate that our
proposals can significantly reduce network traffic cost under both offline and
online cases.
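The gap between hash partitioning and a traffic-aware scheme is easy to see in a toy model: if we know how many bytes of each key sit on each rack, assigning the key to a reducer on the rack that already holds most of its data minimizes cross-rack shuffle traffic, while a hash function ignores locality entirely. The numbers and rack layout below are hypothetical, and this greedy rule is only a simplification of the paper's joint optimization.

```python
# bytes of intermediate data emitted for each key on each rack (hypothetical)
key_bytes = {
    "k1": {"r0": 90, "r1": 10},
    "k2": {"r0": 5,  "r1": 95},
    "k3": {"r0": 80, "r1": 20},
}
reducer_racks = {0: "r0", 1: "r1"}      # reducer id -> rack it runs on

def cross_rack_traffic(assign):
    """Bytes that must leave their rack to reach the assigned reducer."""
    return sum(size
               for key, reducer in assign.items()
               for rack, size in key_bytes[key].items()
               if rack != reducer_racks[reducer])

# baseline: hash-partition, ignoring where the data actually lives
hash_assign = {k: hash(k) % 2 for k in key_bytes}

# traffic-aware: send each key to the reducer whose rack holds most of it
aware_assign = {k: max(reducer_racks,
                       key=lambda r: key_bytes[k].get(reducer_racks[r], 0))
                for k in key_bytes}
```

With two racks and one reducer per rack, the greedy assignment moves only the minority share of each key (10 + 5 + 20 = 35 bytes here), which no other assignment can beat; hashing can easily send the 90-byte majority of a key across the network.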

Abstract—Dynamic Proof of Storage (PoS) is a useful cryptographic
primitive that enables a user to check the integrity of outsourced files and
to efficiently update the files in a cloud server. Although researchers have
proposed many dynamic PoS schemes in single-user environments, the problem in
multi-user environments has not been investigated sufficiently. A practical
multi-user cloud storage system needs the secure client-side cross-user
deduplication technique, which allows a user to skip the uploading process and
obtain the ownership of the files immediately, when other owners of the same
files have uploaded them to the cloud server. To the best of our knowledge,
none of the existing dynamic PoS schemes supports this technique. In this paper, we
introduce the concept of deduplicatable dynamic proof of storage and propose an
efficient construction called DeyPoS, to achieve dynamic PoS and secure
cross-user deduplication simultaneously. Considering the challenges of
structure diversity and private tag generation, we exploit a novel tool called
Homomorphic Authenticated Tree (HAT). We prove the security of our
construction, and the theoretical analysis and experimental results show that
our construction is efficient in practice.
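The Homomorphic Authenticated Tree is the paper's own construction, but the integrity idea it builds on can be sketched with a plain Merkle tree: hash the file blocks, hash pairs of hashes up to a single root, and any change to any block changes the root. This sketch uses only the standard library and is illustrative, not the HAT design (which additionally supports homomorphic tags and efficient updates).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root hash over file blocks; any block change alters the root."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
root = merkle_root(blocks)                 # stored by the verifier

# a later integrity check recomputes the root and compares
tampered = list(blocks)
tampered[2] = b"block-2-modified"
```

The verifier keeps only the constant-size root; a prover who tampers with any block cannot reproduce it, which is the property PoS protocols amplify into efficient probabilistic audits.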

Abstract—In this paper, we introduce a new fine-grained two-factor authentication
(2FA) access control system for web-based cloud computing services.
Specifically, in our proposed 2FA access control system, an attribute-based
access control mechanism is implemented with the necessity of both a user
secret key and a lightweight security device. As a user cannot access the
system if they do not hold both, the mechanism can enhance the security of the
system, especially in those scenarios where many users share the same computer
for web-based cloud services. In addition, attribute-based control in the
system also enables the cloud server to restrict the access to those users with
the same set of attributes while preserving user privacy, i.e., the cloud
server only knows that the user fulfills the required predicate, but has no
idea on the exact identity of the user. Finally, we also carry out a simulation
to demonstrate the practicability of our proposed 2FA system.
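The access rule the abstract describes (both factors plus an attribute predicate) can be sketched with standard primitives: the server checks the user's secret key, a challenge-response from the lightweight device, and whether the user's attributes satisfy the required predicate, here simplified to a subset test. This is a hypothetical stand-in using HMAC, not the paper's attribute-based cryptographic construction, which additionally hides the user's identity from the server.

```python
import hashlib
import hmac
import os

def derive_response(device_secret: bytes, challenge: bytes) -> bytes:
    """The lightweight device answers a server challenge with an HMAC."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def grant_access(user_key, stored_key, device_secret, stored_device_secret,
                 user_attrs, required_attrs, challenge):
    """Access needs all three: the user's secret key, a correct device
    response, and attributes satisfying the server's predicate."""
    key_ok = hmac.compare_digest(user_key, stored_key)
    device_ok = hmac.compare_digest(
        derive_response(device_secret, challenge),
        derive_response(stored_device_secret, challenge))
    attrs_ok = required_attrs <= user_attrs      # predicate: subset check
    return key_ok and device_ok and attrs_ok

challenge = os.urandom(16)                       # fresh per login attempt
stored_key, device = b"user-secret-key", b"device-secret"
```

Because a fresh challenge is issued per attempt, a stolen transcript cannot be replayed; and since all three checks must pass, losing either the key or the device alone does not grant access, which is the point of two-factor control on shared machines.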