Bhargavi R. Upadhyay

Asst. Professor, Computer Science, School of Engineering, Bengaluru

Qualification:

M.Tech, BE

Email:

u_bhargavi@blr.amrita.edu

Bhargavi Upadhyay currently serves as Assistant Professor in the Department of Computer Science and Engineering, Amrita School of Engineering, Bengaluru. She is pursuing her Ph.D. in cache coherence protocol design. Her research interests are in memory design and processor architecture.

Education

2011: M. Tech. (Embedded System)
Amrita Vishwa Vidyapeetham

2005: B. E. (Computer Engineering)
North Gujarat University

Invited Talks

Delivered a talk on the gem5 multicore simulator at a two-day national-level workshop on ‘Computer Architecture and High Performance Computing’, conducted by the Department of Computer Science and Engineering on March 15 - 16, 2013.

Cache memory plays a major role in the memory hierarchy for improving system performance. A cache configuration includes cache size, associativity, block size, replacement policy, and write policy. The values chosen for these parameters determine the performance, energy consumption, and chip area of the system for a given application. Finding the best cache configuration for an application involves cache design space exploration, which is time consuming because the design space contains every combination of cache parameter values. This paper surveys techniques for exploring the design space efficiently, aimed at reducing exploration time, and provides insights for researchers to explore further.
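The combinatorial growth described above is easy to see with a small sketch. The parameter ranges below are illustrative assumptions, not values from the paper:

```python
from itertools import product

# Hypothetical parameter ranges chosen for illustration; a real
# exploration would use target-specific values.
sizes = [4, 8, 16, 32, 64]               # cache size in KB
assocs = [1, 2, 4, 8]                    # associativity (ways)
blocks = [16, 32, 64]                    # block size in bytes
repl = ["LRU", "FIFO", "random"]         # replacement policy
write = ["write-back", "write-through"]  # write policy

# Exhaustive design space: every combination of parameter values,
# each of which would need to be simulated in a brute-force search.
space = list(product(sizes, assocs, blocks, repl, write))
print(len(space))  # 5 * 4 * 3 * 3 * 2 = 360 configurations
```

Even these modest ranges yield 360 configurations to simulate; widening any one range multiplies the total, which is why pruning heuristics are the subject of the surveyed techniques.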

Cache memory is a main component of the memory hierarchy and plays an important role in the overall performance of the system and in the design of multicores. Multicores with shared-memory architectures are used to satisfy increasing performance demands, which in turn are limited by the cache coherence problem. This survey gives a comprehensive view and analysis of the various cache coherence mechanisms in modern architectures. With several cache coherence mechanisms available, the selection of an approach depends on parameters under consideration such as storage, scalability, traffic, latency, and energy. This article surveys the different cache coherence approaches and future design directions for improving cache coherence mechanisms.
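As a minimal illustration of what such mechanisms maintain, the sketch below encodes the per-line state machine of the classic MSI invalidation protocol, a common textbook baseline among the coherence schemes such surveys cover (this is not a mechanism from the article itself):

```python
# MSI states per cache line: M (modified), S (shared), I (invalid).
# Each entry maps (current state, observed event) to the next state.
TRANSITIONS = {
    ("I", "local_read"):   "S",  # read miss: fetch line, mark shared
    ("I", "local_write"):  "M",  # write miss: fetch line exclusively
    ("S", "local_write"):  "M",  # upgrade: other sharers invalidated
    ("S", "remote_write"): "I",  # another core writes: invalidate
    ("M", "remote_read"):  "S",  # another core reads: downgrade
    ("M", "remote_write"): "I",  # another core writes: invalidate
}

def next_state(state, event):
    # Events with no entry (e.g. a read hit in S) leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

# A line written locally, then read by a remote core:
state = "I"
state = next_state(state, "local_write")  # -> "M"
state = next_state(state, "remote_read")  # -> "S"
print(state)  # S
```

The trade-offs the abstract lists (storage, traffic, latency) show up even here: broadcasting the invalidation on the S-to-M upgrade costs traffic, while tracking sharers in a directory instead costs storage.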

Power consumption is crucial for battery-operated embedded applications. This paper compares the behavior of the instruction cache and data cache under a predictive placement scheme that uses minimal prediction bits, in terms of energy efficiency and performance. The predictive placement scheme is evaluated and compared with way prediction for both the instruction cache and the data cache. Using the proposed scheme, average energy savings of 71.6% for the data cache and 64% for the instruction cache can be achieved over a conventional set-associative cache. The SimpleScalar 3.0 simulator is used to obtain results for the MiBench embedded benchmarks.
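A back-of-the-envelope model shows why prediction-based schemes of this kind save energy: a conventional set-associative cache probes every way in parallel, while a predicted access probes one way and falls back to the rest only on a misprediction. The numbers below are illustrative assumptions, not results from the paper:

```python
# Illustrative energy model for a way-predicted set-associative cache.
# All numbers are assumptions for the sketch, not the paper's data.
ways = 4
e_way = 1.0          # energy to probe one way (arbitrary units)
pred_accuracy = 0.9  # assumed fraction of correctly predicted accesses

# Conventional cache: all ways probed in parallel on every access.
e_conventional = ways * e_way

# Way-predicted cache: one probe on a correct prediction; on a
# misprediction, the predicted way plus the remaining ways are probed.
e_predicted = (pred_accuracy * e_way
               + (1 - pred_accuracy) * (e_way + (ways - 1) * e_way))

saving = 1 - e_predicted / e_conventional
print(f"{saving:.1%}")  # 67.5% with these illustrative numbers
```

The saving grows with associativity and prediction accuracy, which is consistent with the direction of the figures the abstract reports.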

In this era of network-based computing, software applications play a major role, and the security of the cryptographic algorithms they rely on is of much concern. Cryptographic algorithms are complex and can consume a lot of time, and hence can benefit from parallelism. But parallelization is not simple, as it is sometimes not possible to convert the whole algorithm into a parallel one. This paper therefore proposes a modified RC4 algorithm that can be made fully parallel. The modified algorithm is compared with other cryptographic algorithms, namely AES, DES, and RSA, for speed and time complexity. The algorithm is also implemented in CUDA, MPI, and OpenMP, and the results are compared.
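For context, the sketch below is the standard (unmodified) RC4 baseline. Its keystream loop is inherently sequential, since each output byte depends on the state left by the previous one, which is exactly why a modification is needed before the algorithm can be fully parallelized. The paper's modified variant is not reproduced here:

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Standard RC4: key scheduling (KSA) followed by n keystream bytes."""
    # Key-scheduling algorithm: permute S based on the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm: each step mutates S, so the
    # loop carries a dependency and cannot be naively parallelized.
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Encryption XORs the keystream with the plaintext; applying the same
# operation again decrypts.
plaintext = b"Plaintext"
keystream = rc4_keystream(b"Key", len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
```

Breaking the loop-carried dependency on `S`, `i`, and `j` is the core obstacle any parallel RC4 variant must address before mapping the work onto CUDA, MPI, or OpenMP workers.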