
Kapil Sharma, Senior Manager - Software Development, PayPal



Kapil is an engineering leader who specializes in developing end-to-end digital payments processing experiences.

He currently works for PayPal India in the Risk Recoveries and Protection domain. Earlier, Kapil spent more than 11 years in California working for several Bay Area companies, including Visa, Intuit, and Symantec, where he led engineering teams to deliver SaaS solutions ranging from a recurring subscriptions platform to QuickBooks (accounting) to consumer internet security software.

Kapil holds an MBA from Santa Clara University, California, and studied computer science at Johns Hopkins University (Maryland) and Thapar University (Punjab).

There are several use cases in the universe of big data applications that still make Apache Hadoop the default choice for iterative data processing and ad hoc queries on big data sets. However, Hadoop's MapReduce framework is best suited to batch processing and does not fare well for use cases that need immediate insights for faster decision making. For some companies, every dollar in sales lost to a competitor is hard to win back. For example, a store manager might want to know why no customers turn up to buy donuts at a particular time of day or, more interestingly, which classes of customers (e.g., students, professionals) show up for donuts at 7 AM and 2 PM. This seemingly trivial insight could involve several source data feeds, and if the manager wants those feeds processed multiple times a day at scale, the system will benefit from the computational power of Apache Spark, augmented with a simple dashboard that displays the insights to the retail manager.

In general, new-generation big data applications demand lower-latency queries, real-time processing, or iterative computations on the same data. In this talk, we will look at some of the core capabilities of Apache Spark, an open-source parallel processing framework that complements Hadoop and enables big data applications that support batch, streaming, and interactive analytics on your large data sets.

Key learnings