Resume Assistant: The Collaboration Between Microsoft and LinkedIn

Resume Assistant was built as a hybrid feature inside Word by both LinkedIn and Microsoft engineers. The Microsoft team took ownership of everything that is heavily embedded in Word (e.g., the Resume Classifier, the Resume Info Extractor, and the onboarding page), while LinkedIn engineers focused on building the main UI, service, and relevance model of the Resume Assistant feature.

Let’s introduce the Resume Classifier and Resume Info Extractor first. The Resume Classifier detects when a user opens a resume in Word and then triggers the LinkedIn Resume Assistant onboarding page. Though the onboarding page is a very simple, static web page with a button asking for the user’s consent, it is a key component in protecting the user’s privacy. Unlike the other pages in Resume Assistant, this onboarding page is hosted by Microsoft. Without the user’s consent from this page, Word users won’t enter the LinkedIn-powered Resume Assistant experience, and no user data is sent to LinkedIn.
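The consent gate described above can be sketched as a simple decision flow. This is an illustrative sketch only, with hypothetical names, not the actual Word implementation:

```javascript
// Hypothetical sketch of the consent gate (names are illustrative,
// not the actual Word implementation).
function handleDocumentOpen(doc, classifier, state) {
  // The Resume Classifier runs inside Word; no data leaves the machine here.
  if (!classifier.isResume(doc)) return "no-op";

  // Show the Microsoft-hosted onboarding page and ask for consent.
  if (!state.userConsented) {
    return "show-onboarding"; // nothing is sent to LinkedIn yet
  }

  // Only after consent does the LinkedIn-powered pane load.
  return "open-resume-assistant";
}

// Example: a resume opened by a user who has not yet consented.
const result = handleDocumentOpen(
  { text: "..." },
  { isResume: () => true },
  { userConsented: false }
);
// result === "show-onboarding"
```

The key property, as the text notes, is that the LinkedIn-powered branch is unreachable until the user has consented on the Microsoft-hosted page.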

After the user consents, Resume Assistant retrieves the job title and other information that has been extracted by the Microsoft Resume Info Extractor, accessed through the Office JavaScript API. Automatically retrieving this data smooths the user experience. For example, the user’s title and locale are used to pull the right set of work experience examples to display in the panel, along with a set of suggested skills to cover in the resume and a list of jobs that may be of interest to the user. These suggestions are based on LinkedIn’s deep understanding of its professional network and jobs market.
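Once the extracted fields are in hand, they parameterize the request for suggestions. A minimal sketch of that step, with a hypothetical endpoint path and parameter names (the real service and URL are not shown in this article):

```javascript
// Illustrative sketch: fields extracted by the Resume Info Extractor
// (read via the Office JavaScript API) become query parameters for the
// suggestions request. The endpoint and parameter names are hypothetical.
function buildSuggestionsUrl(extracted) {
  const params = new URLSearchParams({
    title: extracted.title,   // e.g., "Software Engineer"
    locale: extracted.locale, // picks the right set of examples
  });
  return `https://www.linkedin.com/resume-assistant/suggestions?${params}`;
}

const url = buildSuggestionsUrl({ title: "Data Scientist", locale: "en_US" });
// url contains "title=Data+Scientist" and "locale=en_US"
```

Because the title and locale are pulled automatically, the user never has to type them into the pane before seeing relevant examples.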

With the Microsoft team implementing the Resume Classifier and Resume Info Extractor components, how could LinkedIn engineers build the Resume Assistant service into Word without learning Word development from scratch?

Leveraging Office Add-in Platform

Usually, Word features are developed natively. If we went with a native approach, the Resume Assistant UI would talk to Microsoft’s data centers, which would in turn talk to LinkedIn to fetch data via Rest.li Gateway (Rest.li is a REST+JSON framework for building robust, scalable service architectures. Rest.li Gateway is LinkedIn’s API externalization platform). For example, LinkedIn created an Outlook profile card integration with code written directly in Outlook that talks to LinkedIn via Microsoft’s data centers.

Another approach available to us was to develop on the Office Add-in Platform. The Office Add-in Platform exposes and bridges the native API through a JavaScript library, giving developers the ability to build additional functionality for Office products using web technologies. With this, we could create a single-page application entirely on the LinkedIn stack that could still interface with the Word document.

Creating our experience as an Office Add-in would allow us to iterate much more quickly than a native implementation, because we could use our own web deployment pipeline. It’s a lot easier to update a web application frequently than a native application installed on someone’s machine: changes can be pushed without the user having to download new code. And because it’s a web application, we could implement the Resume Assistant web code once, and all Word applications (Word for Windows, Word for Mac, and Word Online) would be able to show the tool.

Additionally, code written in Word would be owned and developed by Microsoft engineers, while the LinkedIn APIs would be developed by LinkedIn engineers. Using the Office Add-in Platform would eliminate some of the interfacing and coordination challenges of the native approach.

From a site speed perspective, it was not clear which option was best (without building both and comparing). A native experience would load faster initially—our integrations code would be bundled as part of the Word application that is downloaded by the user once. However, when accessing LinkedIn data via an HTTP call, we would have an additional hop between the Microsoft and LinkedIn data centers, increasing our latency for subsequent page loads.

For the Resume Assistant tool, we decided to implement the product as an Office Add-in because it allows LinkedIn to iterate more quickly. This was one of our first Microsoft integrations, so we didn’t know for sure what would work best for the product. With the add-in approach, we have the flexibility to quickly make changes as needed. At LinkedIn, we have learned that even simple UI changes can have a big impact on user engagement, so it’s important to be able to deploy UI changes quickly.

Choosing a frontend tech stack within LinkedIn

At LinkedIn, we have two main stacks for frontend, external-facing applications. The first stack is called Pemberly. This stack uses a Java Play API server to provide Rest.li data to our UI code. The frontend is written in Ember.js and is served from another Play server that uses a pool of Node processes to optimize first page load. These optimizations include streaming the initial API call data along with the HTML and Ember application, and performing server-side rendering. This stack is primarily used for our rich, member-facing applications, like the logged-in LinkedIn.com. Its main advantage is the use of Ember to create a Single Page Application (SPA) in the web browser.

The other frontend stack we use is a simple Play server returning server-side rendered Dust templates. This stack is primarily used for SEO purposes and for our guest webpages, where site speed is king.

For our Office integration, we felt the Pemberly stack (the first option) was the best approach. Using Ember lets us build a very rich UI by getting many Single Page Application features (like routing) for free, and it makes Resume Assistant feel more like a native experience.

Powering Resume Assistant with LinkedIn data

Through user studies, we discovered that job seekers have trouble finding the right words to describe their experience on their resumes. To help prompt users in Resume Assistant, we suggest deidentified work experience descriptions derived from public LinkedIn profiles. The data is obtained only from member profiles where the member has set their profile visibility to public and where the position description is also publicly visible. Additionally, it is possible to completely opt out of providing this information by switching the “Microsoft Word” setting to off on the settings page. To power this feature, we needed to create a new LinkedIn backend that would use all of our public profile information to choose good descriptions for what to write on a resume, given a job title. To implement this, we considered two options.

First, we could create a new search stack to power a relevance service for this feature. This option provided the best long-term flexibility, because we could easily perform all sorts of queries. LinkedIn has experience with search; for example, we already have people search and job search functions. However, we could not easily leverage these existing search services because they had different goals. For example, people search favors full profiles relevant to a query, but for Resume Assistant, we would prefer to favor profiles with at least one good position description relevant to the input title. A new search stack would involve a lot of new infrastructure pieces, and likely more SRE support to maintain the service.

The approach we decided to take instead is much easier to implement initially, but provides less future flexibility. Instead of a search stack, we created a new key-value store that maps titles to lists of work experience descriptions. We chose Venice, a LinkedIn-derived database similar to Voldemort Read-Only. We populated the Venice store with an offline Hadoop script that uses public profile data to derive a title for each work experience description, and we then ranked the positions into an ordered list. With this data in place, queries for work experience examples can be served as a simple key-value lookup.
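The serving path can be sketched with an in-memory map standing in for the Venice store. The data below is invented for illustration, and the function names are hypothetical:

```javascript
// Minimal in-memory sketch of the Venice-backed lookup: an offline job
// precomputes title -> ranked description list, and serving is a single
// key-value get. Data and names here are illustrative only.
const titleToDescriptions = new Map([
  ["software engineer", [
    "Designed and shipped backend services handling millions of requests/day.",
    "Led migration of a monolith to microservices, cutting deploy time by 60%.",
  ]],
  ["data scientist", [
    "Built ML models that improved recommendation click-through by 12%.",
  ]],
]);

// Serving path: normalize the extracted title, then do a key-value get.
function getWorkExperienceExamples(title, limit = 5) {
  const examples = titleToDescriptions.get(title.trim().toLowerCase()) || [];
  return examples.slice(0, limit); // ranked offline, so no re-sorting at serve time
}
```

Because all ranking happens offline in the Hadoop job, the online path does no relevance computation at all, which is what makes this design so much cheaper to operate than a search stack.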

Accessibility

Providing an accessible and inclusive experience to all users is a fundamental belief of both Microsoft and LinkedIn. We took this belief to heart during the development of the Resume Assistant. We worked closely with LinkedIn and Microsoft’s accessibility teams to identify, triage, and ultimately fix issues raised during the UI development cycle of the project.

We stumbled upon a unique set of accessibility problems during development due to the numerous platforms the pane renders on. Because the application renders in a webview inside native Word on both macOS and Windows, the most difficult issues we encountered involved screen readers inconsistently treating our app as either a native application or a web document. With assistance from both the SDX and accessibility teams (and a lengthy investigation), we were able to mitigate the inconsistency and have the pane behave the way a screen reader user would expect.