Monthly Archives: August 2016

Google Scholar Library allows you to build your personal collection of articles within Scholar. You can save articles right from the search page, organize them with labels, and use the power of Scholar's full-text search & ranking to quickly find just the one you want. You decide what goes into your library, and we provide all the goodies that come with Scholar search results - up-to-date article links, citing articles, related articles, formatted citations, links to your university's subscriptions, and more.

As personal libraries have grown over time, managing them has taken more effort. Today we are making it easier to organize your library by making it possible to update or export multiple articles with a single click. For example, if you are writing a new paper, you can quickly export the articles you want to cite to your favorite reference manager; if you are grouping papers that explore different aspects of your research area, you can select all papers in a sub-field and label them with one click.

Starting on September 14th, 2016, Productstatuses will return all the errors and warnings present in the Diagnostics tab of Merchant Center. To explain the changes, we first separate product errors and warnings into two categories:

Validation issues are reported as a result of product insertion and update, and they include issues such as missing required fields or invalid item IDs.

Data quality issues are reported later after either automatic or manual review of the data provided to Merchant Center, and they include issues such as product images being too generic or a non-matching price on the landing page.

Historically, only data quality issues were available from Productstatuses, while validation issues required the developer to check the returned status from the insertion or update of the product. After this change, developers can retrieve both data quality and validation issues using Productstatuses, but this means that developers may see more issues returned by Productstatuses than before.

Please note that the includeInvalidInsertedItems URL parameter to Productstatuses.list still defaults to false. This means that, by default, calls only retrieve products which had no validation errors, and thus the retrieved products will only include data quality issues and validation warnings (if any). If you want to retrieve information about products with validation errors, please set this parameter to true.
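As an illustration, the sketch below builds the REST request URL for Productstatuses.list with this parameter. The merchant ID is a placeholder and the endpoint shape is assumed from the Content API v2 conventions, so treat this as a sketch rather than a drop-in client:

```python
from urllib.parse import urlencode

def productstatuses_list_url(merchant_id, include_invalid=False):
    """Build the REST URL for Productstatuses.list.

    includeInvalidInsertedItems defaults to false, so only products without
    validation errors come back; pass include_invalid=True to also retrieve
    products that failed validation on insert or update.
    """
    base = "https://www.googleapis.com/content/v2/%d/productstatuses" % merchant_id
    query = urlencode(
        {"includeInvalidInsertedItems": "true" if include_invalid else "false"}
    )
    return base + "?" + query

# Default: products with validation errors are excluded.
print(productstatuses_list_url(123456))
# Opt in to retrieving products that had validation errors as well.
print(productstatuses_list_url(123456, include_invalid=True))
```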

If you have any questions or feedback about the changes to issue reporting or other questions about the Content API for Shopping, please let us know on the forum.

The beta channel has been updated to 53.0.2785.89 for Windows, Mac, and Linux.

A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

The Chrome team is delighted to announce the promotion of Chrome 53 to the stable channel for Windows, Mac and Linux. This will roll out over the coming days/weeks.

Chrome 53.0.2785.89 contains a number of fixes and improvements -- a list of changes is available in the log. Watch out for upcoming Chrome and Chromium blog posts about new features and big efforts delivered in 53.

Security Fixes and Rewards

Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.

This update includes 33 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.

[$7500][628942] High CVE-2016-5147: Universal XSS in Blink. Credit to anonymous

[$7500][621362] High CVE-2016-5148: Universal XSS in Blink. Credit to anonymous

Posted by Alex Alemi, Software Engineer

Earlier this week, we announced the latest release of the TF-Slim library for TensorFlow, a lightweight package for defining, training and evaluating models, as well as checkpoints and model definitions for several competitive networks in the field of image classification.

Residual connections allow shortcuts in the model and have allowed researchers to successfully train even deeper neural networks, which has led to even better performance. They have also enabled significant simplification of the Inception blocks. Just compare the model architectures in the figures below:

Schematic diagram of Inception V3

Schematic diagram of Inception-ResNet-v2

At the top of the second Inception-ResNet-v2 figure, you'll see the full network expanded. Notice that this network is considerably deeper than the previous Inception V3. Below, in the main figure, is an easier-to-read version of the same network, where the repeated residual blocks have been compressed. Here, notice that the Inception blocks have been simplified, containing fewer parallel towers than in the previous Inception V3.

The Inception-ResNet-v2 architecture is more accurate than previous state-of-the-art models, as shown in the table below, which reports the Top-1 and Top-5 validation accuracies on the ILSVRC 2012 image classification benchmark based on a single crop of the image. Furthermore, this new model requires only roughly twice the memory and computation of Inception V3.

As an example, while both Inception V3 and Inception-ResNet-v2 models excel at identifying individual dog breeds, the new model does noticeably better. For instance, whereas the old model mistakenly reported Alaskan Malamute for the picture on the right, the new Inception-ResNet-v2 model correctly identifies the dog breeds in both images.

We are excited to see what the community does with this improved model, following along as people adapt it and compare its performance on various tasks. Want to get started? See the accompanying instructions on how to train, evaluate or fine-tune a network.

As always, releasing the code was a team effort. Specific thanks are due to:

Hutch is a London-based mobile studio focusing entirely on racing games, with
more than 10 million players on Google Play. For their latest game, MMX Hill
Climb, they used A/B testing and game analytics to improve the game's design
and experience, resulting in more than 48 minutes of daily usage per user.

Watch Shaun Rutland, CEO, and Robin Scannell, Games Analyst, explain how they
were able to deliver a more engaging user experience in this video.

As a web engineer at Google, I've been creating scaled systems for internal teams and customers for the past five years. Often these include web front-end and back-end components. I would like to share with you a story about creating a bespoke machine learning (ML) system using the Google Cloud Platform stack — and hopefully inspire you to build some really cool web apps of your own.

The story starts with my curiosity for computer vision. I've been fascinated with this area for a long time. Some of you may have even seen my public posts from my personal experiments, where I strive to find the most simple solution to achieve a desired result. I'm a big fan of simplicity, especially as the complexity of my projects has increased over the years. A good friend once said to me, “Simplicity is a complex art,” and after ten years in the industry, I can say that this is most certainly true.

Some of my early experiments in computer vision attempting to isolate movement

My background is as a web engineer and computer scientist, getting my start back in 2004 on popular stacks of the day like LAMP. Then, in 2011 I joined Google and was introduced to the Google Cloud stack, namely Google App Engine. I found that having a system that dealt with scaling and distribution was a massive time saver, and have been hooked on App Engine ever since.

But things have come a long way since 2011. Recently, I was involved in a project to create a web-based machine learning system using TensorFlow. Let’s look at some of the newer Google Cloud technologies that I used to create it.

Problem: how to guarantee job execution for both long-running and shorter, time-critical tasks

Earlier in the year I was learning how to use TensorFlow — an open source software library for machine intelligence developed by Google (which is well worth checking out by the way). Once I figured out how to get TensorFlow working on Google Compute Engine, I soon realized this thing was not going to scale on its own — several components needed to be split out into their own servers to distribute the load.

Initial design and problem

In my application, retraining parts of a deep neural network was taking about 30 minutes per job on average. Given the potential for long running jobs, I wanted to provide status updates in real-time to the user to keep them informed of progress.

I also needed to analyze images using classifiers that had already been trained, which typically takes less than 100ms per job. I could not have these shorter jobs blocked by the longer running 30-minute ones.

It was possible to create a Compute Engine auto-scaling pool of up to 10 instances depending on demand, but if 10 long-running training tasks were requested, then there wouldn’t be any instances available for classification or file upload tasks.

Due to budget constraints for the project, I couldn’t fire up more than 10 instances at a time.

Database options
In addition to supporting many different kinds of workloads, this application needed to store persistent data. A number of databases support this, the most obvious of which is Google Cloud SQL. However, I had several concerns with this approach:

Time investment. Using Cloud SQL would have meant writing all the database integration code myself, and I needed to provide a working prototype ASAP.

Security. Cloud SQL integration would have required the Google Compute Engine instances to have direct access to the core database, which I did not want to expose.

Heterogeneous jobs. It's 2016; surely something already exists that solves this problem and can work with different job types?

My solution was to use Firebase, Google's backend-as-a-service offering for creating mobile and web applications. Firebase allowed me to use its existing API to persist data as JSON objects (perfect for my Node.js-based server), let the client listen for changes to the database (perfect for communicating status updates on jobs), and did not require tightly coupled integration with my core Cloud SQL database.

My Google Cloud Platform stack

I ended up splitting the server into three pools that were highly specialized for a specific task: one for classification, one for training, and one for file upload. Here are the cloud technologies I used for each task:

Firebase
I had been eyeing an opportunity to use Firebase on a project for quite some time after speaking with James Tamplin and his team. One key feature of Firebase is that it allows you to create a real-time database in minutes. That’s right, real time, with support for listening for updates to any part of it, just using JavaScript. And yes, you can write a working chat application in less than 5 minutes using Firebase! This would be perfect for real-time job status updates as I could just have the front-end listen for changes to the job in question and refresh the GUI. What’s more, all the websockets and DB fun is handled for you, so I just needed to pass JSON objects around using a super easy-to-use API — Firebase even handles going offline, syncing when connectivity is restored.
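The listen-for-changes pattern described above can be sketched with a tiny in-process observer. This is illustrative only: the real Firebase client additionally handles websockets, offline mode and syncing, and `JobStatusStore` with its method names is a hypothetical stand-in, not the Firebase API:

```python
class JobStatusStore:
    """Minimal stand-in for a realtime database node: write a value, notify listeners."""

    def __init__(self):
        self._data = {}
        self._listeners = []

    def on_change(self, callback):
        # Register a listener for updates, in the spirit of Firebase's value listeners.
        self._listeners.append(callback)

    def update(self, job_id, status):
        # Persist the JSON-like status and push the change to every listener.
        self._data[job_id] = status
        for cb in self._listeners:
            cb(job_id, status)

store = JobStatusStore()
seen = []
# A real front-end would refresh the GUI here; we just record what arrived.
store.on_change(lambda job, status: seen.append((job, status)))

store.update("job-1", {"state": "training", "progress": 0.5})
store.update("job-1", {"state": "done", "progress": 1.0})
print(seen[-1])  # -> ('job-1', {'state': 'done', 'progress': 1.0})
```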

Cloud Pub/Sub
My colleagues Robert Kubis and Mete Atamel introduced me to Google Cloud Pub/Sub, Google's managed real-time messaging service. Cloud Pub/Sub essentially allows you to send messages to a central topic, from which your Compute Engine instances can create a subscription and pull or push asynchronously, in a loosely coupled manner. This guarantees that all jobs will eventually run once capacity becomes available, and it all happens behind the scenes, so you don't have to worry about retrying jobs yourself. It's a massive time-saver.

Any number of endpoints can be Cloud Pub/Sub publishers and pull subscribers
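The loose coupling described above can be sketched with a plain in-process queue. This is purely illustrative (a stdlib stand-in, not the Cloud Pub/Sub client), but it shows the key property: publishers fire and forget, workers pull only when they have capacity, and every job eventually runs:

```python
import queue
import threading

topic = queue.Queue()   # stands in for a Pub/Sub topic
results = []

def publish(job):
    topic.put(job)      # publisher: fire and forget

def worker():
    while True:
        job = topic.get()       # subscriber: pull when capacity frees up
        if job is None:         # shutdown sentinel
            break
        results.append("done:" + job)
        topic.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

for j in ["train", "classify", "upload"]:
    publish(j)

topic.join()            # block until every published job has been handled
for _ in threads:
    topic.put(None)     # stop the workers
for t in threads:
    t.join()

print(sorted(results))  # -> ['done:classify', 'done:train', 'done:upload']
```

Two workers drain three jobs here; in the real system, pulling only on free capacity is what kept the 30-minute training jobs from being dropped when all instances were busy.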

App Engine
This is where I hosted and delivered my front-end web application — all of the HTML, CSS, JavaScript and theme assets are stored here and scaled automatically on demand. Even better, App Engine is a managed platform with built-in security and auto-scaling as you code against the App Engine APIs in your preferred language (Java, Python, PHP, etc.). The APIs also provide access to advanced functionality, such as Memcache, Cloud SQL and more, without your having to worry about how to scale them as load increases.

Compute Engine with AutoScaling
Compute Engine is probably what most web devs are familiar with. It’s a server on which you can install your OS of choice and get full root access to that instance. The instances are fully customizable (you can configure how many vCPUs you desire, as well as RAM and storage) and are charged by the minute — for added cost savings when you scale up and down with demand. Clearly, having root access means you can do pretty much anything you could dream of on these machines, and this is where I chose to run my TensorFlow environment. Compute Engine also benefits from autoscaling, increasing and decreasing the number of available Compute Engine instances with demand or according to a custom metric. For my use case, I had an autoscaler ranging from 2 to 10 instances at any given time, depending on average CPU usage.
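The scaling policy described here (a pool of 2 to 10 instances driven by average CPU) can be sketched as a simple target-tracking rule. The 60% utilization target below is an assumed figure for illustration, not one from the post:

```python
MIN_INSTANCES, MAX_INSTANCES = 2, 10
TARGET_CPU_PCT = 60  # assumed utilization target, in percent

def desired_instances(current, avg_cpu_pct):
    # Target tracking: pick the pool size that would bring average CPU
    # back to the target, then clamp to the autoscaler's bounds.
    wanted = (current * avg_cpu_pct + TARGET_CPU_PCT - 1) // TARGET_CPU_PCT  # ceil
    return max(MIN_INSTANCES, min(MAX_INSTANCES, wanted))

print(desired_instances(2, 90))   # overloaded: grow the pool -> 3
print(desired_instances(10, 95))  # already at the cap -> 10
print(desired_instances(4, 10))   # mostly idle: shrink to the floor -> 2
```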

Cloud Storage
Google Cloud Storage is an inexpensive place to store files that are large in both size and number; they are replicated to key edge server locations around the globe, closer to the requesting user. This is where I stored the uploaded files used to train the classifiers in my machine learning system until they were needed.

Network Load Balancer
My JavaScript application was making use of a webcam, and I therefore needed to access it over a secure connection (HTTPS). Google’s Network Load Balancer allows you to route traffic to the different Compute Engine clusters that you have defined. In my case, I had a cluster for classifying images, and a cluster for training new classifiers, and so depending on what was being requested, I could route that request to the right backend, all securely, via HTTPS.
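The content-based routing described here can be sketched as a path-to-pool map; the paths and pool names below are hypothetical, for illustration only:

```python
# Hypothetical request-path prefixes mapped to the two Compute Engine pools.
ROUTES = {
    "/classify": "classification-pool",
    "/train": "training-pool",
}

def route(path):
    """Return the backend pool for a request path, or None if nothing matches."""
    for prefix, pool in ROUTES.items():
        if path.startswith(prefix):
            return pool
    return None

print(route("/classify?image=42"))  # -> classification-pool
print(route("/train"))              # -> training-pool
```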

Putting it all together

After putting all these components together, my system architecture looked roughly like this:

While this worked very well, some parts were redundant. I discovered that the Google Compute Engine Upload Pool code could be re-written to just run on App Engine in Java, pushing directly to Cloud Storage, thus taking out the need for an extra pool of Compute Engine instances. Woohoo!

In addition, now that I was using App Engine, the custom SSL load balancer was also redundant as App Engine itself could simply push new jobs to Pub/Sub internally, and deliver any front-end assets over HTTPS out of the box via appspot.com. Thus, the final architecture should look as follows if deploying on Google’s appspot.com:

Reducing the complexity of the architecture makes it easier to maintain and adds to the cost savings.

Conclusion

By using Pub/Sub and Firebase, I estimate I saved well over a week's development time, allowing me to jump in and solve the problem at hand in a short timeframe. Even better, the prototype scaled with demand and ensured that all jobs would eventually be served, even when running at the maximum capacity the budget allowed.

The Google Cloud Platform stack gives web developers a great toolkit for rapidly prototyping full end-to-end systems, while aiding security and scalability for the future. I highly recommend you try it out for yourself.

If you've not heard of the Closure Compiler, it's a JavaScript optimizer, transpiler and type checker, which compiles your code into a high-performance, minified version. Nearly every web frontend at Google uses it to serve the smallest, fastest code possible.

It supports new ES2015 features such as let, const, and arrow functions, and provides polyfills for ES2015 methods not supported everywhere. To help you write better, more maintainable and scalable code, the compiler also checks syntax and correct use of types, and warns about many JavaScript gotchas. To find out more about the compiler itself, including tutorials, head to Google Developers.

How does this work?

This isn't a rewrite of Closure in JavaScript. Instead, we compile the Java source to JS to run under Node, or even inside a plain old browser. Every post or resource you see about Closure Compiler will also apply to this version.

Note that the JS version is experimental. It may not perform in the same way as the native Java version, but we believe it's an interesting new addition to the compiler landscape, and the Closure team will be working to improve and support it over time.

How can I use it?

To include the JS version of Closure Compiler in your project, add it as a dependency of your project via NPM.

If you'd like to migrate from google-closure-compiler (which requires Java), you'll have to use gulp.src() or equivalents to load your JavaScript before it can be compiled. As this compiler runs in pure JavaScript, the compiler cannot load or save files from your filesystem directly.

For more information, check out Usage, supported Flags, or a demo project. Not all flags supported in the Java release are currently available in this experimental version. However, the compiler will let you know via exception if you've hit any missing ones.

Whether you were enjoying a Classic Rock Summer with a campfire cookout or having the Best Summer Ever while chilling by the pool, Google Play Music helped you find a selection of summer hits to match your mood and activities, both day and night.

As we close out this year’s season of sun and fun, Google Play Music is releasing our official 2016 Songs of the Summer list, featuring the most streamed songs on our music service.

Drake took “controlla” and rose to the top of the Google Play Music charts with just “One Dance.”

Based on data compiled from Memorial Day Weekend through August 28, the Canadian hip-hop artist's song with WizKid and Kyla is the most popular Song of the Summer on both Google Play Music's U.S. and global lists. The Chainsmokers didn't let anyone down as their song took the #2 slot, and Twenty One Pilots are not "stressed out," as fans came along for the "Ride" to help two of their songs make the top 10.

The full list of Google Play’s most streamed songs of summer 2016 is below and if you still “need a one dance” before the summer ends, check out the 2016 Summer Dance Party playlist on Google Play featuring these top tracks.

Google Play Global Top 10 Streamed Songs of Summer 2016

From Tokyo to Toronto, Google Play Music listeners in over 60 countries wanted “One Dance” with Drake.