
With a feature called Shush, Android P will automatically silence your calls and notifications when you flip your phone over, screen side down. To put the phone down, you just … put the phone down. “We heard from people that they checked their phone right before bed, and before they knew it, an hour or two went by,” says Sameer Samat, VP of product management at Google. Google and Apple have both already introduced warm, color-shifting night modes so that your phone’s blue light doesn’t disrupt your natural sleep cycle. With Digital Wellbeing, Google is going further with a feature called Wind Down mode, which turns your phone gray. You set Wind Down for when you’d like to go to bed, and Android P will shift into a grayscale palette that takes some of that slot-machine-style delight out of your phone. Android P will also have a personalized data visualization of your actual phone usage, from how many times you checked it in a day to how many push notifications you received. Exactly how specific this tracking will get is still a bit unclear (I can imagine something like “John spent six hours watching anime cartoons, and 15 minutes watching geometry proofs”), but Google wants to push what it calls “meaningful engagement,” not just garbage time on your phone.

“Data is very polluting,” says Joana Moll, an artist-researcher whose work investigates the physicality of the internet. “Almost nobody recalls that the internet is made up of interconnected physical infrastructures which consume natural resources,” Moll writes in an introduction to the project. CO2GLE uses 2015 internet traffic data, Moll says, and is based on the assumption that Google.com “processes an approximate average of 47,000 requests every second, which represents an estimated amount of 500 kg of CO2 emissions per second.” That works out to roughly 0.01 kg per request. One estimate from British environmental consultancy Carbonfootprint puts it between 1 g and 10 g of CO2 per Google search. Speaking at a media conference in Barcelona last week, Moll showed another visualization, which she calls “DEFOOOOOOOOOOOOOOOOOOOOOREST,” to drive home the point. Moll’s research focused on Google because of its scale, but other websites also contribute to the internet’s carbon footprint. “What I’m really trying to do is to trigger thoughts and reflections on the materiality of data and the materiality of our direct usage of the internet,” Moll says. “To calculate the CO2 of the internet is really complicated. It’s the biggest infrastructure ever built by humanity, and it involves too many actors. But these are numbers that can serve to raise awareness.”
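The arithmetic behind Moll’s figure is easy to check. A minimal sketch, using only the numbers quoted above (the 47,000 requests/second and 500 kg/second are Moll’s 2015-era assumptions; the 1–10 g range is Carbonfootprint’s separate estimate):

```python
# Back-of-the-envelope check of the CO2GLE figures quoted above.
REQUESTS_PER_SECOND = 47_000   # Moll's assumed average Google.com request rate
CO2_KG_PER_SECOND = 500        # CO2 emissions Moll attributes to that traffic

kg_per_request = CO2_KG_PER_SECOND / REQUESTS_PER_SECOND
grams_per_request = kg_per_request * 1000

print(f"{kg_per_request:.4f} kg (~{grams_per_request:.1f} g) of CO2 per request")
```

This yields roughly 0.0106 kg, or about 10.6 g per request, which sits right at the top of Carbonfootprint’s 1–10 g per-search range.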

Does Google have an obligation to tell people they’re talking to a machine? Does technology that mimics humans erode our trust in what we see and hear? And is this another example of tech privilege, where those in the know can offload boring conversations they don’t want to have onto a machine, while those receiving the calls have to deal with some idiot robot? Onstage, Google didn’t talk much about the details of how the feature, called Duplex, works, but an accompanying blog post adds some important context. Mark Riedl, an associate professor of AI at Georgia Tech who specializes in computer narratives, told The Verge that he thought Google’s Assistant would probably work “reasonably well,” but only in formulaic situations. In its blog post, Google says Duplex has a “self-monitoring capability” that allows it to recognize when conversations have moved beyond its capabilities. “In these cases, it signals to a human operator, who can complete the task,” says Google. Speaking to The Verge, Google went further, and said it definitely believes it has a responsibility to inform individuals. Google tells The Verge it hopes a set of social norms will organically evolve that make it clear when the caller is an AI. “We should make AI sound different from humans for the same reason we put a smelly additive in normally odorless natural gas,” Travis Korte wrote on May 8, 2018. Joanna Bryson, an associate professor at the University of Bath who studies AI ethics, told The Verge that Google has an obvious obligation to disclose this information.

Even though Apple’s developer conference is still a few weeks away, I think it’s safe to say that the demo of Google Duplex at yesterday’s Google I/O keynote will go down as the most impressive of the tech conference season. In Google’s view, computers help you get things done – and save you time – by doing things for you. Zuckerberg, as so often seems to be the case with Facebook, comes across as a somewhat more fervent and definitely more creepy version of Google: not only does Facebook want to do things for you, it wants to do things its chief executive explicitly says would not be done otherwise. The messianic fervor that seems to have overtaken Zuckerberg in the last year simply means that Facebook has adopted a more extreme version of the same philosophy that guides Google: computers doing things for people. Pichai, in the opening of Google’s keynote, acknowledged that “we feel a deep sense of responsibility to get this right,” but inherent in that statement is the centrality of Google generally and the direct culpability of its managers. There is certainly an argument to be made that these two philosophies arise out of their historical context; it is no accident that Apple and Microsoft, the two “bicycle of the mind” companies, were founded only a year apart, and for decades had broadly similar business models: sure, Microsoft licensed software, while Apple sold software-differentiated hardware, but both were and are at their core personal computer companies and, by extension, platforms. Aggregators, on the other hand, particularly Google and Facebook, deal in information, and ads are simply another type of information. Still, that doesn’t make the two philosophies any less real: Google and Facebook have always been predicated on doing things for the user, just as Microsoft and Apple have been built on enabling users and developers to make things completely unforeseen.

Google I/O, the search giant’s annual developer conference, kicked off on Tuesday. It began with a two-hour presentation from Google in which top executives took the stage at the Shoreline Amphitheatre in Mountain View, California, to showcase the latest developments in Android, Google Assistant, Google Maps, Google Photos, artificial intelligence, and much more. While there were a ton of announcements, these were the 15 biggest highlights from the Google I/O keynote.

Waymo’s engineers are modeling not only how cars recognize objects in the road, for example, but how human behavior affects how cars should behave. Her role is to ensure our interactions with Waymo’s self-driving cars – as pedestrians, as passengers, as fellow drivers – are wholly positive. A year later, Google’s self-driving car project “graduated” and became an independent company called Waymo. AI specialists from the Google Brain team regularly collaborate with Dolgov and his fellow engineers at Waymo on methods to improve the accuracy of its self-driving cars. Waymo doesn’t have a monopoly on machines with brains. If Waymo wants its driverless cars to be smart enough to operate in any environment and under any conditions – defined as Level 5 autonomy – it needs a powerful enough infrastructure to scale its self-driving system. The future of AI at Waymo isn’t sentient vehicles. These days, the most challenging driving environments require self-driving cars to make guidance decisions without white lines, Botts’ Dots, or clear demarcations at the edge of the road. If Waymo can build machine learning models to train its neural nets to drive on streets with unclear markings, then Waymo’s self-driving cars can put the Phoenix suburbs in their rearview and eventually hit the open road.

In terms of how it actually feels to use an Android device day to day, it could be the biggest update in years. Later versions of the Android P beta may add an unpause option to the pop-up, but for the first public beta, Google wanted to go to the full extreme to see how users felt about it. Dave Burke, VP of engineering for Android, says that the changes to navigation on Android were made in the name of “making Android simpler but also more approachable.” It’s a counterintuitive way to describe the new system. There are even existing Android phones that have already been working on something like what Android P does. In Android P, Google is still trying to use its skills in AI and machine learning to make Android smarter, but it’s setting its sights on easier problems. It can work locally or in conjunction with cloud services – and it will operate on both Android and iOS. The specter that hangs over every Android release is that everybody knows it will be months – if not more than a year – before most phones will get it. The Android P public beta is available right away, though, and it’s coming to seven different third-party Android devices. The most important change to Android might not be what’s in Android P, but instead the new update foundation laid down last year.

Google just wrapped up its 2018 I/O keynote, and today’s event was jam-packed with news. New Google Assistant voices: Google’s virtual assistant is getting some more voice variety. Google Duplex: Perhaps the most jaw-dropping moment of today’s keynote came when Sundar Pichai played back a recording of Google Assistant calling a hair salon and making an appointment in a conversation that legitimately sounded like two humans talking to each other. Gmail can now draft emails for you by itself: Google is expanding on its helpful Smart Reply feature with a more ambitious idea, Smart Compose. Google Photos gets even smarter editing powers: Google Photos is gaining new features like the ability to separate subjects from the background in photos and pop the color or turn the background black and white. Google News, now curated by AI: Google’s news app is being overhauled, and its editorial focus is now powered largely by AI. The company says “it uses artificial intelligence to analyze all the content published to the web at any moment, and organize all of those articles, videos, and more into storylines. It spots the ones you might be interested in and puts them in your briefing.” News will also deliver “a range of perspectives” to bring you a little bit outside your bubble. Google Lens can copy text from the real world into your phone: This is something Google has demonstrated before, but now it sounds like the feature is ready and actually coming to Google Lens. Google Lens still isn’t perfect at identifying precise items of clothing, but Google thinks it can get close enough.

It’s been speculated for some time that Google has been working on updates for the web-based version of Gmail, and the company is officially moving from tease to truth with its early-morning announcement today. Though the company’s blog post is themed around its G Suite version of Gmail, Google representatives have confirmed that regular ol’ Gmail users will receive the same updates. Here’s a quick look at what you’ll be able to play with today if you opt into the new version of Gmail, which you can do by clicking on the Settings gear in the upper-right corner of Gmail and selecting the option to “Try the new Gmail.” You can now tap on super-tiny icons in a brand-new right-hand sidebar to pull up your Google Calendar, write new notes in Keep, type up to-dos in Tasks, or access other Gmail add-ons you’ve installed. That’s because Google is basically integrating this add-on’s functionality directly into Gmail. It’s just like how you’d tag a friend in a post on Google Plus – you still use Google Plus, right? In addition to seeing a few important stats about the person, assuming you’ve already populated that information in Google Contacts, you’ll be able to click icons to email them, schedule an event with them, send them a Hangouts message, or start up a video chat with them. Google isn’t quite ready to launch all of its new Gmail features just yet.

Prior to the rampage, Aghdam posted hundreds of videos on YouTube, holding forth on subjects such as veganism, bodybuilding, and animal rights. PewDiePie, a Swedish comedian and top YouTube personality, made an off-color joke about Nazis. Because YouTube doesn’t look like social media, it’s tougher to recognize how its most horrifying videos spread. In the fall, when Facebook, Twitter, and Google sent lawyers instead of executives to testify before Congress about Russian meddling in the presidential election, Team Google repeatedly stressed that YouTube and its other properties aren’t really social networks and therefore can’t fall prey to the worst of the internet’s trolls, bots, or propagandists. Over the past year, YouTube has made the most sweeping changes since its early days, removing videos it deemed inappropriate and stripping away the advertising from others. In interviews at the San Bruno complex, YouTube executives often resorted to a civic metaphor: YouTube is like a small town that’s grown so large, so fast, that its municipal systems – its zoning laws, courts, and sanitation crews, if you will – have failed to keep pace. Suddenly, YouTube needed a better system to help viewers navigate the deluge, something that would keep them from feeling overwhelmed and wandering back to the comfort of their TVs. In 2010, YouTube hired French programmer Guillaume Chaslot, who soon began developing algorithms that could better match viewers with videos that would keep them watching. More and more, YouTube was starting to convince advertisers it had become the new TV. Kyncl said as much onstage at Madison Square Garden in 2015 during the company’s annual “Brandcast,” at which executives showcase new YouTube programming in front of the world’s top advertisers. “When the nature of the content is that sensitive, and the video is trending, you expect YouTube to be more on top of their game,” says Aditi Rajvanshi, a former YouTube employee who now consults for YouTube stars.