Summary of “Why We May Soon Be Living in Alexa’s World”

To find out, I recently dived headlong into Alexa’s world.
Late-night shrieks notwithstanding, one day very soon, Alexa or something like it will be everywhere – and computing will be better for it.
At least 50 devices are now powered by Alexa, and more keep coming.
Many don’t include some of Alexa’s key functions – I tested devices that don’t let you set reminders, one of the main reasons to use Alexa.
Alexa on my Echo is the same as Alexa on my TV is the same as Alexa on my Sonos speaker.
Ford – the first of several carmakers to offer Alexa integration in its vehicles – lent me an F-150 pickup outfitted with Alexa.
Google, which is alive to the worry that Alexa will outpace it in the assistant game, is also offering its Google Assistant to other device makers.
Sonos now integrates with Alexa, and is planning to add Google Assistant soon.

The original article.

Summary of “The Aggregator Paradox – Stratechery by Ben Thompson”

The implication of Facebook and Google effectively taking all digital ad growth is that publishers increasingly can’t breathe, and while that is neither company’s responsibility on an individual publisher basis, it is a problem in aggregate, as Instant Articles is demonstrating.
A core idea of Aggregation Theory is that suppliers – in the case of Google and Facebook, that is publishers – commoditize themselves to fit into the modular framework that is their only route to end users owned by the aggregator.
For all of the criticism Facebook has received for its approach to publishers generally and around Instant Articles specifically, it seems likely that the company’s biggest mistake was that it did not leverage its power in the way that Google was more than willing to.
Beginning Thursday, Google Chrome, the world’s most popular web browser, will begin flagging advertising formats that fail to meet standards adopted by the Coalition for Better Ads, a group of advertising, tech and publishing companies, including Google, a unit of Alphabet Inc. Sites with unacceptable ad formats – annoying ads like pop-ups, auto-playing video ads with sound and flashing animated ads – will receive a warning that they’re in violation of the standards.
Nothing quite captures the relationship between suppliers and their aggregator like the expression of optimism that one of the companies actually destroying the viability of digital advertising for publishers will actually save it; then again, that is why Google’s carrots, while perhaps less effective than its sticks, are critical to making an ecosystem work.
There is no better example than Google’s actions with AMP and Chrome ad-blocking: Google is quite explicitly dictating exactly how its suppliers will access its customers, and it is hard to argue that the experience is not significantly better because of it.
At the same time, what Google is doing seems nakedly uncompetitive – thus the paradox.
Yes, consumers are giving up their data, but even there Google has the user experience advantage: consumer data is far safer with Google than it is with random third party ad networks desperate to make their quarterly numbers.

The original article.

Summary of “The Case Against Google”

In other words, it’s very likely you love Google, or are at least fond of Google, or hardly think about Google, the same way you hardly think about water systems or traffic lights or any of the other things you rely on every day.
Shivaun would sit at her computer, exhausted, Googling phrase after phrase – How do you lift a Google website penalty? Who at Google reviews mistakes? Google and deindexed and phone number and help – hoping that some magic combination of words might yield a new solution.
Skyhook’s accuracy “is better than ours,” one Google manager speculated in an internal email later revealed in a lawsuit filed by Skyhook against Google.
Skyhook sued Google, and though one suit was dismissed, Google ended up paying $90 million to settle a patent-infringement claim.
Yelp complained – to Google and later to the F.T.C. – but Google said the only alternative was for Yelp to remove its content from Google altogether, according to documents filed with federal regulators.
In 2013, Google adjusted how it displayed images so that rather than directing people to Getty’s website, users could easily see and download Getty’s high-definition images from Google itself.
TradeComet.com, which operated a vertical-search engine for finding business products, initially prospered by buying ads on Google, but as the site grew, Google “raised my prices by 10,000 percent, which strangled our business virtually overnight,” the company’s C.E.O. at the time, Dan Savage, said when he filed an antitrust lawsuit in 2009.
Whereas a decade earlier someone searching for steakhouses would have seen a long list of websites, now the most noticeable results pointed to Google’s own listings, including Google Maps, Google local search or advertisers paying Google.

The original article.

Summary of “Google Flights will now predict airline delays – before the airlines do”

Google explains that the combination of data and A.I. technologies means it can predict some delays in advance of any sort of official confirmation.
Google says that it won’t actually flag these in the app until it’s at least 80 percent confident in the prediction, though.
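The gating behavior described above can be sketched in a few lines. This is a hypothetical illustration, not Google’s actual system: the function name, the flight data, and the exact comparison are all assumptions; only the 80 percent threshold comes from the article.

```python
# Hypothetical sketch: surface a delay prediction to the user only when
# the model's confidence in it reaches 80 percent (the threshold Google
# cites). Everything else here -- names, data, structure -- is made up.

CONFIDENCE_THRESHOLD = 0.80  # the "at least 80 percent confident" rule

def should_flag_delay(predicted_delay_minutes: int, confidence: float) -> bool:
    """Return True only for delays the model is confident enough to show."""
    return predicted_delay_minutes > 0 and confidence >= CONFIDENCE_THRESHOLD

# Illustrative predictions; the second is suppressed as too uncertain.
predictions = [
    {"flight": "UA123", "delay": 45, "confidence": 0.91},
    {"flight": "DL456", "delay": 30, "confidence": 0.62},
]
flagged = [p["flight"] for p in predictions
           if should_flag_delay(p["delay"], p["confidence"])]
print(flagged)  # ['UA123']
```

The point of the threshold is asymmetry of cost: wrongly telling a traveler their on-time flight is delayed is worse than staying silent about a delay the model merely suspects.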
You can track the status of your flight by searching for your flight number or the airline and flight route, notes Google.
Google Flights will now display the restrictions associated with these fares – like restrictions on using overhead space or the ability to select a seat, as well as the fare’s additional baggage fees.
It’s initially doing so for American, Delta and United flights worldwide.
These changes come only a month after Google added price tracking and deals to Google Flights, as well as hotel search features for web searchers.
The additions seem especially targeted toward today’s travel startups and businesses, like Hopper, which had just added hotel search and uses big data to analyze airline prices and other factors; or TripIt, a competitor of sorts to Google’s own travel app Google Trips, which most recently introduced security checkpoint wait times.
The features are also a real-world demo of Google’s machine learning and big data capabilities, especially in the case of predicting flight delays.

The original article.

Summary of “Smart homes and vegetable peelers”

If you do need to interact deliberately, is voice or a screen the right model – and does that mean a screen on the device itself or just your phone? An oven that lets you tell it what you’re cooking might want a screen on the device, but also be accessed from your phone to check progress, and also talk to Alexa: ‘pre-heat the oven to 350 degrees please, and turn it off 30 minutes after I put the dish in’.
You can keep a garage door opener for 20 years or buy a new smart one now, but no one will replace a two-year-old fridge just to get a smart one.
Many of these device categories will be commodity products using commodity components – some categories will have 50 companies making near-identical devices.
Is there a network effect? A cloud service? Something with the use of aggregated data across all the devices? Or, do you have a route-to-market advantage? If not, then your whole category will probably go to the incumbents – generic ‘consumer electronics’ devices will go to Shenzhen and washing machines will go to the washing machine companies, where smart becomes just another high-end feature.
Self-evidently, Amazon and Google make little to no money from selling cheap smart speakers per se, nor from the sale of smart devices with their tech embedded.
Rather, controlling the smart home is a use-case to get you to buy the device, and making the device into the hub of a smart home makes it sticky, but the value of the device to Google or Apple is something else.
The point is not really sales of the device, nor the smart home, but the leverage to their ecosystems, in some way, that it provides.
Even if voice and smart speakers are very, very important, that doesn’t necessarily mean that Alexa or anyone else will run away with the space.

The original article.

Summary of “Everyone Hates Setting Goals. Here’s How Google Makes It Easier for Its Employees”

On Google’s re:Work site, a resource that shares the company’s perspective on people operations, Google explains the concept.
Objectives are the “Big picture.” They answer the questions “Where do we want to go?” and “What do we want to do?” Also, objectives are where Google encourages its employees to stretch themselves, be ambitious, and embrace uncertainty.
If you don’t get nervous or feel a little uncomfortable after setting a goal, then you haven’t reached high enough.
Because they are designed to stretch employees, Google recommends only three to five objectives total.
Anything more, and Google knows that it runs the risk of spreading employees too thin.
What Google warns against are goals that don’t “push for new achievements.” The examples it shares are: “Keep hiring,” “Maintain market position,” or “Continue doing X.”
Google includes everyone’s goals on their internal directory.
In a YouTube video that explains how Google uses OKRs, Rick Klau provided some additional clarity on the process: “Personal OKRs define what the person is working on. Team OKRs define priorities for the team, not just a collection of individual OKRs. Company OKRs are big picture, a top-level focus for the entire company.”

The original article.

Summary of “Google is using 46 billion data points to predict a hospital patient’s future”

Some of Google’s top AI researchers are trying to predict your medical outcome as soon as you’re admitted to the hospital.
To conduct the study, Google obtained de-identified data of 216,221 adults, with more than 46 billion data points between them.
The data span 11 combined years at two hospitals, University of California San Francisco Medical Center and University of Chicago Medicine.
The biggest challenge for AI researchers looking to train their algorithms on electronic health records, the source of the data, is the vast, disparate, and poorly labeled pieces of data contained in a patient’s file, the researchers write.
In addition to data points from tests, written notes have traditionally been difficult for automated systems to comprehend; each doctor and nurse writes differently and can take different styles of notes.
To compensate for this, the Google approach relies on three complex deep neural networks that learn from all the data and work out which bits are most impactful to final outcomes.
After analyzing thousands of patients, the system identified which words and events associated closest with outcomes, and learned to pay less attention to what it determined to be extraneous data.
Google heavy-hitters like Quoc Le, credited with creating recurrent neural networks used for predictions based on time, and Jeff Dean, a legend at the company for his work on Google’s server infrastructure, are both on the paper, as well as Greg Corrado, a director at the company involved in high-profile projects like translation and its Smart Reply feature.

The original article.

Summary of “The Shallowness of Google Translate”

Frank would write a message in English, then run it through Google Translate to produce a new text in Danish; conversely, she would write a message in Danish, then let Google Translate anglicize it.
After reading about how the old idea of artificial neural networks, recently adopted by a branch of Google called Google Brain, and now enhanced by “deep learning,” has resulted in a new kind of software that has allegedly revolutionized machine translation, I decided I had to check out the latest incarnation of Google Translate.
I’ll focus first on “the ‘odd.’” This corresponds to the German “die ‘Ungeraden,’” which here means “politically undesirable people.” Google Translate had a reason – a very simple statistical reason – for choosing the word “odd.” Namely, in its huge bilingual database, the word “ungerade” was almost always translated as “odd.” Although the engine didn’t realize why this was the case, I can tell you why.
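The “very simple statistical reason” above can be made concrete with a toy sketch. This is an illustration of the general idea, not Google Translate’s actual machinery: the tiny corpus of aligned word pairs is invented for the example.

```python
# Toy illustration of frequency-based word choice: pick whichever target
# word a source word was most often aligned with in a bilingual database.
# The aligned pairs below are invented; real systems mine millions.
from collections import Counter

aligned_pairs = [
    ("ungerade", "odd"), ("ungerade", "odd"), ("ungerade", "odd"),
    ("ungerade", "uneven"),
]

def most_frequent_translation(source_word, pairs):
    """Return the most common translation of source_word, or None."""
    counts = Counter(target for source, target in pairs if source == source_word)
    return counts.most_common(1)[0][0] if counts else None

print(most_frequent_translation("ungerade", aligned_pairs))  # odd
```

A picker like this has no idea *why* “ungerade” usually maps to “odd” – which is exactly the blindness the passage goes on to describe.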
It’s hard for a human, with a lifetime of experience and understanding and of using words in a meaningful way, to realize how devoid of content all the words thrown onto the screen by Google Translate are.
In order to make sure that my readers steer clear of this trap, let me quote some phrases from a few paragraphs up – namely, “Google Translate did not understand,” “it did not realize,” and “Google Translate didn’t have the foggiest idea.” Paradoxically, these phrases, despite harping on the lack of understanding, almost suggest that Google Translate might at least sometimes be capable of understanding what a word or a phrase or a sentence means, or is about.
It’s too bad Google Translate couldn’t avail itself of the services of Google Search as I did, isn’t it? But then again, Google Translate can’t understand web pages, although it can translate them in the twinkling of an eye.
A whole paragraph or two may come out superbly, giving the illusion that Google Translate knows what it is doing, understands what it is “reading.” In such cases, Google Translate seems truly impressive – almost human! Praise is certainly due to its creators and their collective hard work.
Despite my negativism, Google Translate offers a service many people value highly: It effects quick-and-dirty conversions of meaningful passages written in language A into not necessarily meaningful strings of words in language B. As long as the text in language B is somewhat comprehensible, many people feel perfectly satisfied with the end product.

The original article.

Summary of “The Google Arts & Culture App and the Rise of the “Coded Gaze””

In December, Google introduced a feature to its Arts & Culture app that allows you to take a selfie with your phone and use it to generate results from the company’s image database for your own museum doppelgänger.
Last week, as more and more users discovered the feature, Arts & Culture briefly became the most downloaded app in the iTunes store.
When Amit Sood, the president of Google Arts & Culture, discussed the app with Bloomberg Technology, on Monday, he spoke of it in downright Bergerian terms.
As it happened, the Arts & Culture app did better than I’d expected.
In one way, the art selfie app might be seen as a fulfillment of Berger’s effort to demystify the art of the past.
In Berger’s story this flattening represents the people prying away power from “a cultural hierarchy of relic specialists.” Google Arts & Culture is overseen by a new cadre of specialists: the programmers and technology executives responsible for the coded gaze.
Today the Google Cultural Institute, which released the Arts & Culture app, boasts more than forty-five thousand art works scanned in partnership with over sixty museums.
What does it mean that our cultural history, like everything else, is increasingly under the watchful eye of a giant corporation whose business model rests on data mining? One dystopian possibility offered by critics in the wake of the Google selfie app was that Google was using all of the millions of unflattering photos to train its algorithms.

The original article.

Summary of “People love Google’s new feature that matches your selfie to a famous painting”

Do you look like the Mona Lisa? Or maybe more of an American Gothic?
Social media is being flooded with Google’s opinions, at least, as part of a new feature that compares a user’s selfie with the company’s catalog of historical artworks, looking for the just-perfect doppelganger.
The update to the Google Arts & Culture App has catapulted it to the most-downloaded free app on the App Store.
It claimed the No. 1 spot in the U.S. on Saturday, according to the app metrics site App Annie.
How does Google do it? The app uses computer-vision tech to examine what is similar about your face to the thousands of pieces of art that are shared with Google by museums and other institutions.
Google says this new feature is merely experimental – the app has been around since 2016.
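The “examine what is similar about your face” step can be sketched as nearest-neighbor search over feature vectors. Everything here is an assumption for illustration – the embeddings, the artwork names, and the use of cosine similarity – since Google has not published the details of its pipeline.

```python
# Minimal sketch of doppelganger matching: represent each face as a
# feature vector and return the artwork whose vector is most similar
# under cosine similarity. Vectors and artworks below are invented.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical precomputed embeddings for the museum catalog.
artwork_embeddings = {
    "Mona Lisa": [0.9, 0.1, 0.3],
    "American Gothic": [0.2, 0.8, 0.5],
}

def best_match(selfie_embedding):
    """Return the catalog artwork most similar to the selfie."""
    return max(artwork_embeddings,
               key=lambda name: cosine(selfie_embedding, artwork_embeddings[name]))

print(best_match([0.85, 0.15, 0.25]))  # Mona Lisa
```

In practice the embedding would come from a face-recognition network and the search would use an approximate nearest-neighbor index over thousands of artworks, but the comparison step is the same shape.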

The original article.