Summary of “How GDPR Will Transform Digital Marketing”

This month will see the enforcement of a sweeping new set of regulations that could change the face of digital marketing: the European Union’s General Data Protection Regulation, or GDPR. To protect consumers’ privacy and give them greater control over how their data is collected and used, GDPR requires marketers to secure explicit permission for data-use activities within the EU. With new and substantial constraints on what had been largely unregulated data-collection practices, marketers will have to find ways to target digital ads that depend less on hoovering up vast quantities of behavioral data.
While digital marketers are aware of the strict new regulatory regime, seemingly few have taken active steps to address how it will impact their day-to-day operations.
GDPR will force marketers to relinquish much of their dependence on behavioral data collection.
Does this constitute an active and genuine choice? Does it indicate that the user is willing to have her personal data harvested across the digital and physical worlds, on- and off-platform, and have that data used to create a behavioral profile for digital marketing purposes? Almost certainly not.
Other components of GDPR that will make life more difficult and increase operational uncertainty for digital marketers include the ban on automated decision-making in the absence of the individual’s meaningful consent; the new rights afforded to individuals to access, rectify, and erase data about them held by corporations; the prohibition on processing of data pertaining to special protected categories as identified in the regulations; and the stipulation that data collectors must demonstrate compliance with the regulations as a general matter.
Atop all of these measures is the additional requirement that service providers like Facebook and Google make the data they hold on individuals portable.
What will take the place of behavioral data collection to power ad-targeting? How will digital marketers from Dior to the NBA to political campaigns channel the right marketing messages to the right eyeballs at the right times?
Even within Europe such data collection will still be possible, but most likely with aggressively enforced transparency and consumer oversight.

The original article.

Summary of “How the Math Men Overthrew the Mad Men”

They’ve now been eclipsed by the Math Men: the engineers and data scientists whose province is machines, algorithms, pureed data, and artificial intelligence.
To appreciate how alike their aims are, sit in an agency or client marketing meeting and you will hear wails about Facebook and Google’s “walled garden,” their unwillingness to share data on their users.
This preoccupation with Big Data is also revealed by the trend in the advertising-agency business to have the media agency, not the creative Mad Men team, occupy the prime seat in pitches to clients, because it’s the media agency that harvests the data to help advertising clients better aim at potential consumers.
Prowling his London office in jeans, Keith Weed, who oversees marketing and communications for Unilever, one of the world’s largest advertisers, described how mobile phones have elevated data as a marketing tool.
Suddenly, governments in the U.S. are almost as alive to privacy dangers as those in Western Europe, confronting Facebook by asking how the political-data company Cambridge Analytica, employed by Donald Trump’s Presidential campaign, was able to snatch personal data from eighty-seven million individual Facebook profiles.
Advertiser confidence in Facebook was further jolted later in 2016, when it was revealed that the Math Men at Facebook overestimated the average time viewers spent watching video by up to eighty per cent.
In 2017, Math Men took another beating when news broke that Google’s YouTube and Facebook’s machines were inserting friendly ads on unfriendly platforms, including racist sites and porn sites.
The magazine editorialized, in May 2017, that governments must better police the five digital giants (Facebook, Google, Amazon, Apple, and Microsoft) because data were “the oil of the digital era”: “Old ways of thinking about competition, devised in the era of oil, look outdated in what has come to be called the ‘data economy.’” Inevitably, an abundance of data alters the nature of competition, allowing companies to benefit from network effects, with users multiplying and companies amassing wealth to swallow potential competitors.

The original article.

Summary of “Credit score ratings: Is artificial intelligence scoring more fair?”

Credit in China is now in the hands of a company called Alipay, which uses thousands of consumer data points (what they purchase, what type of phone they use, what augmented reality games they play, their friends on social media) to determine a credit score.
The decisions made by algorithmic credit-scoring applications are not only said to be more accurate in predicting risk than traditional scoring methods; their champions argue they are also fairer, because the algorithm is unswayed by the racial, gender, and socioeconomic biases that have skewed access to credit in the past.
Of course, algorithmic credit scoring isn’t confined to emerging credit markets.
As Schulman’s Money2020 speech suggests, algorithmic credit scoring is fueled by a desire to capitalize on the world’s ‘unbanked,’ drawing in billions of customers who, for lack of a traditional financial history, have thus far been excluded.
Algorithmic credit scores might seem futuristic, but these practices do have roots in credit scoring practices of yore.
Early credit agencies, for example, hired human reporters to dig into their customers’ credit histories.
One credit reporter from Buffalo, New York, noted that “Prudence in large transactions with all Jews should be used,” while a reporter in Georgia described a liquor store he was profiling as “a low Negro shop.” Similarly, the Retail Credit Company, founded in 1899, made use of information gathered by Welcome Wagon representatives to collate files on millions of Americans over the next 60 years.
Maybe then we can really see if these systems are giving credit where credit is due.

The original article.

Summary of “This Man Is Building an Armada of Saildrones to Conquer the Ocean”

The robotic vessels come from an Alameda startup called Saildrone Inc. Backed by $90 million in venture capital, Saildrone is a big bet on the market for information about the ocean.
During winter, Jenkins traveled to Montana or Canada and ran his yachts on the ice.
In 2009, so did Jenkins, eager to figure out what came after the land yacht.
For two years, Jenkins and a couple of boat building pals rented a little slice of a warehouse for what amounted to a very niche consulting business.
On launch day for the Shark Cafe trackers, Jenkins, driving a forklift with long, extendable arms, lowers two drones into the bay.
From his iPhone, Jenkins can monitor any saildrone around the world.
Building 1,000, the number Jenkins figures would be enough for round-the-clock assessments of the oceans, could cost upwards of $100 million, though that’s still cheaper than a single NOAA research ship.
Like Jenkins, de Halleux is a sailor, and he’s been coaxed into land yachting by his business partner.

The original article.

Summary of “How the Enlightenment Ends”

What would be the impact on history of self-learning machines-machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? Would these machines learn to communicate with one another? How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them? Were we at the edge of a new phase of human history?
The Enlightenment sought to submit traditional verities to a liberated, analytic human reason.
What will distinguish that new world from the one we have known? How will we live in it? How will we manage AI, improve it, or at the very least prevent it from doing harm, culminating in the most ominous concern: that AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence and the human condition itself as it turns it into data.
Are these moves beyond the capacity of the human brain? Or could humans learn them now that they have been demonstrated by a new master?
If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do.
If its computational power continues to compound rapidly, AI may soon be able to optimize situations in ways that are at least marginally different, and probably significantly different, from how humans would optimize them.
The most difficult yet important question about the world into which we are headed is this: What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?
How is consciousness to be defined in a world of machines that reduce human experience to mathematical data, interpreted by their own memories? Who is responsible for the actions of AI? How should liability be determined for their mistakes? Can a legal system designed by humans keep pace with activities produced by an AI capable of outthinking and potentially outmaneuvering them?

The original article.

Summary of “Amazon’s control over ebook sales data should upset everyone in publishing”

Many of their authors are writing and publishing books, and finding massive audiences, without being actively tracked by the publishing industry.
According to one estimate, last year 2,500 self-published authors made at least $50,000 in book sales across self-publishing platforms, before the platforms’ cuts.
The information asymmetry between Amazon and the rest of the book industry-publishers, brick-and-mortar stores, industry analysts, aspiring writers-means that only the Seattle company has deeply detailed information, down to the page, on what people want to read. So an industry that’s never been particularly data-savvy increasingly works in the dark: Authors lose negotiating power, and publishers lose the ability to compete on pricing or even, on a basic level, to understand what’s selling.
Amazon doesn’t report its ebook sales to any of the major industry data sources, and it doesn’t give authors more than their own personal slice of data.
A spokesperson from Amazon writes by email that “Hundreds of thousands of authors self-publish their books today with Kindle Direct Publishing,” but declined to provide a number, or any sales data.
“NPD PubTrack Digital tracks ebook sales but because it is a publisher data-share model, the data does not include self-published ebooks,” writes NPD’s Allison Risbridger by email.
Bookstat extrapolates sales data from book rankings and sales history, provided by authors, and estimates sales per author and book throughout the day, with a self-reported margin of error of 5%. Bookstat estimates that in 2017, there were half a million self-published authors who sold at least one book, and a total of 240 million self-published ebook units sold.
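Bookstat’s actual model is proprietary, but the general technique of extrapolating sales from rankings can be sketched in a few lines. Everything below is an invented illustration: the power-law form and the two calibration points are assumptions, not Bookstat’s data or method.

```python
import math

def fit_power_law(rank_a, sales_a, rank_b, sales_b):
    """Fit sales = c * rank**(-k) through two calibrated (rank, sales) points,
    e.g. points where an author has self-reported real sales for a known rank."""
    k = math.log(sales_a / sales_b) / math.log(rank_b / rank_a)
    c = sales_a * rank_a ** k
    return c, k

def estimate_daily_sales(rank, c, k):
    """Estimate daily sales for any observed store rank from the fitted curve."""
    return c * rank ** (-k)

# Hypothetical calibration: rank 100 sells ~500 copies/day, rank 10,000 sells ~5.
c, k = fit_power_law(100, 500, 10_000, 5)
print(round(estimate_daily_sales(1_000, c, k)))  # → 50 copies/day at rank 1,000
```

With only rank snapshots plus a handful of author-provided ground-truth points, every title’s public ranking becomes an estimated sales figure, which is consistent with the 5% margin of error being self-reported rather than measured.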
As the founder, who still asks to remain anonymous, notes, “There’s really no way to wrap your arms around how many authors there are, including the ones who are not selling, including the ones who are out of print on the traditional publishing side.” By his estimate, self-published books in the US were worth $875 million last year, about $700 million of which was ebooks.

The original article.

Summary of “Find out the environmental impact of your Google searches and internet usage”

“Data is very polluting,” says Joana Moll, an artist-researcher whose work investigates the physicality of the internet.
“Almost nobody recalls that the internet is made up of interconnected physical infrastructures which consume natural resources,” Moll writes as an introduction to the project.
CO2GLE uses 2015 internet traffic data, Moll says, and is based on the assumption that Google.com “processes an approximate average of 47,000 requests every second, which represents an estimated amount of 500 kg of CO2 emissions per second.” That would be about 0.01 kg per request.
One estimate from British environmental consultancy Carbonfootprint puts it between 1g and 10g of CO2 per Google search.
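The per-request figure follows directly from Moll’s two numbers, and lands near the top of Carbonfootprint’s range; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the CO2GLE assumptions (2015 traffic data).
requests_per_second = 47_000   # approximate average Google.com requests/sec
co2_kg_per_second = 500        # estimated CO2 emissions/sec

kg_per_request = co2_kg_per_second / requests_per_second
print(f"{kg_per_request:.4f} kg (~{kg_per_request * 1000:.0f} g) per request")
# → 0.0106 kg (~11 g) per request: roughly the 0.01 kg cited, and just above
#   the high end of Carbonfootprint's 1g–10g-per-search estimate.
```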
Speaking at the Internet Media Age conference in Barcelona last week, Moll showed another visualization, which she calls “DEFOOOOOOOOOOOOOOOOOOOOOREST,” to drive home the point.
Moll’s research focused on Google because of its scale, but other websites also contribute to the internet’s carbon footprint.
“What I’m really trying to do is to trigger thoughts and reflections on the materiality of data and materiality of our direct usage of the internet,” Moll says.
“To calculate the CO2 of the internet is really complicated. It’s the biggest infrastructure ever been built by humanity and it involves too many actors.” What she aims for instead are “numbers that can serve to raise awareness.”

The original article.

Summary of “I am a data factory”

Am I a data mine, or am I a data factory? Is data extracted from me, or is data produced by me? Both metaphors are ugly, but the distinction between them is crucial.
If I am a data mine, then I am essentially a chunk of real estate, and control over my data becomes a matter of ownership.
Who owns me, and what happens to the economic value of the data extracted from me? Should I be my own owner – the sole proprietor of my data mine and its wealth? Should I be nationalized, my little mine becoming part of some sort of public collective? Or should ownership rights be transferred to a set of corporations that can efficiently aggregate the raw material from my mine and transform it into products and services that are useful to me? The questions raised here are questions of politics and economics.
Thinking of the platform companies as being in the extraction business, with personal data being analogous to a natural resource like iron or petroleum, brings a neatness and clarity to discussions of a new and complicated type of company.
We can use the recent data controversies to articulate a truly decentralised, emancipatory politics, whereby the institutions of the state will be deployed to recognise, create, and foster the creation of social rights to data.
When I upload a photo, I produce not only behavioral data but data that is itself a product.
I am, in other words, much more like a data factory than a data mine.
Beyond control of my data, the companies seek control of my actions, which to them are production processes, in order to optimize the efficiency, quality, and value of my data output.

The original article.

Summary of “Cambridge Analytica: how did it turn clicks into votes?”

How do 87m records scraped from Facebook become an advertising campaign that could help swing an election? What does gathering that much data actually involve? And what does that data tell us about ourselves?
For those 87 million people probably wondering what was actually done with their data, I went back to Christopher Wylie, the ex-Cambridge Analytica employee who blew the whistle on the company’s problematic operations in the Observer.
According to Wylie, all you need to know is a little bit about data science, a little bit about bored rich women, and a little bit about human psychology…. Step one, he says, over the phone as he scrambles to catch a train: “When you’re building an algorithm, you first need to create a training set.” That is: no matter what you want to use fancy data science to discover, you first need to gather data the old-fashioned way.
The “training set” refers to that data in its entirety: the Facebook likes, the personality tests, and everything else you want to learn from.
Facebook data, which lies at the heart of the Cambridge Analytica story, is a fairly plentiful resource in the data science world – and certainly was back in 2014, when Wylie first started working in this area.
In order to be paid for their survey, users were required to log in to the site, and approve access to the survey app developed by Dr Aleksandr Kogan, the Cambridge University academic whose research into personality profiling using Facebook likes provided the perfect access for the Robert Mercer-funded Cambridge Analytica to quickly get in on the field.
Where the psychological profile is the target variable, the Facebook data is the “feature set”: the information a data scientist has on everyone else, which they need to use in order to accurately predict the features they really want to know.
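The training-set / target-variable / feature-set split Wylie describes can be made concrete with a toy model. All names, likes, and scores below are invented for illustration, and the 1-nearest-neighbour predictor is a stand-in for whatever model was actually used; only the structure (survey scores as targets, likes as features) mirrors the description.

```python
# Training set: people for whom we have BOTH the features (liked pages) and the
# target variable (a personality-test score gathered "the old-fashioned way").
training_set = [
    ({"skydiving", "jazz"}, 0.9),   # (liked pages, extraversion score)
    ({"chess", "library"}, 0.2),
    ({"skydiving", "chess"}, 0.6),
]

def predict(likes):
    """Predict the target variable for someone in the feature set (likes only,
    no survey) by copying the score of the training example whose likes
    overlap theirs the most (1-nearest-neighbour on like overlap)."""
    best = max(training_set, key=lambda example: len(example[0] & likes))
    return best[1]

# Feature set: the millions of profiles with likes but no survey answers.
print(predict({"skydiving", "festivals"}))  # → 0.9
```

The economics follow from the asymmetry: surveys are expensive and slow, but once a few hundred thousand surveyed users anchor the training set, predictions over the feature set cost almost nothing per person.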

The original article.

Summary of “You Can’t Opt Out Of Sharing Your Data, Even If You Didn’t Opt In”

We’re used to thinking about privacy breaches as what happens when we give data about ourselves to a third party, and that data is then stolen from or abused by that third party.
“One of the fascinating things we’ve now walked ourselves into is that companies are valued by the market on the basis of how much user data they have,” said Daniel Kahn Gillmor, senior staff technologist with the ACLU’s Speech, Privacy and Technology Project.
The privacy of the commons is how the 270,000 Facebook users who actually downloaded the “Thisisyourdigitallife” app turned into as many as 87 million users whose data ended up in the hands of a political marketing firm.
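The arithmetic of that expansion is worth pausing on: the app collected not just each installer’s data but their friends’ data too, so installs multiplied into exposed profiles. The friends-per-installer figure below is back-solved from the two reported totals, not a reported statistic, and it ignores overlap between friend lists.

```python
# How ~270,000 app installs expand to ~87 million exposed profiles when an
# app is granted access to each installer's friends as well.
installers = 270_000
exposed_profiles = 87_000_000

friends_per_installer = exposed_profiles / installers
print(round(friends_per_installer))  # → 322 distinct friends per install,
# on average, is all it takes to reach tens of millions of non-consenting users.
```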
Even if you do your searches from a specialized browser, tape over all your webcams and monitor your privacy settings without fail, your personal data has probably still been collected, stored and used in ways you didn’t intend – and don’t even know about.
The information collected every time they scan that loyalty card adds up to something like a medical history, which could later be sold to data brokers or combined with data bought from brokers to paint a fuller picture of a person who never consented to any of this.
The privacy of the commons means that, in some cases, your data is collected in ways you cannot reasonably prevent, no matter how carefully you or anyone you know behaves.
Our digital commons is set up to encourage companies and governments to violate your privacy.
Almost all of our privacy law and policy is framed around the idea of privacy as a personal choice, Cohen said.

The original article.