Summary of “Blind Spots in AI Just Might Help Protect Your Privacy”

Just a few small tweaks to an image or a few additions of decoy data to a database can fool a system into coming to entirely wrong conclusions.
Gong points to Facebook’s Cambridge Analytica incident as exactly the sort of privacy invasion he hopes to prevent: The data science firm paid thousands of Facebook users a few dollars each for answers to political and personal questions and then linked those answers with their public Facebook data to create a set of training data. When the firm then trained a machine-learning engine with that dataset, the resulting model could purportedly predict private political persuasions based only on public Facebook data.
After tweaking the data a few different ways, they found that adding just three fake app ratings, chosen to statistically point to an incorrect city, or removing a few revealing ratings, introduced enough noise to reduce the accuracy of their engine’s prediction to no better than a random guess.
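The effect is easy to reproduce in miniature. Below is a toy sketch, not the researchers’ actual model: a naive classifier guesses a user’s city from which apps they rate, and three decoy ratings pointing at another city are enough to flip its guess. All app and city names here are invented.

```python
# Toy demonstration of poisoning an attribute-inference model with decoy data.
# The "classifier" guesses the city whose signature apps a user has rated most.
# All names are hypothetical; this is not the researchers' actual model.

CITY_SIGNATURES = {
    "Springfield": {"bus_tracker_spr", "spr_news", "spr_pizza", "spr_gym"},
    "Shelbyville": {"shl_transit", "shl_daily", "shl_eats", "shl_fitness"},
}

def guess_city(rated_apps):
    """Predict the city whose signature apps overlap most with the user's ratings."""
    scores = {city: len(sig & rated_apps) for city, sig in CITY_SIGNATURES.items()}
    return max(scores, key=scores.get)

# A user whose genuine ratings point to Springfield.
ratings = {"bus_tracker_spr", "spr_news", "weather_app"}
print(guess_city(ratings))    # → Springfield

# Add three decoy ratings chosen to point at the wrong city.
poisoned = ratings | {"shl_transit", "shl_daily", "shl_eats"}
print(guess_city(poisoned))   # → Shelbyville
```

The real attack and defense operate on learned statistical models rather than set overlap, but the principle is the same: a handful of well-chosen fake data points dominates the signal.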
The cat-and-mouse game of predicting and protecting private user data, Gong admits, doesn’t end there.
If the machine-learning “attacker” is aware that adversarial examples may be protecting a data set from analysis, he or she can use what’s known as “adversarial training”: generating their own adversarial examples to include in a training data set so that the resulting machine-learning engine is far harder to fool.
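A minimal sketch of that counter-move, using a deliberately simple nearest-centroid classifier rather than anyone’s actual system: an adversarial point crafted to land on the wrong side of the decision boundary fools the naive model, but retraining with correctly labeled copies of that point shifts the boundary back.

```python
# Toy sketch of adversarial training on a 1-D nearest-centroid classifier.
# Purely illustrative; not the method used by either research group.

def centroid(points):
    return sum(points) / len(points)

def classify(x, class_a, class_b):
    """Assign x to whichever class centroid is nearer."""
    return "A" if abs(x - centroid(class_a)) <= abs(x - centroid(class_b)) else "B"

class_a = [0.0, 1.0]      # genuine class-A training points (centroid 0.5)
class_b = [10.0, 11.0]    # genuine class-B training points (centroid 10.5)

# An adversarial example: truly class A, but crafted to sit past the
# decision boundary (midpoint 5.5), so the naive model gets it wrong.
x_adv = 6.0
print(classify(x_adv, class_a, class_b))     # → B (fooled)

# Adversarial training: add correctly labeled copies of the adversarial
# example, pulling the class-A centroid toward it.
hardened_a = class_a + [x_adv] * 3
print(classify(x_adv, hardened_a, class_b))  # → A (no longer fooled)
```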
Another research group has experimented with a form of adversarial example data protection that’s intended to cut short that cat-and-mouse game.
Researchers at the Rochester Institute of Technology and the University of Texas at Arlington looked at how adversarial examples could prevent a potential privacy leak in tools like VPNs and the anonymity software Tor, designed to hide the source and destination of online traffic.
Attackers who can gain access to encrypted web browsing data in transit can in some cases use machine learning to spot patterns in the scrambled traffic that allow a snoop to predict which website, or even which specific page, a person is visiting.
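The core of such website fingerprinting is that encryption hides payloads but not traffic shape. The sketch below shows the idea with an invented, toy matcher: a fresh packet-size trace is compared against known per-page profiles. Real attacks use trained classifiers over many features, and the profiles here are made up.

```python
# Toy sketch of website fingerprinting: even with payloads encrypted, the
# sequence of packet sizes can identify a page. Profiles are invented.

KNOWN_PROFILES = {
    "news-site-front-page": [1500, 1500, 600, 1500, 200],
    "webmail-inbox":        [300, 300, 1500, 300, 100],
}

def distance(trace, profile):
    """Sum of absolute differences between two packet-size sequences."""
    return sum(abs(a - b) for a, b in zip(trace, profile))

def fingerprint(trace):
    """Guess which known page produced this packet-size trace."""
    return min(KNOWN_PROFILES, key=lambda page: distance(trace, KNOWN_PROFILES[page]))

# A fresh, slightly noisy observation of encrypted traffic.
observed = [1480, 1500, 620, 1500, 180]
print(fingerprint(observed))   # → news-site-front-page
```

Adversarial-example defenses work by padding or reshaping the trace just enough that it no longer matches its true profile.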

The original article.

Summary of “The Connected Car of the Future Could Kill Off the Local Auto Repair Shop”

“There is almost no independent repair shop that would think of putting its hands on Tesla, except for maybe to change the brake pads,” explains one industry expert.
When Jim Dykstra became part owner of his family’s auto repair service business in 1994, mechanics diagnosed car problems by looking under the hood.
Technicians often start a repair not by poking around an engine, but by plugging a computer tool into what’s known as an “on-board diagnostics” port. The first solution places more control of data in the hands of a manufacturer: if a repair shop, a parts dealer, a tech school, or an insurance company wanted access to the data coming out of a car, they could license it from a manufacturer such as Ford, General Motors, or Toyota, or access it under some agreement. BMW has already experimented with this type of system. Called “BMW CarData,” it allows customers to share their cars’ data with third parties, such as insurance companies that want to keep tabs on their driving habits or an auto repair shop.
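That diagnostics connector is standardized: OBD-II fault codes travel as two bytes that expand into the familiar codes like “P0133.” A minimal sketch of that decoding, following the standard SAE-style encoding (the sample byte values are just illustrative):

```python
# Decode an OBD-II diagnostic trouble code (DTC) from its two-byte wire form.
# Follows the standard SAE-style encoding; sample bytes are illustrative.

def decode_dtc(b1: int, b2: int) -> str:
    letter = "PCBU"[(b1 >> 6) & 0x3]   # Powertrain/Chassis/Body/network (U)
    d1 = (b1 >> 4) & 0x3               # first digit is restricted to 0-3
    # remaining three characters are hex nibbles
    return f"{letter}{d1}{b1 & 0xF:X}{b2 >> 4:X}{b2 & 0xF:X}"

print(decode_dtc(0x01, 0x33))   # → P0133 (an oxygen-sensor code)
print(decode_dtc(0xC1, 0x00))   # → U0100 (a network-communication code)
```

The dispute in the article is not about this decoding, which is public, but about who controls the richer telemetry streams beyond these standardized codes.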
It could, aftermarket industry advocates worry, be transmitted wirelessly to the car manufacturer, where the car manufacturer could control access to it.
Car manufacturers have taken different stances on how much data they should be expected to share with repair shops that aren’t part of their dealerships.
“The net of it is, we don’t know,” says Behzad Rassuli, a senior vice president at the Auto Care Association, a trade association that counts 500,000 independent manufacturers, distributors, parts stores, and repair shops as members.
Last December, a group of 10 auto industry organizations that represent both car manufacturers and aftermarket players wrote a letter to SAE International, a standards-setting organization, asking it to create a working group around the type of information-sharing system that aftermarket repair shops favor.
Some industry experts aren’t optimistic that whatever agreement, or lack of agreement, manufacturers reach with the aftermarket will put repair shops on equal footing with car dealerships.

The original article.

Summary of “Can AI Keep You Healthy?”

The goal: continuous monitoring of your health and suggestions of adjustments you might make in your diet and behavior before you slip from being healthy into the early stages of an illness.
ICX is part of a new wave of companies that figure they can find something meaningful in the data and enable medicine to stop merely reacting to an illness you have; these companies want to keep you healthy at a fraction of the cost.
As CEO of ICX, Wang has raised $600 million in funding for the effort, a remarkable amount for a project offering high-tech tests for healthy people.
“It turns out you also need to know about proteins, and metabolites, and all the rest,” says Wang. Soon after his departure from BGI, Wang formed ICX, knowing he would do something with AI and health.
PatientsLikeMe, which runs a service where thousands of members discuss their various chronic diseases in online forums and provide metrics about their health and the progression of their disease, had already shown the value of careful health tracking by individuals.
“In certain niches, AI is here and has been for years,” says Marty Kohn, a physician and the former chief medical scientist at IBM, who helped develop IBM Watson Health.
“But the claims for AI and health care are very overblown.” Most companies, he suggests, “don’t do real science.”
One wonders if millions of healthy people will be as obsessed as Jun Wang is with collecting so much data on themselves.

The original article.

Summary of “This SimCity-Like Tool Lets Urban Planners See the Potential Impact of Their Ideas”

One thing that’s helpful, but often lacking, in arguments across the country about major urban policy like this is specific numbers about how the change might affect the city in the future.
In Salt Lake City, in the late 1990s, they worked with the city to envision how it could meet a new demand for housing.
The work in Salt Lake City took two years, a massive budget, and “an army of consultants.” They started to play with the idea of creating software that could make the process cheaper, quicker, and more accessible.
“It sits at the nexus of software technology, machine learning, and urban infrastructure scenario planning to drive sustainable and equitable outcomes for cities everywhere.”
As new data becomes available, whether from the growing range of sensors used by city governments or from startups like Aclima, which is attaching air-quality sensors to Google Street View cars, it can be added to the tool.
“The data is getting to the point now where one of the biggest challenges for people engaged in city building is filtering through it all,” Calthorpe says.
“In China alone, they’re going to be building cities for another 300 million people in the next 20 years,” says Calthorpe.
“That’s basically building the urban environment of the United States. And they’re going to do that in 20 years instead of 200 years. So getting cities right is really at the crux of the well-being of mankind.”

The original article.

Summary of “Credit card privacy matters: Apple Card vs. Chase Amazon Prime Rewards Visa”

With my banana test – two bananas, one purchased with the popular Chase Amazon Prime Rewards Visa and the other with Apple’s Mastercard – I hoped to uncover the secret life of my credit card data.
Some data even got fed to retail giant Amazon because it co-branded my card.
Chase would not tell me the specific data it shared from my card or the companies it shared it with.
Of course Amazon receives data when you buy products on Amazon with its card.
As a co-branded partner, Apple says it can’t access data about your transactions outside Apple.
The networks, whose main business is connecting banks, have side gigs in aggregating purchases and selling them as “data insights.” Visa said it allows clients to see data on populations as small as 50 people, often tied to groups in ZIP codes.
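A minimum-cohort rule like the one Visa describes is straightforward to sketch: aggregate statistics are released only for groups above a size floor, and smaller groups are suppressed. The 50-person threshold comes from the article; the data and field names below are invented.

```python
# Sketch of a minimum-cohort aggregation rule: purchase totals are released
# per ZIP code only when at least MIN_COHORT distinct customers are in the
# group. The threshold matches the article; the sample data is invented.

from collections import defaultdict

MIN_COHORT = 50

def aggregate_by_zip(purchases):
    """Sum spending per ZIP code, suppressing groups below the cohort floor."""
    totals, customers = defaultdict(float), defaultdict(set)
    for zip_code, customer_id, amount in purchases:
        totals[zip_code] += amount
        customers[zip_code].add(customer_id)
    return {z: round(totals[z], 2)
            for z in totals if len(customers[z]) >= MIN_COHORT}

# 60 distinct customers in ZIP 10001, only 3 in ZIP 10002.
purchases = [("10001", f"c{i}", 5.00) for i in range(60)]
purchases += [("10002", f"d{i}", 9.00) for i in range(3)]

print(aggregate_by_zip(purchases))   # → {'10001': 300.0}  (10002 suppressed)
```

Cohort floors limit, but do not eliminate, re-identification risk, which is why the sharing described in the article still raises privacy questions.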
Bloomberg has reported that data from millions of Mastercards – now likely including Apple Cards – ends up helping Google track retail sales.
By law, credit card companies give us ways to opt out of some of their data sharing, though sharing with other financial institutions and joint-marketing partners is usually exempt.

The original article.

Summary of “Meet the US’s spy system of the future”

A product of the National Reconnaissance Office, Sentient is an omnivorous analysis tool, capable of devouring data of all sorts, making sense of the past and present, anticipating the future, and pointing satellites toward what it determines will be the most interesting parts of that future.
It’s not all dystopian: the documents released by the NRO also imply that Sentient can make satellites more efficient and productive.
Of the more than 150 US military satellites, the NRO operates around 50.
One of these, BlackSky, uses those satellites to feed into a system that’s essentially Sentient’s unclassified doppelgänger.
In the ideal version of that process, an automated system sucks in all sorts of data, synthesizes it into something sensible, cues the satellite symphony, reincorporates the satellites’ data back into the analysis loop, comes to a smarter conclusion, points the satellites or other sensors again, and repeats the entire process.
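That “tip and cue” cycle can be sketched as a simple control loop. The version below is a purely illustrative toy, with invented regions and interest scores, not anything from the NRO documents: each pass points the sensor at the highest-scoring region, folds any new observation back into the scores, and repeats.

```python
# Toy sketch of a tip-and-cue tasking loop: ingest observations, score
# regions of interest, task the sensor at the top region, and reincorporate
# the new data. Regions and scores are invented for illustration.

def run_tasking_loop(interest, observations_by_region, cycles=3):
    """Each cycle: point the 'satellite' at the most interesting region,
    consume one queued observation there, and update interest scores."""
    log = []
    for _ in range(cycles):
        target = max(interest, key=interest.get)         # cue the sensor
        log.append(target)
        if observations_by_region.get(target):
            obs = observations_by_region[target].pop(0)  # collect imagery
            interest[target] += obs                      # fold data back in
        else:
            interest[target] -= 1.0                      # nothing new; cool off
    return log

interest = {"airfield": 5.0, "port": 4.5, "border": 1.0}
queues = {"port": [1.0]}
print(run_tasking_loop(interest, queues))   # → ['airfield', 'port', 'port']
```

The real system closes this loop across many sensors and data feeds at once; the point of the sketch is only the feedback structure.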
Here’s where Sentient reenters the picture: All the images from the NRO, the military, and these commercial satellite firms, combined with other geospatial intelligence – anything that has a time tag and a location tag – create a vast amount of information that’s far more than a literal army of people could comb through.
It could perhaps gather data on how often they fly and where, or even look at news to find out whether there’s any agitation or action around Aleysk: now the system knows exactly where they should point their real-time satellites to gather the information that their client needs.
Spy satellites, like the ones used by the NRO, are primarily meant to focus on the world beyond the United States’ borders.

The original article.

Summary of “The Selfish Dataome”

That burden challenges us to ask if we are manufacturing and protecting our dataome for our benefit alone, or, like the selfish gene, because the data makes us do this because that’s what ensures its propagation into the future.
Shakespeare, to be fair, contributed barely a drop to a vast ocean of data that is ethereal in form yet extremely tangible in its effects upon us.
Data like these have outlived generation after generation of humans.
As time has gone by our production of data has accelerated.
To put that in perspective: the human genome fits on about two CDs, while the human species now produces about 20,000 CDs’ worth of data every second.
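A back-of-envelope check of that comparison, assuming a standard ~700 MB CD (the figures are rough orders of magnitude, not precise measurements):

```python
# Rough arithmetic behind the genome-vs-data-output comparison,
# assuming a standard ~700 MB CD capacity.

CD_BYTES = 700e6                      # ~700 MB per CD
genome = 2 * CD_BYTES                 # human genome: about two CDs
data_per_second = 20_000 * CD_BYTES   # humanity's output per second

print(f"genome ≈ {genome / 1e9:.1f} GB")                       # ≈ 1.4 GB
print(f"output ≈ {data_per_second / 1e12:.0f} TB per second")  # ≈ 14 TB/s
print(f"that is {data_per_second / genome:,.0f} genomes' worth each second")
```

In other words, we emit roughly 10,000 genomes’ worth of data every second.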
On the face of things, it seems pretty obvious that our capacity to carry so much data with us through time is a critical part of our success at spreading across the planet.
The proliferation of data of seemingly very low utility could actually be a sign of worrying dysfunction in our dataome.
Remedies might come either through data-credit schemes, akin to domestic solar power feeding back into the grid, or through making the loss of data a positive feature.

The original article.

Summary of “The First Thing Great Decision Makers Do”

As a statistician, I appreciate the quote by applied statistics pioneer W. Edwards Deming, “In God we trust. All others bring data.” But as a social scientist, I’m compelled to warn you that many decision-makers chase data with too much zeal, running from ignorance but never improving their decisions.
Is there a way to land in the sweet spot? There is, and it starts with one simple decision-making habit: Commit to your default decision up front.
The key to decision-making is framing the decision context before you seek data – a skill that unfortunately is not usually covered in data science courses.
Many decision-makers think they’re being data-driven when they look at a number, form an opinion, and execute their decision.
There were numbers near that decision somewhere, but those numbers didn’t drive the decision.
By leaving the decision criteria open, you’re free to interact with the data selectively to confirm the choice you’ve already made in your heart of hearts.
The first part of that process is determining what you’re planning to do in the absence of further data.
You ask yourself, “If I see no additional data beyond what I’ve already seen, what will I do?” Answering this takes strength of character – you can’t punt it to the data.
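The habit amounts to freezing both the default action and the evidence that would overturn it before any data arrives. A minimal sketch of that pre-commitment, with an invented scenario and threshold:

```python
# Sketch of the pre-commitment habit: write down the default action and the
# decision criterion *before* looking at data. Scenario and threshold are
# invented for illustration.

def make_precommitted_decider(default_action, switch_action, criterion):
    """Return a decision function whose rules are frozen before data is seen."""
    def decide(data=None):
        if data is not None and criterion(data):
            return switch_action
        return default_action        # no data, or criterion unmet
    return decide

# Framed up front: keep the current supplier unless defect rate tops 2%.
decide = make_precommitted_decider(
    default_action="keep current supplier",
    switch_action="switch supplier",
    criterion=lambda d: d["defect_rate"] > 0.02,
)

print(decide())                         # no data yet → default holds
print(decide({"defect_rate": 0.035}))   # criterion met → switch
```

Because the criterion is fixed first, there is no room to reinterpret the numbers after the fact to confirm a choice already made.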

The original article.

Summary of “A new book says married women are miserable. Don’t believe it.”

Women should be wary of marriage – because while married women say they’re happy, they’re lying.
According to behavioral scientist Paul Dolan, promoting his recently released book Happy Ever After, they’ll be much happier if they steer clear of marriage and children entirely.
Dolan had misinterpreted one of the categories in the survey, “Spouse absent,” which refers to married people whose partner is no longer living in their household, as meaning the spouse stepped out of the room.
An older article he cited earlier claims that unmarried women have 50% higher mortality rates than married women.
In May, author Naomi Wolf learned of a serious mistake in a live, on-air interview about her forthcoming book Outrages: Sex, Censorship and the Criminalization of Love.
Earlier this year, former New York Times editor Jill Abramson’s book Merchants of Truth was discovered to contain passages copied from other authors, and alleged to be full of simple factual errors as well.
Around the same time, I noticed that a statistic in the New York Times Magazine and in Clive Thompson’s upcoming book Coders was drawn from a study that doesn’t seem to really exist.
In response to the embarrassing retractions and failed replications associated with the replication crisis, more researchers are publishing their data and encouraging their colleagues to publish their data.

The original article.

Summary of “Why Technology Favors Tyranny”

At least for a few more decades, human intelligence is likely to far exceed computer intelligence in numerous fields.
Many of these new jobs will probably depend on cooperation rather than competition between humans and AI. Human-AI teams will likely prove superior not just to humans, but also to computers working on their own.
For several years after IBM’s computer Deep Blue defeated Garry Kasparov in 1997, human chess players still flourished; AI was used to train human prodigies, and teams composed of humans plus computers proved superior to computers playing alone.
Since AlphaZero had learned nothing from any human, many of its winning moves and strategies seemed unconventional to the human eye.
These potential advantages of connectivity and updatability are so huge that at least in some lines of work, it might make sense to replace all humans with computers, even if individually some humans still do a better job than the machines.
Even if some societies remain ostensibly democratic, the increasing efficiency of algorithms will still shift more and more authority from individual humans to networked machines.
If we invest too much in AI and too little in developing the human mind, the very sophisticated artificial intelligence of computers might serve only to empower the natural stupidity of humans, and to nurture our worst impulses, among them greed and hatred.
We are now creating tame humans who produce enormous amounts of data and function as efficient chips in a huge data-processing mechanism, but they hardly maximize their human potential.

The original article.