Summary of “The next wave of computing”

In this post, I’ll present my prediction for what the next major wave of computing will be.
The mainframes of the 1960s and 1970s had a “centralized” computing model, where a single mainframe would serve an entire office building and “dumb” terminals would send compute jobs to it.
The next wave of computing is going to be a massive shift away from cloud computing.
There are two major problems with cloud computing: users don’t own their own data, and remote servers are security holes.
In a move away from cloud computing, decentralized systems like Bitcoin give end users explicit control of their digital assets and remove the need to trust third-party servers and infrastructure.
How will the tech industry change with decentralized computing? Decentralized computing is a mega trend that is not getting nearly as much attention as it deserves; it will likely have an economic, social, and political impact larger than the desktop and cloud revolutions.
Crypto tokens for protocols will become as ubiquitous as software licenses and terms-of-service agreements for cloud services: to use the software in decentralized computing you’ll need the respective token.
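As a rough sketch of that token-gating idea (a toy, not any real protocol’s API; the ledger, addresses, and threshold below are all hypothetical):

```python
# Toy illustration of token-gated access: a request is served only if the
# caller holds enough of the protocol's token. A real system would check a
# blockchain; here an in-memory dict stands in for the ledger.

balances = {"0xalice": 12.0, "0xbob": 0.0}  # made-up token ledger
MIN_TOKENS = 1.0  # assumed minimum balance required to use the protocol

def handle_request(address: str, payload: str) -> str:
    if balances.get(address, 0.0) < MIN_TOKENS:
        return "rejected: insufficient protocol tokens"
    return f"served: {payload}"

print(handle_request("0xalice", "store this file"))  # served
print(handle_request("0xbob", "store this file"))    # rejected
```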
I, for one, am extremely excited about this next wave of computing.

The original article.

Summary of “China’s Plan for World Domination in AI Isn’t So Crazy After All”

Most Chinese mega-cities have set up institutes for AI that include some data-sharing arrangements, according to Xu. “In China, the population is huge, so it’s much easier to collect the data for whatever use-scenarios you need,” he said.
China just enshrined the pursuit of AI into a kind of national technology constitution.
“Data access has always been easier in China, but now people in government, organizations and companies have recognized the value of data,” said Jiebo Luo, a computer science professor at the University of Rochester who has researched China.
Advanced AI operations, like DeepMind, often rely on “simulated” data, co-founder Demis Hassabis explained during a trip to China in May. DeepMind has used Atari video games to train its systems.
“Sure, there might be data sets you could get access to in China that you couldn’t in the U.S.,” said Oren Etzioni, director of the Allen Institute for Artificial Intelligence.
“China currently has a talent shortage when it comes to top tier AI experts,” said Connie Chan, a partner at venture capital firm Andreessen Horowitz.
Baidu recruited Qi Lu, one of Microsoft’s top executives, to return to China and lead the search giant’s push into AI. He touted the technology’s potential for enhancing China’s “national strength” and cited a figure that nearly half of the bountiful academic research on the subject globally has ethnically Chinese authors, using the Mandarin term “huaren” (华人), a term for ethnic Chinese that echoes government rhetoric.
“China has structural advantages, because China can acquire more and better data to power AI development,” Lu told the cheering crowd of Chinese developers.

The original article.

Summary of “Facebook knew about Snap’s struggles months before the public”

This isn’t the first time Facebook has used Onavo’s app usage data to make major decisions.
The info reportedly influenced the decision to buy WhatsApp, as Facebook knew that WhatsApp’s dominance in some areas could cut it out of the loop.
To be clear, Facebook isn’t grabbing this data behind anyone’s back.
The revelation here is more about how Facebook uses that information rather than the collection itself.
Former Federal Trade Commission CTO Ashkan Soltani tells the WSJ that Facebook is turning customers’ own data against them by using it to snuff out competitors.
Tech lawyer Adam Shevell is concerned that Facebook might be violating Apple’s App Store rules by collecting data that isn’t directly relevant to app use or ads.
No matter what, the news underscores just how hard it is for upstarts to challenge Facebook’s dominant position.
How do you compete with an internet giant that can counter your app’s features the moment it becomes popular? This doesn’t make Facebook immune to competition, but app makers definitely can’t assume that they’ll catch the firm off-guard.

The original article.

Summary of “‘Anonymous’ browsing data can be easily exposed, researchers reveal”

A judge’s porn preferences and the medication used by a German MP were among the personal data uncovered by two German researchers who acquired the “anonymous” browsing habits of more than three million German citizens.
Svea Eckert, a journalist, paired up with data scientist Andreas Dewes to acquire personal user data and see what they could glean from it.
Some were sparse users, with just a couple of dozen sites visited in the 30-day period they examined, while others had tens of thousands of data points: the full record of their online lives.
“We often heard: ‘Browsing data? That’s no problem. But we don’t have it for Germany, we only have it for the US and UK,’” she said.
The data they were eventually given came, for free, from a data broker, which was willing to let them test their hypothetical AI advertising platform.
By creating “fingerprints” from the data, it’s possible to compare them to other, more public sources of the URLs people have visited, such as social media accounts or public YouTube playlists.
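A minimal sketch of the matching idea, with fabricated URLs and accounts standing in for the researchers’ actual data: score each public trace by how much of it appears in an anonymous history, since a handful of rare URLs is often enough to single someone out.

```python
# Fabricated stand-ins for the two data sources being compared.
anonymous_histories = {
    "user_417": {"rare-blog.de/post9", "site-a.de/x", "youtube.com/playlist?list=7"},
    "user_902": {"mail.de", "news.de", "weather.de"},
}
public_traces = {  # e.g. links tweeted by an account, or a public YouTube playlist
    "@journalist_k": {"rare-blog.de/post9", "youtube.com/playlist?list=7"},
    "@random_user": {"news.de"},
}

def best_match(history: set[str], traces: dict[str, set[str]]) -> tuple[str, float]:
    # Score each candidate by how much of their public trace the history contains.
    scores = {name: len(history & urls) / len(urls) for name, urls in traces.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]

for uid, history in anonymous_histories.items():
    print(uid, "->", best_match(history, public_traces))
```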
Another discovery came via Google Translate, which stores the text of every query put through it in the URL. From this, the researchers were able to uncover operational details about a German cybercrime investigation, since the detective involved was translating requests for assistance to foreign police forces.
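A small illustration of why that leaks (assuming the query text rides along in a text parameter; the exact URL format has varied over time), showing that anyone holding the browsing history can read the query back out:

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical URL of the kind a browsing history would contain.
url = "https://translate.google.com/?sl=de&tl=en&text=Request+for+assistance+in+case+4711"

query = parse_qs(urlparse(url).query)
print(query["text"][0])  # -> "Request for assistance in case 4711"
```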
So where did the data come from? It was collated from a number of browser plugins, according to Dewes, with the prime offender being the “safe surfing” tool Web of Trust.

The original article.

Summary of “Palantir: the ‘special ops’ tech giant that wields as much real-world power as Google”

What is Palantir protecting? A palantir is a “seeing stone” in JRR Tolkien’s The Lord of the Rings: a dark orb used by Saruman to see in darkness or blinding light.
In Iraq, the Pentagon used Palantir software to track patterns in roadside bomb deployment and worked out that garage-door openers were being used as remote detonators.
Using the most sophisticated data mining, Palantir can predict the future, seconds or years before it happens.
Palantir is at the heart of the US government, but with its other arm, Palantir Metropolis, it provides the analytical tools for hedge funds, banks and financial services firms to outsmart each other.
Palantir is exactly what it says it is: a giant digital eye like Saruman’s seeing stone in The Lord of the Rings.
The Los Angeles Police Department has used Palantir to predict who will commit a crime, swooping Minority Report-style on suspects.
In 2013, TechCrunch obtained a leaked report on the use of Palantir by the LA and Chicago police departments.
It wields as much real-world power as Google, Facebook, Amazon, Microsoft and Apple, but unlike them, Palantir operates so far under the radar, it is special ops.

The original article.

Summary of “Hedge Fund Uses Algae to Reap 21% Return”

Hedge fund manager Desmond Lun’s 21 percent average return over the last four years springs from an unlikely source – a petri dish of algae.
Computational biologists like Lun are late to the quant wave that’s upending hedge funds.
Lun, who has a Ph.D. from the Massachusetts Institute of Technology, spent a decade developing models that decipher how genes interact and influence each other – and published 18 academic papers related to predicting cellular behavior.
As the genome project produced reams of data, Lun saw an opportunity to break ground in computational biology and in 2006 joined the Broad Institute of MIT and Harvard, a crossroads for scientists and hedge fund managers.
There Lun met senior computational biologist Nick Patterson, a former cryptographer who had spent a decade at Renaissance Technologies making mathematical models.
Another Lun colleague, genomic researcher Jade Vinson, left Broad for the same pioneering quant hedge fund, where he spent 10 years.
Lun, who was born in Hong Kong, splits his time between his firm in Pennsylvania and lab at Rutgers, where he’s undertaken an ambitious long-term project: creating computer models that predict how cells behave, using data from blue-green algae and other sources.
The models allow Lun to re-engineer genes for useful purposes: he has modified E. coli to produce biofuel for transportation.

The original article.

Summary of “Technology Is Biased Too. How Do We Fix It?”

“Northpointe answers the question of how accurate it is for white people and black people,” said Cathy O’Neil, a data scientist who wrote the National Book Award-nominated “Weapons of Math Destruction,” “but it does not ask or care about the question of how inaccurate it is for white people and black people: how many times are you mislabeling somebody as high-risk?”
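Concretely, O’Neil’s question is about the false positive rate per group: of the people who did not reoffend, how many were labeled high-risk anyway? A minimal sketch, with numbers fabricated purely to show the computation:

```python
# Of the people who did NOT reoffend, how many were labeled high-risk anyway?
def false_positive_rate(predicted_high_risk, reoffended):
    labels_for_negatives = [p for p, y in zip(predicted_high_risk, reoffended) if not y]
    return sum(labels_for_negatives) / len(labels_for_negatives)

# Fabricated (prediction, outcome) lists per group, purely illustrative.
groups = {
    "group_a": ([1, 0, 1, 0, 0, 1], [1, 0, 0, 0, 0, 1]),
    "group_b": ([1, 1, 1, 0, 1, 1], [1, 0, 0, 0, 1, 1]),
}
for name, (pred, actual) in groups.items():
    print(name, f"false positive rate = {false_positive_rate(pred, actual):.0%}")
```

Two groups can see similar overall accuracy while one of them is mislabeled as high-risk far more often, which is exactly the distinction the quote is drawing.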
Biased data can create feedback loops that function like a sort of algorithmic confirmation bias, where the system finds what it expects to find rather than what is objectively there.
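A toy simulation of such a loop, with made-up numbers: two areas have identical true incident rates, but the one with more recorded history draws more attention, which records more incidents, which the system reads as confirmation.

```python
import random

random.seed(0)
TRUE_RATE = 0.3               # identical underlying incident rate everywhere
recorded = {"A": 10, "B": 5}  # area A starts with more recorded incidents

for week in range(50):
    total = sum(recorded.values())
    for area in recorded:
        # attention is allocated in proportion to the existing record...
        patrols = round(10 * recorded[area] / total)
        # ...and incidents can only be recorded where someone is looking
        recorded[area] += sum(random.random() < TRUE_RATE for _ in range(patrols))

print(recorded)  # the initial gap widens although the areas are identical
```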
“Part of the problem is that people trained as data scientists who build models and work with data aren’t well connected to civil rights advocates a lot of the time,” said Aaron Rieke of Upturn, a technology consulting firm that works with civil rights and consumer groups.
There are similar concerns about algorithmic bias in facial-recognition technology, which already has a far broader impact than most people realize: Over 117 million American adults have had their images entered into a law-enforcement agency’s face-recognition database, often without their consent or knowledge, and the technology remains largely unregulated.
“We’re handing over the decision of how to police our streets to people who won’t tell us how they do it.”
“A lot of these algorithmic systems rely on neural networks which aren’t really that transparent,” said Professor Alvaro Bedoya, the executive director of the Center on Privacy and Technology at Georgetown Law.
Once we move beyond the technical discussions about how to address algorithmic bias, there’s another tricky debate to be had: How are we teaching algorithms to value accuracy and fairness? And what do we decide “accuracy” and “fairness” mean? If we want an algorithm to be more accurate, what kind of accuracy do we decide is most important? If we want it to be more fair, whom are we most concerned with treating fairly?
Advocates say the first step is to start demanding that the institutions using these tools make deliberate choices about the moral decisions embedded in their systems, rather than shifting responsibility to the faux neutrality of data and technology.

The original article.

Summary of “The Business of Artificial Intelligence”

Jeff Wilke, who leads Amazon’s consumer business, says that supervised learning systems have largely replaced the memory-based filtering algorithms that were used to make personalized recommendations to customers.
Unsupervised learning systems seek to learn on their own.
Such possibilities lead Yann LeCun, the head of AI research at Facebook and a professor at NYU, to compare supervised learning systems to the frosting on the cake and unsupervised learning to the cake itself.
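For a minimal contrast between the two paradigms, here is a scikit-learn sketch on made-up toy data (not the systems the article describes):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = np.array([[1.0, 1.0], [1.0, 2.0], [8.0, 8.0], [9.0, 8.0]])

# Supervised: labels y are given, and the model learns the mapping X -> y.
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.0, 1.0], [8.0, 9.0]]))  # -> [0 1]

# Unsupervised: no labels; the model looks for structure on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # two discovered groups (the 0/1 naming is arbitrary)
```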
In reinforcement learning systems the programmer specifies the current state of the system and the goal, lists allowable actions, and describes the elements of the environment that constrain the outcomes for each of those actions.
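That description maps onto a tiny tabular Q-learning example; the corridor, reward, and constants below are a made-up toy, not any production system:

```python
import random

random.seed(1)
N_STATES, GOAL = 5, 4  # states 0..4 of a corridor; the goal is state 4
ACTIONS = (-1, +1)     # allowable actions: step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy choice among the allowable actions (random tie-break)
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: (Q[(s, act)], random.random()))
        s2 = min(max(s + a, 0), N_STATES - 1)  # the environment constrains moves
        r = 1.0 if s2 == GOAL else 0.0         # the goal, expressed as reward
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right (+1) from every state.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```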
Business models need to be rethought to take advantage of ML systems that can intelligently recommend music or movies in a personalized way.
In particular, machine learning systems often have low “interpretability,” meaning that humans have difficulty figuring out how the systems reached their decisions.
If a system learns which job applicants to accept for an interview by using a data set of decisions made by human recruiters in the past, it may inadvertently learn to perpetuate their racial, gender, ethnic, or other biases.
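A small sketch of how that can happen, with fabricated data: the training labels come from past decisions that penalized one group, and a model given only a correlated proxy feature (not group membership itself) reproduces the disparity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)          # identically distributed in both groups
proxy = group + rng.normal(0.0, 0.3, n)  # e.g. a zip-code-like feature correlated with group

# Historical labels encode a biased rule: group B was held to a higher bar.
past_decision = skill - 1.5 * group > 0

# The model never sees `group`, only `skill` and the correlated proxy...
model = LogisticRegression().fit(np.c_[skill, proxy], past_decision)
pred = model.predict(np.c_[skill, proxy])

# ...yet it reproduces the disparity baked into the labels.
for g, name in ((0, "group A"), (1, "group B")):
    print(f"{name}: model acceptance rate = {pred[group == g].mean():.0%}")
```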
A second risk is that, unlike traditional systems built on explicit logic rules, neural network systems deal with statistical truths rather than literal truths.

The original article.

Summary of “Jefferies gives IBM Watson a Wall Street reality check”

IBM Watson effectively operates as a consultancy where the company engages in high-value contracts with corporates to implement Watson technology for specific business cases.
IBM is struggling to bridge the gap between client needs and its own technological capability.
Jefferies pulls from an audit of a partnership between IBM Watson and MD Anderson as a case study for IBM’s broader problems scaling Watson.
MD Anderson cut its ties with IBM after wasting $60 million on a Watson project that was ultimately deemed “not ready for human investigational or clinical use.”
If job postings are any indication, IBM is not keeping pace with other technology companies in hiring machine learning developers.
The information provided in Jefferies’ report is neither new nor groundbreaking, but it’s a strong signal that Wall Street is beginning to pay closer attention to the challenges facing IBM Watson.
I’ve listened to my fair share of IBM earnings calls and it’s clear the market has been focusing too heavily on short-run growth and not enough on long-term technological or strategic sustainability.
It seems perfectly reasonable that IBM shot out of the gates like a rocket in a mostly sterile AI market, selling to CTOs and newly minted chief data officers with just enough anxiety to open checkbooks.

The original article.

Summary of “The Netflix Prize: How a $1 Million Coding Contest Changed Streaming”

The contest didn’t just catch the attention of college students with time to kill: An hour east of Princeton, in Middletown, New Jersey, the Netflix Prize announcement caught the eye of Chris Volinsky, head of a statistics research group at AT&T, and his team, who regularly read blogs to see what was going on in the emerging data science world.
“This was before ‘Big Data,'” he tells me, and therefore a Big Deal.
He pulled his group together and asked who wanted to poke around at the data set.
He didn’t know the contest would stretch on for years.
Hobbyists, academics, and professionals weren’t just drawn to the contest by the potential payday.
The revelations were just as enticing; because the winners would retain ownership of their work, a contestant like Volinsky could also pitch management at AT&T on devoting time and resources to the project.
Most importantly, the data was just plain interesting: an unruly mess of insights into taste, behavior, and pre-streaming viewer psychology.
As Chris Volinsky put it, “Everyone likes movies.”

The original article.