Summary of “These are the best songs to dance to, according to computer science”

The songs highlighted in the article are among the most “danceable” number-one hits in the history of pop, according to new research from Columbia Business School and French business school INSEAD, using data from Billboard and audio-tech company Echonest.
Developed by students at the MIT Media Lab and owned by Spotify, Echonest uses digital processing technology to identify attributes of songs, such as valence, instrumentation, and key signature.
The company created a proprietary algorithm to determine the “Danceability” of a song based on its tempo and beat regularity.
The calculation emphasizes danceability throughout the whole song, so a bridge that changes the mood even briefly is heavily penalized.
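Echonest’s actual formula is proprietary, but as a loose illustration of what the summary describes, a toy score might reward steady spacing between beats and let the weakest section of a song (say, a mood-shifting bridge) drag the overall number down; everything below is invented for illustration:

```python
import statistics

def danceability_sketch(beat_times, section_size=16):
    """Toy danceability score: rewards a steady tempo and penalizes
    sections (e.g. a bridge) whose beat spacing drifts from the rest.
    This is NOT Echonest's proprietary algorithm, just an illustration."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    if len(intervals) < section_size:
        return 0.0
    overall = statistics.median(intervals)
    # Score each section by how close its beats stay to the global spacing.
    scores = []
    for i in range(0, len(intervals) - section_size + 1, section_size):
        section = intervals[i:i + section_size]
        drift = sum(abs(x - overall) for x in section) / (section_size * overall)
        scores.append(max(0.0, 1.0 - drift))
    # Emphasize whole-song danceability: the weakest section dominates,
    # so even a brief mood-changing bridge drags the score down.
    return 0.5 * min(scores) + 0.5 * (sum(scores) / len(scores))

# Example: a perfectly steady 120 BPM track (0.5 s between beats)
steady = [i * 0.5 for i in range(128)]
print(danceability_sketch(steady))  # prints 1.0 for a perfectly steady beat
```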
Although the researchers were able to calculate danceability for more than 90% of Billboard-ranked songs, Taylor Swift’s album 1989 was not available from Echonest at the time.
The purpose of the research, published in the American Sociological Review and beautifully explained by data scientist Colin Morris, was not to rank the most danceable mega-hits; it was to identify song features that could be predictive of mega-hits.
Researchers found that top-ranked songs tended to differ more from past hits than lower-ranked songs did, defying the trope that popular songs are just copies of other popular songs.
Still, the optimal pop song should be only slightly off the beaten path.

The original article.

Summary of “Carbon tax debate: the top 5 things everyone needs to know”

1) A carbon tax is just what it sounds like – a per-ton tax on the carbon dioxide emissions embedded in fuels or other products.
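As a rough, back-of-the-envelope illustration of what a per-ton tax means at the pump (the emissions factor for gasoline below is an approximate figure assumed for this sketch, not taken from the article):

```python
# Rough illustration of how a per-ton CO2 tax maps to fuel prices.
# The emissions factor (~8.9 kg CO2 per gallon of gasoline) is an
# approximate, commonly cited figure, assumed here for illustration.
TAX_PER_TON_USD = 50.0            # the $50/ton case discussed later in the piece
KG_CO2_PER_GALLON_GASOLINE = 8.9

tax_per_kg = TAX_PER_TON_USD / 1000.0             # metric ton = 1,000 kg
added_cost = tax_per_kg * KG_CO2_PER_GALLON_GASOLINE
print(f"${added_cost:.2f} per gallon")            # roughly $0.45 per gallon
```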
2) A carbon tax hits coal first, hardest, and, at least early on, almost exclusively. A carbon tax can reduce emissions quickly, but in the early years, reductions come overwhelmingly from a single industry: electricity.
3) The macroeconomic effect of a carbon tax depends on how the revenue is spent. Republicans’ favorite attack on a carbon tax is that it will raise costs and slow economic growth – that it will be, in the words of the anti-carbon-tax resolution the House just passed, “detrimental to the United States economy.”
4) The equity of a carbon tax also depends on how the revenue is spent. A carbon tax is, in and of itself, somewhat regressive.
Using carbon tax revenues to reduce employee payroll taxes would result in a net benefit for upper middle-income taxpayers, while increasing tax burdens modestly for low-income and the highest-income households.
The economic theory behind carbon prices is that, if carbon is priced correctly – i.e., at the true “social cost of carbon” – then the economy will respond with the optimal level of carbon reduction.
The main problem is less theoretical than practical: Political resistance has kept carbon prices well below any reasonable social cost of carbon pretty much everywhere carbon prices have been implemented.
Nowhere in the US, certainly not in the Regional Greenhouse Gas Initiative or the Western Climate Initiative, and not even under BC’s carbon tax, are carbon prices close to $50/ton, which is the central case in the Columbia research.

The original article.

Summary of “The man who invented the self-driving car – POLITICO”

Long before Tesla and Uber got into the self-driving car business, a team of German engineers led by a scientist named Ernst Dickmanns had developed a car that could navigate French commuter traffic on its own.
Before becoming the man “who actually invented self-driving cars”, as Berkeley computer scientist Jitendra Malik put it, Dickmanns spent the first decade of his professional life analyzing the trajectories spacecraft take when they reenter the Earth’s atmosphere.
An engineer remained in the front seat of each car – with his hands on the steering wheel in case something went wrong – but the cars were doing the driving.
A year later, Dickmanns’ team took a re-engineered car on an even longer trip, traveling for more than 1,700 kilometers on the autobahn from Bavaria to Denmark, reaching speeds of more than 175 kilometers per hour.
Dickmanns’ work on autonomous driving began during the first AI winter and ended after a second one hit the field.
To drive autonomously, a car needs to react to its surroundings, and to do that, Dickmanns calculated that computers would need to analyze at least 10 images per second.
Instead of processing everything in view, Dickmanns thought, a car should focus only on what’s relevant for driving, such as road markings.
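The summary doesn’t describe Dickmanns’ actual system, but as a loose illustration of those two constraints (a roughly 100 ms per-frame budget, and attention restricted to where road markings are expected), a toy sketch might look like this; the region, threshold and synthetic frame are all invented:

```python
import time
import numpy as np

FRAME_BUDGET_S = 1.0 / 10   # at least 10 images per second, i.e. ~100 ms per frame

def find_lane_markings(frame):
    """Toy version of 'look only at what matters': examine just a band near
    the bottom of the image where lane markings are expected, and keep only
    very bright pixels. Region, threshold and output format are made up."""
    h, _ = frame.shape
    roi = frame[int(0.6 * h):, :]               # lower 40% of the frame
    bright = roi > 200                          # lane paint is much brighter than asphalt
    return np.flatnonzero(bright.any(axis=0))   # image columns containing markings

# Synthetic 240x320 grayscale frame with two bright "lane lines"
frame = np.zeros((240, 320), dtype=np.uint8)
frame[150:, 80:84] = 255
frame[150:, 236:240] = 255

start = time.perf_counter()
columns = find_lane_markings(frame)
elapsed = time.perf_counter() - start
print(columns)
print(f"{elapsed * 1000:.2f} ms used of the {FRAME_BUDGET_S * 1000:.0f} ms frame budget")
```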
Driving on a highway, it turns out, is one of the easier tasks a self-driving car can perform.

The original article.

Summary of “Liquid water ‘lake’ revealed on Mars”

Researchers have found evidence of an existing body of liquid water on Mars.
Previous research found possible signs of intermittent liquid water flowing on the martian surface, but this is the first sign of a persistent body of water on the planet in the present day.
Lake beds like those explored by Nasa’s Curiosity rover show water was present on the surface of Mars in the past.
The planet’s climate has since cooled due to its thin atmosphere, leaving most of its water locked up in ice.
Marsis wasn’t able to determine how deep the layer of water might be, but the research team estimates that it is at least one metre deep.
The continuous white line at the top of the radar results marks the beginning of the South Polar Layered Deposit, a filo pastry-like accumulation of water ice and dust.
So while the findings suggest water is present, they don’t confirm anything further.
In order to remain liquid in such cold conditions, the water likely has a great many salts dissolved in it.

The original article.

Summary of “‘The discourse is unhinged’: how the media gets AI alarmingly wrong”

In June of last year, five researchers at Facebook’s Artificial Intelligence Research unit published an article showing how bots can simulate negotiation-like conversations.
One story, under a headline asking “Should We Stop It?”, focused almost entirely on how the bots occasionally diverged from standard English – which was not the main finding of the paper – and reported that after the researchers “realized their bots were chattering in a new language” they decided to pull the plug on the whole experiment, as if the bots were in some way out of control.
While the giddy hype around AI helped generate funding for researchers at universities and in the military, by the end of the 1960s it was becoming increasingly obvious to many AI pioneers that they had grossly underestimated the difficulty of simulating the human brain in machines.
As reports of deep learning’s “unreasonable effectiveness” circulated among researchers, enrollments at universities in machine-learning classes surged, corporations started to invest billions of dollars to find talent familiar with the newest techniques, and countless startups attempting to apply AI to transport or medicine or finance were founded.
Zachary Lipton, a jazz saxophonist who decided to undertake a PhD in machine learning to challenge himself intellectually, says that as these hyped-up stories proliferate, so too does frustration among researchers with how their work is being reported on by journalists and writers who have a shallow understanding of the technology.
“If you compare a journalist’s income to an AI researcher’s income,” says one writer quoted in the piece, “it becomes pretty clear pretty quickly why it is impossible for journalists to produce the type of carefully thought through writing that researchers want done about their work.” She adds that while many researchers stand to benefit from hype, as a writer who wants to critically examine these technologies, she only suffers from it.
While closer interaction between journalists and researchers would be a step in the right direction, Genevieve Bell, a professor of engineering and computer science at the Australian National University, says that stamping out hype in AI journalism is not possible.
“Experts can be really quick to dismiss how their research makes people feel, but these utopian hopes and dystopian fears have to be part of the conversations. Hype is ultimately a cultural expression that has its own important place in the discourse.”

The original article.

Summary of “‘Data is a fingerprint’: why you aren’t as anonymous as you think online”

In August 2016, the Australian government released an “anonymised” data set comprising the medical billing records, including every prescription and surgery, of 2.9 million people.
“It’s convenient to pretend it’s hard to re-identify people, but it’s easy. The kinds of things we did are the kinds of things that any first year data science student could do,” said Vanessa Teague, one of the University of Melbourne researchers to reveal the flaws in the open health data.
“The point is that data that may look anonymous is not necessarily anonymous,” she said in testimony to a Department of Homeland Security privacy committee.
More recently, Yves-Alexandre de Montjoye, a computational privacy researcher, showed how the vast majority of the population can be identified from the behavioural patterns revealed by location data from mobile phones.
“Location data is a fingerprint. It’s a piece of information that’s likely to exist across a broad range of data sets and could potentially be used as a global identifier,” de Montjoye said.
Even if location data doesn’t reveal an individual’s identity, it can still put groups of people at risk, the researchers explained.
De Montjoye and others have shown time and time again that it’s simply not possible to anonymise unit record level data – data relating to individuals – no matter how stripped down that data is.
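In de Montjoye’s published work, this is often measured as “unicity”: how often a handful of points from one person’s trace matches that person alone in the whole dataset. A minimal, self-contained sketch of that kind of check, with made-up traces, might look like this:

```python
import random

def fraction_unique(traces, k, trials=200, seed=0):
    """Estimate how often k random (place, hour) points drawn from one
    person's trace match that person alone in the whole dataset.
    A toy version of the 'unicity' measure used in re-identification studies."""
    rng = random.Random(seed)
    users = list(traces)
    unique = 0
    for _ in range(trials):
        user = rng.choice(users)
        points = set(rng.sample(sorted(traces[user]), k))
        matches = [u for u, t in traces.items() if points <= t]
        if matches == [user]:
            unique += 1
    return unique / trials

# Made-up example: each user is a set of (location_id, hour) observations.
traces = {
    "alice": {("cafe", 8), ("office", 9), ("gym", 18), ("home", 22)},
    "bob":   {("cafe", 8), ("office", 9), ("bar", 21), ("home", 23)},
    "carol": {("school", 8), ("office", 9), ("gym", 18), ("home", 22)},
}
print(fraction_unique(traces, k=2))  # even two points often pin down one person
```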
“There are firms that specialise in combining data about us from different sources to create virtual dossiers and applying data mining to influence us in various ways.”

The original article.

Summary of “Yet More Evidence that Viruses May Cause Alzheimer’s Disease”

For decades, the idea that a bacterium or virus could help cause Alzheimer’s disease was dismissed as a fringe theory.
In a separate experiment involving a 3D model of the human brain grown in a dish, the researchers also studied human herpesvirus 6, the germ responsible for causing the childhood skin disease roseola.
These viruses are usually caught early on in life and stay dormant somewhere in the body, but as we age, they almost always migrate up to the brain.
The mice’s brains grew new deposits of amyloid-β plaques practically “overnight,” according to senior author Rudy Tanzi, a geneticist specializing in the brain at Massachusetts General Hospital and Harvard Medical School.
The study is the second in recent weeks to support the role of viruses in Alzheimer’s disease.
That first study, also published in Neuron and led by researchers from the Icahn School of Medicine at Mount Sinai, found evidence that certain herpesviruses are more abundantly present in the brains of people who died with Alzheimer’s; it also suggested that genes belonging to these viruses directly interact with human genes that raise the risk of the disease.
From there, Tanzi’s work has shown, the plaques trigger the production of tangles (clumps of another brain protein called tau, seen in the later stages of Alzheimer’s), which together then trigger chronic inflammation.
Genetics might help explain why only some people’s infections cause the brain to start producing amyloid-β en masse.

The original article.

Summary of “To Make Sense of the Present, Brains May Predict the Future”

Enter predictive coding theory, which offers specific formulations of how brains can be Bayesian: constantly generating predictions about incoming sensory signals and updating them when those predictions prove wrong.
These prediction errors, researchers say, help animals update their future expectations and drive decision-making.
In one experiment described in the article, if the brain were simply representing its perceptual experience, the strongest signal should have corresponded to “ick” instead. But efforts are also ongoing to widen predictive coding’s relevance beyond perception and motion – to establish it as the common currency of everything going on in the brain.
Some researchers theorize that emotions and moods can be formulated in predictive coding terms: Emotions could be states the brain represents to minimize prediction error about internal signals such as body temperature, heart rate or blood pressure.
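As a toy illustration of that core loop (hold a prediction, compare it with the incoming signal, nudge the prediction by the error), here is a minimal sketch; the signal, noise level and learning rate are invented for illustration:

```python
# Minimal predictive-coding-style update loop: the "brain" holds a prediction,
# receives a noisy sensory signal, and updates the prediction in proportion
# to the prediction error. Values below are arbitrary, for illustration only.
import random

random.seed(1)
true_signal = 37.0        # e.g. a body temperature the brain tries to track
prediction = 30.0         # initial expectation
learning_rate = 0.2       # how strongly errors update the expectation

for step in range(10):
    observation = true_signal + random.gauss(0, 0.5)   # noisy input
    error = observation - prediction                    # prediction error
    prediction += learning_rate * error                 # minimize future error
    print(f"step {step}: error={error:+.2f}, prediction={prediction:.2f}")
```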
Not everyone agrees that the case for predictive coding in the brain is strengthening.
To David Heeger, a professor of psychology at New York University, it’s important to make a distinction between “predictive coding,” which he says is about transmitting information efficiently, and “predictive processing,” which he defines as prediction-making over time.
Last year, researchers at the University of Sussex even used virtual reality and artificial intelligence technologies that included predictive coding features to create what they called the “Hallucination Machine,” a tool that was able to mimic the altered hallucinatory states typically caused by psychedelic drugs.
Machine learning advances could be used to provide new insights into what’s happening in the brain by comparing how well predictive coding models perform against other techniques.

The original article.

Summary of “At any point in life, people spend their time in 25 places”

At any given time, people regularly return to a maximum of 25 places.
“We first analysed the traces of about 1000 university students. The dataset showed that the students returned to a limited number of places, even though the places changed over time. I expected to see a difference in the behavior of students and a wide section of the population. But that was not the case. The result was the same when we scaled up the project to 40,000 people of different habits and gender from all over the world. It was not expected in advance. It came as a surprise,” says Dr. Alessandretti.
The study showed that people are constantly exploring new places.
The number of regularly visited places holds constant at around 25 in any given period.
If a new place is added to the list, one of the places disappears.
“People are constantly balancing their curiosity and laziness. We want to explore new places but also want to exploit old ones that we like. Think of a restaurant or a gym. In doing so we adopt and abandon places all the time. We found that this dynamic yields an unexpected result: We visit a constant, fixed number of places – and it’s not due to lack of time. We found evidence that this may be connected to other limits to our life, such as the number of active social interactions we can maintain in our life, but more research is in order to clarify this point,” says Dr. Baronchelli.
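A toy simulation of the explore/exploit dynamic described in the quote above, with a fixed capacity where adopting a new place pushes an old one out, might look like the sketch below; the drop rule (least recently visited) is an assumption made for illustration, not the authors’ model:

```python
import random

CAPACITY = 25            # the fixed number of regularly visited places

def simulate(steps=10_000, explore_prob=0.1, seed=0):
    """Toy explore/exploit walk over places. When a newly explored place
    enters the active set at full capacity, the least recently visited
    place is dropped. The drop rule is an assumption for illustration,
    not the model used in the study."""
    rng = random.Random(seed)
    last_visit = {}                                # place -> step of most recent visit
    next_place_id = 0
    for step in range(steps):
        if not last_visit or rng.random() < explore_prob:
            place = next_place_id                  # explore: visit a brand-new place
            next_place_id += 1
        else:
            place = rng.choice(list(last_visit))   # exploit: revisit a known place
        last_visit[place] = step
        if len(last_visit) > CAPACITY:
            oldest = min(last_visit, key=last_visit.get)
            del last_visit[oldest]
    return last_visit

active = simulate()
print(len(active))        # stays pinned at the capacity, here 25
```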
The work of Dr. Baronchelli and colleagues shows that those who have a tendency to visit many places are also likely to have many friends.

The original article.

Summary of “Why ‘Find your passion!’ may be bad advice”

The adage so commonly advised by graduation speakers might undermine how interests actually develop, according to Stanford researchers in an upcoming paper for Psychological Science.
In a series of laboratory studies, former postdoctoral fellow Paul O’Keefe, along with Stanford psychologists Carol Dweck and Gregory Walton, examined beliefs that may lead people to succeed or fail at developing their interests.
The research found that when people encounter inevitable challenges, a fixed mindset (the belief that interests come ready-made rather than being developed) makes it more likely they will surrender their newfound interest.
To test how these different belief systems influence the way people hone their interests, O’Keefe, Dweck and Walton conducted a series of five experiments involving 470 participants.
In the first set of experiments, the researchers recruited a group of students who identified as either a “techie” or a “fuzzy” – Stanford vernacular for students interested in STEM topics versus the arts and humanities.
In another experiment, the researchers piqued students’ interest by showing them an engaging video about black holes and the origin of the universe.
Interest then dropped when the students had to read a dense, technical article on the same topic, and the researchers found that the drop was greatest for students with a fixed mindset about interests.
“Difficulty may have signaled that it was not their interest after all,” the researchers wrote.

The original article.