Summary of “How Artificial Intelligence Is Changing Science”

“The approach is to say, ‘I think I know what the underlying physical laws are that give rise to everything that I see in the system.’ So I have a recipe for star formation, I have a recipe for how dark matter behaves, and so on. I put all of my hypotheses in there, and I let the simulation run. And then I ask: Does that look like reality?” What he’s done with generative modeling, he said, is “in some sense, exactly the opposite of a simulation. We don’t know anything; we don’t want to assume anything. We want the data itself to tell us what might be going on.”
The apparent success of generative modeling in a study like this obviously doesn’t mean that astronomers and graduate students have been made redundant – but it appears to represent a shift in the degree to which learning about astrophysical objects and processes can be achieved by an artificial system that has little more at its electronic fingertips than a vast pool of data.
“I just think we as a community are becoming far more sophisticated about how we use the data. In particular, we are getting much better at comparing data to data. But in my view, my work is still squarely in the observational mode.”
These systems can do all the tedious grunt work, he said, leaving you “to do the cool, interesting science on your own.”
Whether Schawinski is right in claiming that he’s found a “third way” of doing science, or whether, as Hogg says, it’s merely traditional observation and data analysis “on steroids,” it’s clear AI is changing the flavor of scientific discovery, and it’s certainly accelerating it.
Perhaps most controversial is the question of how much information can be gleaned from data alone – a pressing question in the age of stupendously large piles of it.
In The Book of Why, the computer scientist Judea Pearl and the science writer Dana Mackenzie assert that data are “profoundly dumb.” Questions about causality “can never be answered from data alone,” they write.
“Anytime you see a paper or a study that analyzes the data in a model-free way, you can be certain that the output of the study will merely summarize, and perhaps transform, but not interpret the data.” Schawinski sympathizes with Pearl’s position, but he described the idea of working with “data alone” as “a bit of a straw man.” He’s never claimed to deduce cause and effect that way, he said.

The original article.

Summary of “A philosopher argues that an AI can’t be an artist”

Ray Kurzweil, a futurist, predicts that by 2029 we will have produced an AI that can pass for an average educated human being.
Claims like Kurzweil’s that machines can reach human-level intelligence assume that to have a human mind is just to have a human brain that follows some set of computational algorithms – a view called computationalism.
For this reason, it seems to me, nothing but another human being can properly be understood as a genuinely creative artist.
We wouldn’t just be looking for new algorithms or procedures that simulate human activity; we would be looking for new materials that are the basis of being human.
A molecule-for-molecule duplicate of a human being would be human in the relevant way.
Just like previous tools of the music industry – from recording devices to synthesizers to samplers and loopers – new AI tools work by stimulating and channeling the creative abilities of the human artist.
A notional AI that comes up with a clever proof to a problem that has long befuddled human mathematicians is akin to AlphaGo and its variants: impressive, but nothing like Schoenberg.
Like a microscope, telescope, or calculator, such an AI is properly understood as a tool that enables human discovery – not as an autonomous creative agent.

The original article.

Summary of “DeepMind AI breakthrough on protein folding made scientists melancholy”

Do you think people walked away from the conference with realistic expectations for what AI can contribute to the field in the future?
How do you think machine learning advances will change the prestige economy that we’re used to?
[Long pause] One version is to say, “This is going to make it such that being able to make sense of data will be more important, will increase in prestige.” I think that’s reasonable to expect.
From my perspective, if there were a shift from the data collection exercise to the analysis exercise, I think that’d be a good thing in a way.
Just recently news reports came out about how AI is writing articles – one-third of the articles at Bloomberg are written with the help of AI. People always say, don’t worry, it’ll be a good thing because that’ll free up the journalists to do deeper thinking on more nuanced issues rather than focusing on the “who, what, where, when, why” – so there’s a funny parallel there.
In terms of whether you stay in academia or go to DeepMind or elsewhere, I think that’ll probably be driven by the person’s motivation.
Sigal Samuel: I think there’s also a class dimension at work here, right? Someone who’s highly trained, who has highly specialized knowledge, can potentially retrain or adapt the focus of their work so they’re not competing directly against AI. Do you think it’s easier for you than for, say, a factory worker to override the gut-level fear about being made obsolete?
We’ll think of ways to change society to define our value and identity in different ways.

The original article.

Summary of “From Oil to Oprah: An Oral History of the StairMaster”

The side panel was silver, and it had blue script that said “The Ergometer 6000.” I said, “That’s a model number, not a brand name.” We came up with a short list of names, and StairMaster was on that list, and so that’s what we went with.
I’m looking, going, “Wow, where are all the StairMasters?” And in the corner, there was one guy welding a StairMaster together.
In 1983, the StairMaster 5000, the original revolving-staircase machine, debuted at the National Sporting Goods Association trade show in Chicago.
The StairMaster became the third large category option for people who were looking for a cardio or aerobic exercise device.
In 1986, the company debuted the StairMaster 4000 Personal Trainer – a smaller, more affordable machine with two pedal steps instead of a revolving staircase.
Oprah had her own machine, and hardly a day went by for a period of time that she didn’t mention StairMaster on her show.
Jim Campbell, a firefighter in Indiana, carried out such a feat in 2006 to raise money for a charity started in honor of his brother-in-law, who was killed during 9/11. His goal: beat the record of Seattle-area firefighter Bill Ekse, who had stayed on a StairMaster for 24 hours and walked 66,103 steps.
Tim Hawkins, vice president of global sales and marketing at Core Health and Fitness: Today’s StairMaster is a full-line brand that has its roots in climbing – in stepping, basically – but is now really a brand that encompasses the high-intensity interval training category, or HIIT, as it’s known in our industry.

The original article.

Summary of “Explainer: What is a quantum computer?”

A quantum computer harnesses some of the almost-mystical phenomena of quantum mechanics to deliver huge leaps forward in processing power.
The secret to a quantum computer’s power lies in its ability to generate and manipulate quantum bits, or qubits.
Quantum computers, on the other hand, use qubits, which are typically subatomic particles such as electrons or photons.
Thanks to this counterintuitive phenomenon, a quantum computer with several qubits in superposition can crunch through a vast number of potential outcomes simultaneously.
Quantum computers harness entangled qubits in a kind of quantum daisy chain to work their magic.
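The scaling behind that “vast number of potential outcomes” can be made concrete: a classical description of n qubits requires 2^n complex amplitudes, one per possible outcome. A minimal Python sketch (illustrative only; the function names are invented, not from the article):

```python
import math

def state_vector_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n qubits classically."""
    return 2 ** n_qubits

def equal_superposition(n_qubits: int) -> list:
    """State vector with every basis state equally weighted, as when a
    Hadamard gate is applied to each qubit; probabilities sum to 1."""
    size = state_vector_size(n_qubits)
    amp = 1 / math.sqrt(size)
    return [complex(amp, 0.0)] * size

# 3 qubits already span 8 simultaneous outcomes; 50 qubits span ~10**15,
# which is why simulating even modest machines overwhelms classical hardware.
state = equal_superposition(3)
probabilities = [abs(a) ** 2 for a in state]
```

The exponential growth of `state_vector_size` is the whole story: each added qubit doubles the number of outcomes a superposition can represent at once.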
That milestone, known as quantum supremacy, is the point at which a quantum computer can complete a mathematical calculation that is demonstrably beyond the reach of even the most powerful supercomputer.
Some businesses are buying quantum computers, while others are using ones made available through cloud computing services.
Where is a quantum computer likely to be most useful first?

The original article.

Summary of “Why are Machine Learning Projects so Hard to Manage?”

One constant is that machine learning teams have a hard time setting goals and setting expectations.
In the first week, the accuracy went from 35% to 65%, but then over the next several months it never got above 68%. 68% accuracy was clearly the limit on the data with the best, most up-to-date machine learning techniques.
My friend Pete Skomoroch was recently telling me how frustrating it was to do engineering standups as a data scientist working on machine learning.
Engineering projects generally move forward, but machine learning projects can completely stall.
Machine learning generally works well as long as you have lots of training data *and* the data you’re running on in production looks a lot like your training data.
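That dependence on production data resembling training data can be shown with a toy model (a hypothetical sketch, not from the article): a straight line fit to inputs in [0, 1] predicts well there, but fails badly once inputs drift far outside that range.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# "Training data": inputs in [0, 1], true relationship y = x**2.
train_x = [i / 100 for i in range(101)]
train_y = [x ** 2 for x in train_x]
a, b = fit_line(train_x, train_y)

def predict(x):
    return a * x + b

# In-distribution error stays small; far outside the training range,
# the curvature the line never saw makes the error explode.
err_in = abs(predict(0.5) - 0.5 ** 2)    # input like the training data
err_out = abs(predict(10.0) - 10.0 ** 2)  # drifted production input
```

Nothing about the fitted model warns you it is wrong at x = 10; the failure only shows up when production inputs stop looking like the training set.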
Machine Learning requires lots and lots of relevant training data.
What’s Next? The original goal of machine learning was mostly around smart decision making, but more and more we are trying to put machine learning into products we use.
As we start to rely more and more on machine learning algorithms, machine learning becomes an engineering discipline as much as a research topic.

The original article.

Summary of “Meet the scientists who are training AI to diagnose mental illness”

They want to compare healthy people’s brains to those of people with mental health disorders.
Psychiatry is seeking to measure the mind, which is not quite the same thing as the brain. For the Virginia Tech team looking at my brain, computational psychiatry had already teased out new insights while they were working on a study published in Science in 2008.
The algorithm can find new patterns in our social behaviors, or see where and when a certain therapeutic intervention is effective, perhaps providing a template for preventative mental health treatment through exercises one can do to rewire the brain.
With those patterns in hand, Chiu imagines the ability to diagnose more acutely, say, a certain kind of depression, one that regularly manifests itself in a specific portion of the brain.
The fMRI has its problems: for instance, scientists are not truly looking at the brain, according to Science Alert.
There is a brain chemical composition that is associated with some depressed people, Greenberg says, but not all who meet the DSM criteria.
The lab’s approach asks what the brain is doing during a task while considering the entire brain.
As the afternoon sun slants through the windows of a common area – partitioned by a math-covered wall – Chiu and King-Casas take turns bouncing their young baby and discussing a future of psychiatry in which she may live: algorithm-driven diagnostic models, targeted therapies, and brain training methods, driven by real-time fMRI results, that shift psychiatry into the arena of preventative medicine.

The original article.

Summary of “The state of AI in 2019”

Experts refer to this specific instance of AI as artificial general intelligence, and if we do ever create something like this, it’ll likely be a long way in the future.
No one is helped by exaggerating the intelligence or capabilities of AI systems.
It’s better to talk about “machine learning” rather than AI. This is a subfield of artificial intelligence, and one that encompasses pretty much all the methods having the biggest impact on the world right now.
How does machine learning work? Over the past few years, I’ve read and watched dozens of explanations, and the distinction I’ve found most useful is right there in the name: machine learning is all about enabling computers to learn on their own.
If you’re not explicitly teaching the computer, how do you know how it’s making its decisions? Machine learning systems can’t explain their thinking, and that means your algorithm could be performing well for the wrong reasons.
Teaching computers to learn for themselves is a brilliant shortcut – and like all shortcuts, it involves cutting corners.
There’s intelligence in AI systems, if you want to call it that.
Kai-Fu Lee, a venture capitalist and former AI researcher, describes the current moment as the “age of implementation” – one where the technology starts “spilling out of the lab and into the world.” Benedict Evans, another VC strategist, compares machine learning to relational databases, a type of enterprise software that made fortunes in the ’90s and revolutionized whole industries, but that’s so mundane your eyes probably glazed over just reading those two words.

The original article.

Summary of “What is machine learning? We drew you another flowchart”

The vast majority of the AI advancements and applications you hear about refer to a category of algorithms known as machine learning.
Machine learning is the process that powers many of the services we use today – recommendation systems like those on Netflix, YouTube, and Spotify; search engines like Google and Baidu; social-media feeds like Facebook and Twitter; voice assistants like Siri and Alexa.
In all of these instances, each platform is collecting as much data about you as possible – what genres you like watching, what links you are clicking, which statuses you are reacting to – and using machine learning to make a highly educated guess about what you might want next.
Deep learning is machine learning on steroids: it uses a technique that gives machines an enhanced ability to find – and amplify – even the smallest patterns.
One last thing you need to know: machine learning comes in three flavors: supervised, unsupervised, and reinforcement.
In supervised learning, the most prevalent, the data is labeled to tell the machine exactly what patterns it should look for.
Lastly, we have reinforcement learning, the latest frontier of machine learning.
Reinforcement learning is the basis of Google’s AlphaGo, the program that famously beat the best human players in the complex game of Go. That’s it.
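The supervised flavor can be sketched in a few lines (a hypothetical example; the labels and points are invented, not from the article): the labeled examples tell the machine exactly what pattern to look for, here via a 1-nearest-neighbor rule.

```python
def nearest_label(point, examples):
    """Return the label of the labeled training example closest to `point`."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    _, label = min(examples, key=lambda ex: dist2(point, ex[0]))
    return label

# Labeled training data: (features, label) pairs supply the supervision.
training = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((8.0, 9.0), "dog"),
    ((9.0, 8.5), "dog"),
]

prediction = nearest_label((1.1, 0.9), training)  # closest examples are cats
```

In unsupervised learning the labels would be absent and the algorithm would have to group the points on its own; in reinforcement learning there would be no fixed dataset at all, only rewards from trial and error.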

The original article.

Summary of “You thought fake news was bad? Deep fakes are where truth goes to die”

In April, the film director Jordan Peele and BuzzFeed released a deep fake of Barack Obama calling Trump a “total and complete dipshit” to raise awareness about how AI-generated synthetic media might be used to distort and manipulate reality.
Hwang has been studying the spread of misinformation on online networks for a number of years, and, with the exception of the small-stakes Belgian incident, he has yet to see any examples of truly corrosive incidents of deep fakes “in the wild”.
Farid, who has spent the past 20 years developing forensic technology to identify digital forgeries, is currently working on new detection methods to counteract the spread of deep fakes.
As the threat of deep fakes intensifies, so do efforts to produce new detection methods.
Relying on forensic detection alone to combat deep fakes is becoming less viable, he believes, due to the rate at which machine learning techniques can circumvent them.
“The problem isn’t just that deep fake technology is getting better,” he said.
As the fake video of Trump that spread through social networks in Belgium earlier this year demonstrated, deep fakes don’t need to be undetectable or even convincing to be believed and do damage.
It is possible that the greatest threat posed by deep fakes lies not in the fake content itself, but in the mere possibility of their existence.

The original article.