Summary of “Why are Machine Learning Projects so Hard to Manage?”

One constant is that machine learning teams have a hard time setting goals and setting expectations.
In the first week, the accuracy went from 35% to 65%, but over the next several months it never got above 68%. 68% accuracy was clearly the limit of the data, even with the best, most up-to-date machine learning techniques.
My friend Pete Skomoroch was recently telling me how frustrating it was to do engineering standups as a data scientist working on machine learning.
Engineering projects generally move forward, but machine learning projects can completely stall.
Machine learning generally works well as long as you have lots of training data *and* the data you’re running on in production looks a lot like your training data.
Machine Learning requires lots and lots of relevant training data.
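One way to make that requirement concrete is to check whether production inputs still resemble the training data. Below is a minimal sketch, not from the article, that flags per-feature drift with a two-sample Kolmogorov–Smirnov test; the feature names, data, and significance threshold are illustrative assumptions.

```python
# Sketch: per-feature drift check between training and production data.
# All data and names here are made up for illustration.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Stand-ins for real data: rows are examples, columns are numeric features.
train = rng.normal(loc=0.0, scale=1.0, size=(5000, 3))
production = rng.normal(loc=[0.0, 0.4, 0.0], scale=1.0, size=(5000, 3))  # feature 1 has drifted

feature_names = ["session_length", "num_clicks", "account_age"]  # hypothetical names
for i, name in enumerate(feature_names):
    stat, p_value = ks_2samp(train[:, i], production[:, i])
    drifted = p_value < 0.01  # illustrative threshold
    print(f"{name}: KS={stat:.3f}, p={p_value:.3g}, drift={'yes' if drifted else 'no'}")
```

A check like this won't explain why a model degrades, but it gives a standup-friendly signal that production data no longer looks like what the model was trained on.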
What's next? The original goal of machine learning was mostly around smart decision making, but more and more we are trying to put machine learning into products we use.
As we start to rely more and more on machine learning algorithms, machine learning becomes an engineering discipline as much as a research topic.

The original article.

Summary of “Meet the scientists who are training AI to diagnose mental illness”

They want to compare healthy people’s brains to those of people with mental health disorders.
Psychiatry is seeking to measure the mind, which is not quite the same thing as the brain. For the Virginia Tech team looking at my brain, computational psychiatry had already teased out new insights while they were working on a study published in Science in 2008.
The algorithm can find new patterns in our social behaviors, or see where and when a certain therapeutic intervention is effective, perhaps providing a template for preventative mental health treatment through exercises one can do to rewire the brain.
With those patterns in hand, Chiu imagines the ability to diagnose more acutely, say, a certain kind of depression, one that regularly manifests itself in a specific portion of the brain.
The fMRI has its problems: for instance, scientists are not truly looking at the brain, according to Science Alert.
There is a brain chemical composition that is associated with some depressed people, Greenberg says, but not all who meet the DSM criteria.
The lab’s approach asks what the brain is doing during a task while considering the entire brain.
As the afternoon sun slants through the windows of a common area – partitioned by a math-covered wall – Chiu and King-Casas take turns bouncing their young baby and discussing a future of psychiatry in which she may live: algorithm-driven diagnostic models, targeted therapies, and brain training methods, driven by real-time fMRI results, that shift psychiatry into the arena of preventative medicine.

The original article.

Summary of “How artificial intelligence can help us make judges less biased”

Daniel L. Chen, a researcher at both the Toulouse School of Economics and University of Toulouse Faculty of Law, has a different idea: using AI to help correct the biased decisions of human judges.
Chen, who holds both a law degree and a doctorate in economics, has spent years collecting data on judges and US courts.
In a new working paper, Chen lays out a suggestion for how large datasets combined with artificial intelligence could help predict judges’ decisions and help us nudge them to make sentencing fairer.
It’s very well-known by now that judges’ decisions are often biased by factors that aren’t relevant to the case at hand.
That raises the question: why are judges so predictable even before the facts of a case are known? One interpretation is that judges may be resorting to snap judgments and heuristics rather than deciding on the facts.
A big dataset can help us say that in these certain situations, the judge is more likely to be influenced in a given direction.
We have a paper showing that judges tend to be more lenient on defendants' birthdays.
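To make the idea concrete, here is an illustrative sketch, not Chen's actual model or data: if a classifier trained only on case-irrelevant features (a birthday flag, the hour of the hearing) predicts rulings better than chance, that predictability is itself evidence that something other than the facts is driving outcomes. Everything below is synthetic.

```python
# Sketch: measure how predictable rulings are from case-irrelevant features alone.
# Synthetic data; feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000

hearing_hour = rng.integers(8, 18, n)        # hour of day the case is heard
birthday_flag = rng.integers(0, 2, n)        # 1 if it is the defendant's birthday
days_since_break = rng.integers(0, 30, n)    # days since the judge's last vacation
X = np.column_stack([hearing_hour, birthday_flag, days_since_break])

# Synthetic rulings that leak a little of the extraneous signal (1 = lenient).
logits = 0.4 * birthday_flag - 0.03 * days_since_break
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

auc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5, scoring="roc_auc").mean()
print(f"Mean AUC from case-irrelevant features alone: {auc:.2f} (0.5 = no signal)")
```

An AUC meaningfully above 0.5 here would be the kind of early predictability the paper treats as a red flag.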
On the one hand, people might just get used to big data helping judges make decisions.

The original article.

Summary of “How do you fight an algorithm you cannot see? – TechCrunch”

So the activists created a platform called OpenSchufa that would attempt to discover the details of this credit-scoring algorithm.
Since its launch, several thousand people have donated their scores, and the activists have learned that the algorithm can be quite “error-prone” – creating relatively negative scores without any negative evidence.
One of the biggest challenges today for machine learning is what is known as the “black box problem.” Software engineers can test algorithms to see if their output matches the expectations of a test set, but we have no insight into how the algorithm actually arrived at its final decision.
Without data, and without publishing the algorithm, it’s extremely difficult to understand how it is making a decision.
In the case of deep learning, it’s basically impossible to understand how it is making a decision even if you do have the data and the algorithm.
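One common partial workaround is to probe the model from the outside, for example by shuffling one input at a time and measuring how much accuracy drops (permutation importance). The sketch below is a toy illustration of that general technique on synthetic data, not the method of any system mentioned in the article.

```python
# Sketch: probing a black-box model with a manual permutation-importance loop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = black_box.score(X_test, y_test)

rng = np.random.default_rng(0)
for i in range(X.shape[1]):
    X_shuffled = X_test.copy()
    rng.shuffle(X_shuffled[:, i])  # destroy feature i's relationship to the label
    drop = baseline - black_box.score(X_shuffled, y_test)
    print(f"feature {i}: accuracy drop when shuffled = {drop:.3f}")
```

Probes like this show which inputs a model leans on, but they still fall well short of a human-readable explanation of any single decision.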
That has led to a growing movement of theorists concerned about algorithmic accountability, of ensuring that we both understand how an algorithm makes a decision, and that the decision-making is legally non-discriminatory.
Social theorists like Frank Pasquale have warned that we are creating a “black box society” in which key moments of our lives are mediated by unknown, unseen, and arbitrary algorithms.
Clearly making algorithms simpler for humans to understand and building trust in these digital decision-makers is good for society, but we have no easy pathways to that outcome.

The original article.

Summary of “Never mind killer robots-here are six real AI dangers to watch out for in 2019”

The past year showed that AI may cause all sorts of hazards long before anything like killer robots arrives.
Six controversies from 2018 stand out as warnings that even the smartest AI algorithms can misbehave, or that carelessly applying them can have dire consequences.
Waymo, a subsidiary of Alphabet, has made the most progress; it rolled out the first fully autonomous taxi service in Arizona last year.
Last year, an AI peace movement took shape when Google employees learned that their employer was supplying technology to the US Air Force for classifying drone imagery.
Military use of AI is only gaining momentum and other companies, like Microsoft and Amazon, have shown no reservations about helping out.
What to watch out for in 2019: Although Pentagon spending on AI projects is increasing, activists hope a preemptive treaty banning autonomous weapons will emerge from a series of UN meetings slated for this year.
What to watch out for in 2019: Face recognition will spread to vehicles and webcams, and it will be used to track your emotions as well as your identity.
What to watch for in 2019: As deepfakes improve, people will probably start being duped by them this year.

The original article.

Summary of “I Gave a Bounty Hunter $300. Then He Located Our Phone”

The bounty hunter sent the number to his own contact, who would track the phone.
The bounty hunter did this all without deploying a hacking tool or having any previous knowledge of the phone’s whereabouts.
While it’s common knowledge that law enforcement agencies can track phones with a warrant to service providers, with IMSI catchers, or, until recently, via other companies that sell location data such as Securus, at least one company, called Microbilt, is selling phone geolocation services with little oversight to a spread of different private industries, ranging from car salesmen and property managers to bail bondsmen and bounty hunters, according to sources familiar with the company’s products and company documents obtained by Motherboard.
The investigation also shows that a wide variety of companies can access cell phone location data, and that the information trickles down from cell phone providers to a wide array of smaller players, who don’t necessarily have the correct safeguards in place to protect that data.
Your mobile phone is constantly communicating with nearby cell phone towers, so your telecom provider knows where to route calls and texts.
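As a rough illustration of how location falls out of that tower traffic, the sketch below estimates a position as a signal-strength-weighted average of the towers a phone can see; the coordinates and weights are invented, and real carrier (and Microbilt) methods are proprietary and more sophisticated.

```python
# Sketch: crude phone localization as a weighted centroid of nearby cell towers.
# Tower coordinates and signal strengths are made up for illustration.
towers = [
    # (latitude, longitude, relative signal strength 0..1)
    (40.7585, -73.9857, 0.9),
    (40.7614, -73.9776, 0.6),
    (40.7527, -73.9772, 0.4),
]

total_weight = sum(w for _, _, w in towers)
est_lat = sum(lat * w for lat, _, w in towers) / total_weight
est_lon = sum(lon * w for _, lon, w in towers) / total_weight
print(f"Estimated position: ({est_lat:.4f}, {est_lon:.4f})")
```

Even an estimate this crude can narrow someone down to a neighborhood, which is why access to carrier-side location data is so sensitive.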
Armed with just a phone number, Microbilt’s “Mobile Device Verify” product can return a target’s full name and address, geolocate a phone in an individual instance, or operate as a continuous tracking service.
The bail source who originally alerted Motherboard to Microbilt said that bounty hunters have used phone geolocation services for non-work purposes, such as tracking their girlfriends.
Last year, Motherboard reported on a company that previously offered phone geolocation to bounty hunters; Microbilt is operating even after a wave of outrage from policy makers.

The original article.

Summary of “Treat Failure Like a Scientist”

During her time there, Beck said that she learned how to treat failure like a scientist.
How does a scientist treat failure? And what can we learn from their approach?
That’s exactly how a scientist treats failure: as another data point.
This is much different than how society often talks about failure.
For most of us, failure feels like an indication of who we are as a person.
For the scientist, a negative result is not an indication that they are a bad scientist.
Failure will always be part of your growth for one simple reason.
To paraphrase Seth Godin: Failure is simply a cost you have to pay on the way to being right.

The original article.

Summary of “We tried teaching an AI to write Christmas movie plots. Hilarity ensued. Eventually.”

So we fed plot summaries of 360 Christmas movies, courtesy of Wikipedia, into a machine-learning algorithm to see if we could get it to spit out the next big holiday blockbuster.
Rubs serious a resort bet elves cared the a in day tallen shady with christmas unveiling retrieve died california awaits is groundhog after back of the wise janitor christmas traumatized the to to discover popular to his community.
We also generated Christmas movie titles using word mode for good measure.
Synopses: A family of the Christmas terrorist1 and offering the first time to be a charlichhold for a new town to fight.
A intercepting suffers and a friends up change Christmas with his and save Christmas time.
A stranded on Christmas Eve to the New York family before Christmas.
Lonely courier village newspaper by home destroy Christmas Christmas Christmas8 the prancer.
Babysitter boy tries to party the Christmas in of for more Christmas.
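For readers curious how corpus-driven generation like this works in principle, here is a minimal sketch using a word-level Markov chain over a made-up two-summary corpus; the article does not specify its actual model, so this is only an illustrative stand-in, but it fails in recognizably similar ways.

```python
# Sketch: a tiny word-level Markov chain "plot generator" over a toy corpus.
import random
from collections import defaultdict

corpus = [
    "a lonely toymaker saves christmas with help from a reindeer",
    "a big city lawyer returns home for christmas and saves the family inn",
]

# Build a table of word -> words observed to follow it.
transitions = defaultdict(list)
for summary in corpus:
    words = summary.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

# Sample a new "plot" one word at a time.
random.seed(1)
word = "a"
plot = [word]
for _ in range(12):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)
    plot.append(word)
print(" ".join(plot))
```

With only two summaries the chain mostly stitches fragments of them back together; with hundreds of summaries and a stronger model, you get output like the synopses above: locally word-like, globally nonsense.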

The original article.

Summary of “How Google Tracks Your Personal Information”

It started in the early 2000s, when people, in return for having access to Google products and seeing more relevant ads, allowed Google to have all their data.
Today, Google provides marketers like me with so much of your personal data that we can infer more about you from it than from any camera or microphone.
Back in December 2008, Hal Roberts, a fellow at the Berkman Klein Center for Internet & Society at Harvard, spoke about Google Ads as a form of “gray surveillance.” Roberts described Google as “a system of collective intelligence” that, along with marketers, hoarded and exploited your data.
I will explain, in everyday language, how Google and Google Ads work “under the hood” to track your data.
Then I will expose, from an insider’s perspective, what the vast majority of the public doesn’t know: how Google Ads is abused by search engine marketers and how people are essentially bought and sold through this platform.
I will cover what Google has tried to do to fix Google Ads.
Google users would not be so forthright with the search engine if they understood how far down this rabbit hole goes.
With the insider information I will provide, I hope readers can return to a place where Google is not the only outlet for their fears, regrets, hopes, and dreams.

The original article.

Summary of “It’s time for a Bill of Data Rights”

The proliferation of data in recent decades has led some reformers to a rallying cry: “You own your data!” Eric Posner of the University of Chicago, Eric Weyl of Microsoft Research, and virtual-reality guru Jaron Lanier, among others, argue that data should be treated as a possession.
If an algorithm is unfair (if, for example, it wrongly classifies you as a health risk because it was trained on a skewed data set or simply because you’re an outlier), then letting you “own” your data won’t make it fair.
Even if you tried to hoard data that pertains to you, corporations and governments with access to large amounts of data about other people could use that data to make inferences about you.
Even if you deny consent to “your” data being used, an organization can use data about other people to make statistical extrapolations that affect you.
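A small synthetic sketch of that point: a model trained on other people's shared records can produce a confident statistical guess about someone who withheld consent, using only attributes they cannot easily hide. All data below is made up for illustration.

```python
# Sketch: inferring a withheld sensitive attribute from other people's data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Other people's records: observable attributes plus a sensitive label they shared.
zip_code = rng.integers(0, 50, n)   # index of residential area
age = rng.integers(18, 90, n)
health_risk = ((zip_code < 10) & (age > 60)).astype(int)  # synthetic correlation

model = LogisticRegression(max_iter=1000).fit(np.column_stack([zip_code, age]), health_risk)

# Someone who never shared a health label is still legible to the model
# through attributes they cannot realistically withhold.
person = np.array([[4, 67]])
print(f"Inferred probability of a health-risk flag: {model.predict_proba(person)[0, 1]:.2f}")
```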
Existing solutions to unfair uses of data often involve controlling not who has access to data, but how data is used.
To make a difference for people like Rachel, a Bill of Data Rights will need a new set of institutions and legal instruments to safeguard the rights it lays out.
The new data-rights infrastructure should go further and include boards, data cooperatives, ethical data-certification schemes, specialized data-rights litigators and auditors, and data representatives who act as fiduciaries for members of the general public, able to parse the complex impacts that data can have on life.
Thinking of data as we think of a bicycle, oil, or money fails to capture how deeply relationships between citizens, the state, and the private sector have changed in the data era.

The original article.