December 10, 2018. Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will.
Digital life is augmenting human capacities and disrupting eons-old human activities.
The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities.
Bryan Johnson, founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
Marina Gorbis, executive director of the Institute for the Future, said, “Without significant changes in our political economy and data governance regimes [AI] is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions. Every time we program our environments, we end up programming ourselves and our interactions. Humans have to become more standardized, removing serendipity and ambiguity from our interactions. And this ambiguity and complexity is what is the essence of being human.”
Michael M. Roberts, first president and CEO of the Internet Corporation for Assigned Names and Numbers and Internet Hall of Fame member, wrote, “The range of opportunities for intelligent agents to augment human intelligence is still virtually unlimited. The major issue is that the more convenient an agent is, the more it needs to know about you – preferences, timing, capacities, etc. – which creates a tradeoff: more help requires more intrusion. This is not a black-and-white issue – the shades of gray and associated remedies will be argued endlessly. The record to date is that convenience overwhelms privacy. I suspect that will continue.”
danah boyd, a principal researcher for Microsoft and founder and president of the Data & Society Research Institute, said, “AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. There will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability. Take, for example, climate change and climate migration. This will further destabilize Europe and the U.S., and I expect that, in panic, we will see AI be used in harmful ways in light of other geopolitical crises.”
Batya Friedman, a human-computer interaction professor at the University of Washington’s Information School, wrote, “Our scientific and technological capacities have and will continue to far surpass our moral ones – that is our ability to use wisely and humanely the knowledge and tools that we develop. Automated warfare – when autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life or even knowledge that an enemy’s life has been taken. At stake is nothing less than what sort of society we want to live in and how we experience our humanity.”
The original article.
The internet fundamentally changed the way we live, work, and play, and the nature of work itself has transitioned in large part from algorithmic tasks to heuristic ones that require critical thinking, problem-solving, and creativity.
Jason Fried, co-founder of Basecamp and author of It Doesn’t Have to Be Crazy at Work, said on my podcast, Future Squared, that for creative jobs such as programming and writing, people need time to truly think about the work that they’re doing.
“If you asked them when the last time they had a chance to really think at work was, most people would tell you they haven’t had a chance to think in quite a long time, which is really unfortunate.”
“People waste a lot of time at work,” according to Grant.
Cal Newport, best-selling author of Deep Work: Rules for Focused Success in a Distracted World, echoes Grant’s sentiments, saying that “Three to four hours of continuous, undisturbed deep work each day is all it takes to see a transformational change in our productivity and our lives.”
The team maintained, and in some cases increased, its quantity and quality of work, with people reporting an improved mental state and more time for rest, family, friends, and other endeavors.
Block out time in your calendar, work on one thing at a time, do the hardest thing first, try listening to binaural beats, or use the Pomodoro technique, a time management method that uses a timer to break work into intervals, traditionally 25 minutes long, separated by short breaks.
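The Pomodoro cadence described above, fixed work intervals separated by short breaks, is simple enough to sketch in code. This is a minimal illustration, not anything from the article; the function name, the default interval lengths, and the common convention of a longer break after every fourth interval are all assumptions:

```python
# Sketch of a Pomodoro schedule builder: fixed-length work intervals
# separated by short breaks, with a longer break after every fourth
# interval (a common convention; all defaults here are illustrative).

def pomodoro_schedule(total_minutes, work=25, short_break=5, long_break=15):
    """Return a list of ("work"/"break", minutes) blocks covering
    total_minutes of focused work."""
    schedule = []
    worked = 0
    intervals = 0
    while worked < total_minutes:
        block = min(work, total_minutes - worked)
        schedule.append(("work", block))
        worked += block
        intervals += 1
        if worked < total_minutes:  # no break after the final interval
            rest = long_break if intervals % 4 == 0 else short_break
            schedule.append(("break", rest))
    return schedule

# Example: plan two hours of deep work
for label, minutes in pomodoro_schedule(120):
    print(f"{label:>5}: {minutes} min")
```

Swapping the defaults lets you experiment with longer blocks, in the spirit of Newport’s three to four hours of undisturbed deep work.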
Organizations are spending big money on digital transformation, but they could reap an immediate and far more cost-effective transformational benefit just by changing the way they work, rather than the tools they use to do it.
The original article.
America’s political past was not more procedural and restrained than its present, and religion does not, in general, calm political divides.
Even as Sullivan decries political tribalism, here is his theory of it: A decline in people practicing his form of Christian faith has led to a rise in “political cultists” who find their ultimate meaning in politics, who will stop at nothing to achieve their political goals, and who cannot be reasoned or compromised with.
“A lot of what people nostalgically consider eras without tribalism are in fact moments in American history where people of color, particularly black people, have been deprived of political power, and so things like ethnic and racial lines became less salient.”
Another way to put it is that social justice theory encourages the consideration of privilege in order to prevent people from being so blinded by their own perspective that they look at America’s political past and declaim this the era in which we departed from political proceduralism and collapsed into illiberalism.
Their leaders have turned Christianity into a political and social identity, not a lived faith, and much of their flock – a staggering 81 percent voted for Trump – has signed on.
As a matter of political analysis, Sullivan is trying to close a gaping hole in his argument by defining his Christian practice as true and competing interpretations, no matter how widespread they are, as aberrant.
Sullivan is grappling for an explanation of rising political tribalism, and there he may want to dispense with the introspection and explore the work of people who actually study it, like political scientist Lilliana Mason.
If our explanation for political tribalism takes the form of “It’s everyone else’s fault,” more likely than not, we have gone awry.
The original article.
In the vast majority of cases, expressing anger resulted in all parties becoming more willing to listen, more inclined to speak honestly, more accommodating of each other’s complaints.
We’re more likely to perceive people who express anger as competent, powerful, and the kinds of leaders who will overcome challenges.
“When we become angry, we feel like we’re taking control, like we’re getting power over something.” Watching angry people, as viewers of reality television know, is highly entertaining, so expressing anger is a surefire method for capturing the attention of an otherwise indifferent crowd.
If moral indignation persists, however, and if the indignant lose faith that their anger is being heard, it can produce a third type of anger: a desire for revenge against our enemies that privileges inflicting punishment over reaching accord.
Scholars, in examining successful protest movements, have sought to explain how anger goes from the fleeting feeling that Averill studied to a pervasive, more powerful moral force.
The political actors who use anger to more cynical ends still have the upper hand.
The ways in which anger is constantly stoked from every side are new, and the partisan divide that such anger fosters may have pushed us further down a path toward widespread violence than we realize.
As America reaches the midpoint of a presidential administration that has driven nearly everyone into a rage of one kind or another, we are at a crossroads: Will we continue, blindly furious? Or will we see our rage as a disease that must be cured?
The goal shouldn’t be to eradicate anger.
The original article.
Dealing effectively with emotions is a key leadership skill.
Naming our emotions – what psychologists call labeling – is an important first step in dealing with them effectively.
Why didn’t the project work? And what’s going to become of her job now? All of these emotions feed into her anger, but they are also separate feelings that she should identify and address.
Like them, we need a more nuanced vocabulary for emotions, not just for the sake of being more precise, but because incorrectly diagnosing our emotions makes us respond incorrectly.
You might be surprised at the breadth of your emotions – or that you’ve unearthed a deeper emotion buried beneath the more obvious one.
These experiments also revealed that over time those who wrote about their feelings began to develop insights into what those feelings meant, using phrases such as “I have learned,” “It struck me that,” “The reason that,” “I now realize,” and “I understand.” The process of writing allowed them to gain a new perspective on their emotions and to understand them and their implications more clearly.
You can also use these three approaches (broadening your vocabulary, noting the intensity of an emotion, and writing it out) when trying to better understand another person’s emotions.
Once you understand what you are feeling, you can better address and learn from those more accurately described emotions.
The original article.
Books increasingly don’t have covers: The rapid rise of tablets and e-readers has led to more books being read on screens, which de-emphasize the cover as both a visual identifier and a physical delimiter.
Now, on screens, covers persist as vestigial rectangular images, superfluously ornamenting search results or PDFs. Does that shift in emphasis mean readers engage more directly with texts themselves, rather than judging books by their covers as the cliché warns? Fifty Shades of Grey and self-help books boomed in popularity on post-cover devices.
While the design of libraries and bookstores prioritizes the coherent visual display of book covers and spines so that people can navigate collections and find the singular physical objects the covers signify, the endlessly rewritable surface of the screen dispenses with that arrangement.
Covers aren’t essential for discovering content on platforms.
If covers can be construed as misleading or superficial wrappers, platform algorithms are hardly more honest.
These dynamics highlight how, on platforms like Spotify and Netflix, specific artists and their works are not the objects offered to the users for consumption – a focus that covers supported.
A book’s cover belonged traditionally not to the book itself but rather to the retail or public environment in which a book is deployed and displayed, in which it claims its place among other books, and in relation to the public eye and mind of the citizen-reader.
While a book cover wrapped an individual work – an independently defined, freestanding unit of content – a platform interface wraps the entire collection of works that users can access through it.
The original article.
Putting names to faces, like formulating conspiracy theories, relies on pattern recognition.
In the late sixties and early seventies, computer scientists began trying to use a digital form of pattern recognition to identify faces in photographs.
The company’s researchers used a version of the system on tennis matches at the 2017 U.S. Open; it worked flawlessly, Smith said, except with one player, who appeared to the computer to be pumping her fist when she was actually just wiping her face with a towel.
“So, if we have a system that looks at them when they aren’t afraid, we may see the pain sooner.” David Hunt said, “One of the joys of facial recognition is that we can see cows’ natural behavior, instead of ‘Uh-oh, girls, calm down, don’t make eye contact with the predator.’” Once the company’s algorithms have been fully trained, a farmer won’t have to be present even to know that a cow is about to calve, something that happens on Lawlor’s farm an average of once a day.
In 2016, she was the lead author of “The Perpetual Line-Up: Unregulated Police Face Recognition in America,” a study whose title refers to the fact that many states allow police departments to search their databases of mug shots and driver’s-license photos.
Faces, unlike fingerprints or iris patterns, can easily be recorded without the knowledge of the people they belong to, and that means that facial recognition can be used for remote surveillance.
“Yet that’s what face recognition enables.” Computer-vision systems potentially allow cops and employers to track behaviors and activities that are none of their business, such as where you hang out after work, which fund-raisers you attend, and what that slight tremor in your hand portends about the size of your future medical claims.
The reliable real-time identification of more than a billion people by their faces alone is not possible yet, but the Chinese system doesn’t depend on faces alone.
The original article.
I should know: For the last three decades, since I started graduate school at the Massachusetts Institute of Technology, studying with the inspiring cognitive scientist Steven Pinker, I have been embroiled in an on-again, off-again debate about the nature of the human mind and the best way to build AI. I have taken the sometimes unpopular position that techniques like deep learning aren’t enough to capture the richness of the human mind.
When 140 characters no longer seemed like enough, I tried to take a step back, to explain why deep learning might not be enough, and where we perhaps ought to look for another idea that might combine with deep learning to take AI to the next level.
In a series of tweets he claimed that I hate deep learning, and that because I was not personally an algorithm developer, I had no right to speak critically; for good measure, he said that if I had finally seen the light of deep learning, it was only in the last few days, in the space of our Twitter discussion.
I have been giving deep learning credit ever since I first wrote about it: in The New Yorker in 2012; in my January 2018 article “Deep Learning: A Critical Appraisal,” in which I explicitly said, “I don’t think we should abandon deep learning”; and on many occasions in between.
To take another example, consider a widely-read 2015 article in Nature on deep learning by LeCun, Bengio, and Geoffrey Hinton, the trio most associated with the invention of deep learning.
There again much of what was said is true, but there was almost nothing acknowledged about the limits of deep learning, so that it would be easy to walk away from the paper imagining that deep learning is a much broader tool than it really is.
The paper’s conclusion furthers that impression by suggesting that deep learning’s historical antithesis, symbol-manipulation/classical AI, should be replaced: “New paradigms are needed to replace rule-based manipulation of symbolic expressions by operations on large vectors.” The traditional ending of many scientific papers, a discussion of limits, is essentially missing, inviting the inference that the horizons for deep learning are limitless.
When I rail about deep learning, it’s not because I think it should be “replaced,” but because I think it has been oversold, often with vastly greater attention to its strengths than to its potential limitations, and because exuberance for deep learning is often accompanied by a hostility to symbol-manipulation that I believe is a foundational mistake in the ultimate solution to AI. I think it is far more likely that the two, deep learning and symbol-manipulation, will coexist, with deep learning handling many aspects of perceptual classification but symbol-manipulation playing a vital role in reasoning about abstract knowledge.
The original article.
The gilets jaunes, or yellow vests, in France, have been the subject of anxiety, controversy, and, at times, shameless political opportunism on all sides.
They are a popular movement of no clear political view or ideology; they take their name from the yellow vests that drivers in France are required to keep in their cars, to be worn in the case of a breakdown.
Their ostensible ignition point was a rise in fuel taxes, engineered by the government of President Emmanuel Macron, for, as it happens, impeccably green reasons: the plan was to wean France off fossil fuels by making them more expensive, and to encourage the use of renewable sources.
Last week, the protests reached Paris, where the gilets jaunes (or, by most reports, members of the largely rural group aided by extreme leftists and even more extreme rightists, both prepped for street battle) rioted on the Champs-Élysées, vandalized the Arc de Triomphe, and broke into stores, creating a crisis of a kind that has brought down or impeded the progress of French governments continuously throughout the postwar era.
The dynamic of violent street demonstration resulting in government recoil (on Tuesday, Macron’s government folded and suspended the fuel-tax hikes) is not only familiar in France; it is pretty much the most predictable cycle of its modern political life.
As “France’s Long Reconstruction,” a fine new history of the Fifth Republic by Herrick Chapman, a professor of history at New York University, makes explicit and highly explanatory, the Constitution of the Fifth Republic, as established by Charles de Gaulle in 1958, so centralized power in the Presidential palace that it had the unintended cyclical effect of making street protests and manifestations the only dynamic alternative to government policy.
It is an irony of the movement that it takes its name from a rule of centralized government: those mandatory yellow vests.
In the years before the Fifth Republic, one thinks of the right-wing riots that vandalized Paris as part of the mob warfare between the extreme left and the extreme right, which scarred France in the nineteen-thirties, or of the once hugely popular movement of the poujadistes, in the nineteen-fifties, a movement of small shopkeepers who, like the gilets jaunes, felt afflicted and ignored by the central government and had a similarly contradictory politics.
The original article.
If rock groups are businesses, businesses are getting more like rock bands.
The band had no designated frontman; all four Beatles were capable of singing lead. Though Lennon was the de facto leader in the early years, one of the band’s innovations was not to call itself “Johnny and the Beatles”, as was conventional at the time.
“You can’t do this deal where you’re giving everybody in the band an equal cut of money,” Roberts said, “because there’s going to be a big problem at some point. You’re going to feel really bitter and used. I’ve been down this road with bands before. It explodes, and everyone walks away.” Petty listened.
“Bands start off as exercises in us-against-the-world idealism, in which success lifts all to equal heights. The ones that don’t break up before they reach a recording studio are the ones that adjust their philosophy in order to become a business. A redistribution of power is necessary.” Petty’s decision was hard for everyone, he said.
On stage, Bruce Springsteen celebrates the ties that bind him to the E Street Band, but in his autobiography he is matter-of-fact: “Democracy in a band…is often a ticking time bomb. If I was going to carry the workload and responsibility, I might as well assume the power. I’ve always believed that the E Street Band’s continued existence is partially due to the fact that there was little to no role confusion among its members.” By which he means, there is no confusion over who’s the boss.
As Tony Fletcher explains in “Perfect Circle”, his biography of the band, each member received an equal share of publishing royalties, regardless of who contributed what to each song.
Jagger embarked on a solo career and seemed to be seeking an escape from the band, possibly because he was tired of dealing with Richards, who had shaken off a debilitating dependence on heroin only to replace it with one on alcohol.
In the band’s early days, guitarist Brian Jones was the band’s chief creative force, but he made for an unpredictable, unpleasant colleague.
The original article.