Summary of “These may be the world’s first images of dogs – and they’re wearing leashes”

The engravings likely date back more than 8000 years, making them the earliest known depictions of dogs, a new study reveals.
The lines connecting some of the dogs to a human hunter are probably leashes, suggesting that humans mastered the art of training and controlling dogs thousands of years earlier than previously thought.
“It’s truly astounding stuff,” says Melinda Zeder, an archaeozoologist at the Smithsonian Institution National Museum of Natural History in Washington, D.C. “It’s the only real demonstration we have of humans using early dogs to hunt.” But she cautions that more work will be needed to confirm both the age and meaning of the depictions.
The researchers couldn’t directly date the images, but based on the sequence of carving, the weathering of the rock, and the timing of the switch to pastoralism, “the dog art is at least 8000 to 9000 years old,” says archaeologist Maria Guagnin, the study’s lead author.
Study co-author Angela Perri has studied the bones of ancient dogs around the world and has argued that early dogs were critical in human hunting.
The dogs in the engravings look a lot like today’s Canaan dog, a largely feral breed that roams the deserts of the Middle East, says Perri.
The Arabian hunters may have used the leashes to keep valuable scent dogs close and protected, she says, or to train new dogs.
At Jubbah, the images show smaller groups of dogs that may have ambushed prey at watering holes.

The original article.

Summary of “Why we should bury the idea that human rituals are unique”

For anthropologists, mortuary rituals carry an outsize importance in tracing the emergence of human uniqueness – especially the capacity to think symbolically.
Discoveries of apparent Neanderthal burials soon prompted tough questions about the conventional viewpoint, suggesting that mortuary rituals might not have been uniquely human after all.
The collapse of the flower-burial hypothesis caused scientists to be more cautious when asserting human beliefs based on limited fossil evidence – and perhaps on wish-fulfilment.
If Neanderthals and naledi are accepted into the club of hominins who practise mortuary rituals, it would not be the first time that a supposedly uniquely human behaviour turned out to be shared with other species.
This idea dovetails with a recognition of the permeable boundary between human mortuary behaviour and the behaviours of other hominins or even more distantly related species.
If Homo naledi truly did engage in symbolic behaviour, that would raise an even more sweeping question: should scientists disregard the idea of human uniqueness altogether? Some scholars have been making that argument for decades, suggesting that searching for unique traits detracts from the more useful endeavour of pinpointing smaller transitions and recognising differences of degree rather than kind.
Recognising a mosaic aspect to human behaviour has the potential to alter that perspective.

The original article.

Summary of “Most scientists now reject the idea that the first Americans came by land”

A group of prominent anthropologists has reviewed the scientific literature and declared in Science magazine that the “Clovis first” hypothesis of the peopling of the Americas is dead. For decades, students were taught that the first people in the Americas were a group called the Clovis, who walked over the Bering land bridge about 13,500 years ago.
Evidence has been piling up since the 1980s of human campsites in North and South America that date back much earlier than 13,500 years.
In the 2000s, overwhelming evidence suggested that a pre-Clovis group had come to the Americas before there was an ice-free passage connecting Beringia to the Americas.
As Smithsonian anthropologist Torben C. Rick and his colleagues put it, “In a dramatic intellectual turnabout, most archaeologists and other scholars now believe that the earliest Americans followed Pacific Rim shorelines from northeast Asia to Beringia and the Americas.”
Now scholars are supporting the “kelp highway hypothesis,” which holds that people reached the Americas when glaciers withdrew from the coasts of the Pacific Northwest 17,000 years ago, creating “a possible dispersal corridor rich in aquatic and terrestrial resources.” Humans were able to boat and hike into the Americas along the coast due to the food-rich ecosystem provided by coastal kelp forests, which attracted fish, crustaceans, and more.
No one disputes that the Clovis peoples came through Beringia and the ice-free corridor.
Despite all the evidence for human habitation, ranging from tools and butchered animal bones to the remains of campfires, scientists are still uncertain who the pre-Clovis peoples were.
To the best of our knowledge, the kelp highway brought humans to the Americas.

The original article.

Summary of “You Will Lose Your Job to a Robot – and Sooner Than You Think” – Mother Jones

What do erasers have to do with the fact that we’re all going to be out of a job in a few decades? Consider: Last October, an Uber trucking subsidiary named Otto delivered 2,000 cases of Budweiser 120 miles from Fort Collins, Colorado, to Colorado Springs – without a driver at the wheel.
No matter what job you name, robots will be able to do it.
James Surowiecki also points out that job churn is low, average job tenure hasn’t changed much in decades, and wages are rising – though he admits that wage increases are “meager by historical standards.”
In the even nearer term, the World Economic Forum predicts that the rich world will lose 5 million jobs to robots by 2020, while a group of AI experts, writing in Scientific American, figures that 40 percent of the 500 biggest companies will vanish within a decade.
The time will probably come when we actively want to do just the opposite: provide an income large enough to motivate people to leave the workforce and let robots do the job better.
As large-scale job losses from automation start to become real, we should expect the idea to spread rapidly.
A robot tax could still have value as a way of modestly slowing down job losses.
Eric Schmidt, chairman of Google’s parent company, believes that AI is coming faster than we think, and that we should provide jobs to everyone during the transition.

The original article.

Summary of “The Difference Between Universal and Prepared Horror”

The most effective monsters of horror fiction mirror ancestral dangers to exploit evolved human fears.
Horror artists typically want to target the greatest possible audience and that means targeting the most common fears.
As the writer Thomas F. Monteleone has observed, “a horror writer has to have an unconscious sense or knowledge of what’s going to be a universal ‘trigger.’” All common fears can be located within a few biologically constrained categories or domains.
The most basic, universal, genetically hardwired fears are the fears of sudden, loud noises and of looming objects – those are the fears that we aim to evoke when we hide behind a door, waiting to spook an unsuspecting friend by jumping at them with a roar.
Horror video games and horror films, in particular, exploit this innate fear when they resort to “jump scares,” such as having a monster jump out of a closet without warning to frighten the viewer or player.
Prepared fears are innate in the sense that they are genetically transmitted but require environmental input for their activation.
While different environments feature different threats, some threats have been evolutionarily persistent enough, and serious enough, to have left an imprint on our genome as prepared fears, as potentialities that may be activated during an individual’s life in response to personal or vicarious experience, or culturally transmitted information.
The 2012 ChildFund Alliance report “Small Voices, Big Dreams,” which quantified children’s fears and dreams based on responses from 5,100 individuals from 44 countries, found that the most common fear among children across developing and developed countries is the fear of “dangerous animals and insects.” Even children growing up in industrialized, urban environments free of nonhuman predators easily acquire fear of dangerous animals because such prepared learning is part and parcel of human nature.

The original article.

Summary of “What We’re Missing About the Sixth Extinction”

Myriad species, thanks in large part to humans who inadvertently transport them around the world, have blossomed in new regions, mated with like species and formed new hybrids that have themselves gone forth and prospered.
“Virtually all countries and islands in the world have experienced substantial increases in the numbers of species that can be found in and on them,” writes Thomas in his new book, Inheritors of the Earth: How Nature Is Thriving in an Age of Extinction.
If you take mainland North America north of the Mexican border, for which there’s good data, and mainland Europe, we know of more hybrid plant species in both of these regions that have come into existence over the last 300 years than we know of plant species that have become completely extinct.
People seem to think it’ll be 10 or so percent of the world’s species that might be endangered, and many of these are mountain species with nowhere to run.
If extinction of whole species does not convince skeptics, I’d just say, “New Orleans and London, not to mention the farmers of Bangladesh.” They are going to be in deep trouble once the sea levels rise seriously.
You tell us this loss of around 10 percent of species “falls far below the level of extinction required to match one of the previous ‘Big Five’ mass extinctions in the geological past.” Is the Sixth Extinction overrated?
Well, I agree there has been a huge acceleration of the extinction rate in the human epoch, and that if we keep up the current rate of extinction for the next 10,000 years, we end up with a mass extinction in which 75 percent of species go extinct.
You spotted the fatal flaw! But if I’m not allowed to use the word “natural,” and I accept it has no special meaning, I would just have to say that the Earth’s system is simply what it is: It now contains the human species as well as other species.

The original article.

Summary of “The human microbiome, explained: How bad science and junky diets gave rise to serious disease”

Research has so far shown that a poor or altered human microbiome is associated with irritable bowel syndrome, chronic diarrhea, stomach pains, gas, bloating, food allergies and more.
Some are researching the connections between the human microbiome and autism, Alzheimer’s disease, depression, our immune system or our metabolism.
“It’s everything. Everything has been somewhat directly or indirectly linked to the human microbiome – even cancers,” Dr. Kwang Sik Kim, a professor of pediatrics at the Johns Hopkins University School of Medicine, said in a phone interview.
There are an estimated 5 million different genes in the average human microbiome, and our experiences as infants determine which bacteria get introduced to our guts.
“You’re less likely to develop allergies, and less likely to develop inflammatory bowel disease, when you grow up in an environment where you have dirtier surroundings. Your microbiome has seen more pathogens and is trained to be healthier as the individual grows up.”
Much of the science on the human microbiome is still preliminary, partly because the field really only “caught on within the last five years,” Hultin said.
Cookbooks have wholesale packaged the idea into a sellable “microbiome diet,” while some companies advertise “microbiome purification shakes” or 30-day diet plans.
Despite the probiotic supplement industry’s estimated $36.6 billion value, there isn’t enough hard evidence showing that such supplements can dramatically improve the human microbiome, according to Hultin, Kao and Kim.

The original article.

Summary of “Having children is not life-affirming: it’s immoral”

Life is simply much worse than most people think, and there are powerful drives to affirm life even when life is terrible.
A robust life of 90 years is much closer to 10 or 20 years than it is to a life of 10,000 or 20,000 years.
If we are interested in the second question, we cannot answer it simply by noting that human life is as good as human life is, which is what employing human standards involves.
If we are to say that somebody’s life is not worth continuing, the bad things in life do need to be sufficiently bad to override the interest in not dying.
So a life must be of worse quality to be not worth continuing than to be not worth starting.
The difference between a life not worth starting and a life not worth continuing partly explains why anti-natalism does not imply either suicide or murder.
If the quality of one’s life is still not bad enough to override one’s interest in not dying, then one’s life is still worth continuing, even though the current and future harms are sufficient to make it the case that one’s life was not worth starting.
The confusion between starting a life and continuing a life is not the only way in which life-affirmation clouds people’s ability to see that life contains more bad than good.

The original article.

Summary of “Wolf Puppies Are Adorable. Then Comes the Call of the Wild.”

Timing is very important, because both wolves and dogs go through a critical period as puppies when they explore the world and learn who their friends and family are.
Dr. Lord thinks this shift in development, allowing dogs to use all their senses, might be key to their greater ability to connect with human beings.
Perhaps with more senses in action, they are more able to generalize from tolerating individual humans with a specific scent to tolerating humans in general with a scent, sight and sound profile.
One big question, said Dr. Karlsson: “How did a wolf that was living in the forest become a dog that was living in our homes?”
Learning about dogs could provide insights into some human conditions in which social interaction is affected, like autism, Williams syndrome or schizophrenia.
The humans were still groggy from a night with little sleep.
Wolf mothers prompt their pups to urinate and defecate by licking their abdomens.
The human handlers massaged the pups for the same reason, but often the urination was unpredictable, so the main subject of conversation when I arrived was wolf pup pee.

The original article.

Summary of “Artificial Intelligence: Douglas Hofstadter on why AI is far from intelligent”

Since winning a Pulitzer Prize in nonfiction for his 1979 book Gödel, Escher, Bach: An Eternal Golden Braid, Hofstadter, 72, has been quietly thinking about thinking, and how we might get computers to do it.
In the early days of AI research, in the 1950s and ’60s, the goal was to create computers that think and learn the way humans do, by modeling our ability to intuitively understand the world around us.
Thinking turned out to be more complicated than something that could fit in a 1950s computer program.
What did eventually yield results was giving up on thinking altogether, focusing computers instead on highly specific tasks and giving them vast amounts of relevant data – resulting in the AI boom we see today.
Through all of these shifts in AI, Hofstadter, a professor of cognitive science and comparative literature at Indiana University, has been trying to understand how thinking works.
Douglas Hofstadter: When I think about translation, I think about creating a text in a second language that is as good as the text was in the original language.
QZ: Do you think it’s possible for a computer to do literary or elegant translation at the level of a human without this kind of thinking?
DH: If you ask me in principle if it’s possible for a computing hardware to do something like thinking, I would say absolutely it’s possible.

The original article.