Summary of “No, You Don’t Really Look Like That”

Over weeks of taking photos with the device, I realized that the camera had crossed a threshold between photograph and fauxtograph.
Now, under the hood, phone cameras pull information from multiple image inputs into one picture output, along with drawing on neural networks trained to understand the scenes they’re being pointed at.
It’s not just that the camera knows there’s a face and where the eyes are.
Google developed new techniques for combining multiple inferior images into one superior image.
In the right hands, an HDR photo could create a scene that is much more like what our eyes see than what most cameras normally produce.
They’ve incorporated HDR into their default cameras, drawing from a burst of images.
As with the skin-smoothing, it no longer really matters if that’s what our eyes would see.
Since the 19th century, cameras have been able to capture images at different speeds, wavelengths, and magnifications, which reveal previously hidden worlds.
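The burst-merging idea can be sketched in a few lines – a toy exposure-fusion weighting, assuming pre-aligned frames as NumPy arrays, and emphatically not Google’s actual HDR+ pipeline:

```python
import numpy as np

def fuse_burst(frames, sigma=0.2):
    """Merge aligned burst frames into one image.

    Each output pixel is a weighted average across frames, where the
    weights favor mid-range ("well-exposed") values -- a crude stand-in
    for the burst merging that real HDR pipelines perform.
    frames: list of float arrays in [0, 1], all the same shape.
    """
    stack = np.stack(frames)                       # (n_frames, H, W)
    # Gaussian well-exposedness weight, peaked at 0.5 (mid-gray).
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)

# Underexposed and overexposed captures of the same tiny scene:
dark = np.array([[0.05, 0.40]])
bright = np.array([[0.60, 0.95]])
fused = fuse_burst([dark, bright])
```

Each output pixel leans toward whichever frame exposed it best: shadow detail comes from the brighter capture, highlight detail from the darker one.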

The original article.

Summary of “7 Books That Will Change How You See The World”

My favorite moments reading non-fiction are when a book bitchslaps my brain and reconfigures my entire understanding of reality and my place within it.
I never know what the hell to say because so many of the books that have influenced me have done so not because they’re so good or brilliant, but mostly because they addressed the issues I was going through at the time I was reading them.
So instead of divulging what my favorite books are, I’ll leave you with something better: seven of the most mind-fucking, reality-reshaping, Keanu Reeves “Whoa” inspiring books that I’ve ever read. In no particular order.
Bonus Points For: Being perhaps the wittiest and best-written psychology book you’ll ever read.
If This Book Could Be Summarized in An Image, That Image Would Be: A dog named “Humanity” endlessly chasing its own tail with a big slobbery smile on its face.
You want to read a book that explains happiness without mythologizing it or worshipping it.
If he’s trolling the world with his writing style, he’s doing a good job, because some passages are almost impossible to get through without either rolling your eyes at him or shoving the book through a paper shredder.
You want to have your conception of “Success” and “Progress” completely flipped on its head. You want to read a book that, while consisting of maybe 60% bullshit, will still have you thinking about the ideas years later.
Bonus Points For: It was apparently one of President Eisenhower’s favorite books.

The original article.

Summary of “Instagram’s Facetune and the endless pursuit of physical perfection”

Facetune is the ultimate culmination of those two forces: A cheap, easy-to-use Photoshop alternative in the pocket of anyone with a smartphone, allowing them to smooth, slim, or skew any part of their face or body in an instant.
There have been ripple effects, too: In the more than five years that Facetune has existed, it has helped give rise to an aesthetic sameness known as “Instagram Face” and produced an entire cottage industry devoted to exposing the differences between our constructed faces and our real ones.
More than any of that, Facetune has been at the center of conversations around the discrepancies between our crafted online selves and the messy realities of life inside of a body.
Unlike Adobe Photoshop, with its endless array of confusing symbols that can require a full course to understand, Facetune offers just a handful of the most relevant tools for skin-smoothing and reshaping.
On Facetune 2, those features are even easier to use: There are tools that can instantly contort the subject’s expression into one that is “Fierce” or “Cute,” which creates a more crooked smile.
Of course, there’s the dead giveaway to anyone trained in catching Facetune offenses: The conspicuous curves in the vertical lines of the fence behind me, betraying the fact that I had committed the cardinal sin of falsely narrowing my body.
A scroll through many reality stars’ Instagram pages will reveal many perfectly posed and heavily Facetuned images, as if the melodrama of their televised lives were but a distant memory.
A successful niche of influencers like Emma Chamberlain and Joana Ceddia have now forgone Facetune and top-down latte shots in favor of goofy lo-fi realness and “Relatability.” The latest cool Instagram filters don’t give users doe eyes and cheekbones; they make them look like glossy robots and surrealist art.

The original article.

Summary of “Walkman turns 40 today: How listening to music changed over the years”

Walkman turns 40 today: How listening to music changed over the years – Business Insider
The first CD player was considered too expensive for the average consumer – it cost $1,000, equivalent to about $2,600 today.
The product took time to make an impression on the public.

The original article.

Summary of “Mona Lisa frown: Machine learning brings old paintings and photos to life – TechCrunch”

It’s a new method of applying facial landmarks on a source face – any talking head will do – to the facial data of a target face, making the target face do what the source face does.
We can already make a face in one video reflect the face in another in terms of what the person is saying or where they’re looking.
Most of these models require a considerable amount of data, for instance a minute or two of video to analyze.
The new paper by Samsung’s Moscow-based researchers shows that using only a single image of a person’s face, a video can be generated of that face turning, speaking, and making ordinary expressions – with convincing, though far from flawless, fidelity.
It does this by frontloading the facial landmark identification process with a huge amount of data, making the model highly efficient at finding the parts of the target face that correspond to the source.
It’s also using what’s called a Generative Adversarial Network, which essentially pits two models against one another, one trying to fool the other into thinking what it creates is “Real.” By these means the results meet a certain level of realism set by the creators – the “Discriminator” model has to be, say, 90 percent sure this is a human face for the process to continue.
Some of the generated videos, which attempt to replicate a person whose image was taken from cable news, also recreate the news ticker shown at the bottom of the image, filling it with gibberish.
Note that this only works on the face and upper torso – you couldn’t make the Mona Lisa snap her fingers or dance.
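The landmark-transfer step at the heart of this – finding corresponding points on two faces and fitting a mapping between them – can be illustrated with a toy least-squares affine fit. The landmark coordinates below are made up for illustration; the paper’s actual system learns this with neural networks:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform taking src landmarks to dst.

    src, dst: (n, 2) arrays of matching landmark coordinates.
    Returns a function that maps (n, 2) points through the fit.
    """
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])        # homogeneous coords
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2) transform
    return lambda pts: np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Hypothetical landmarks: the "target" face is the source,
# scaled up and shifted.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src * 2.0 + np.array([5.0, -3.0])
warp = fit_affine(src, dst)
moved = warp(src)  # source landmarks land on the target's positions
```

Once such a correspondence exists, motion measured on the source face can be replayed onto the target – the generative network then handles the hard part of rendering realistic pixels around those points.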

The original article.

Summary of “Einstein v Newton: the final battle during a total eclipse”

‘In journeying to observe a total eclipse of the Sun, the astronomer quits the usually staid course of his work and indulges in a heavy gamble with fortune,’ wrote the young Eddington.
The first step for Eddington and his fellow physicist, the Astronomer Royal Frank Watson Dyson, was simply to figure out where and when the eclipse would be visible.
Eddington would go to Príncipe, accompanied by Edwin T Cottingham, a clockmaker who had worked for years with both Dyson and Eddington maintaining the timepieces at their observatories.
Eddington declared that there were three possibilities: no deflection; 1.75 arc-seconds, the Einstein prediction; or 0.87 arc-seconds, which would support Newtonian gravity and challenge the ideas of Einstein.
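Those two non-zero figures come from the light-bending formula for a ray grazing the Sun – δ = 4GM/(c²R) under general relativity, with Newtonian-style arguments giving exactly half. Plugging in standard solar constants as a back-of-the-envelope check (not Eddington’s actual procedure):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
c = 2.998e8        # speed of light, m/s
R_sun = 6.957e8    # solar radius, m

ARCSEC_PER_RAD = 180 * 3600 / math.pi

# Einstein's prediction: deflection of a light ray grazing the solar limb.
einstein = 4 * G * M_sun / (c**2 * R_sun) * ARCSEC_PER_RAD
newton = einstein / 2  # the "half deflection" from Newtonian arguments

print(round(einstein, 2), round(newton, 2))  # prints: 1.75 0.88
```

The tiny size of these angles – less than a thousandth of the Moon’s apparent diameter – is why the measurement demanded months of careful work.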
Bringing the world around to his conclusion that Einstein was right would take Eddington months of tedious measurement and calculation.
The explosion of interest finally made it possible for Eddington and Einstein to write directly to each other.
‘All England has been talking about your theory; it is the best possible thing that could have happened for scientific relations between England and Germany,’ Eddington wrote to Einstein later that year.
The eclipse expedition became a symbol of German-British solidarity because Eddington chose to craft it that way.

The original article.

Summary of “Brain scans reveal a ‘pokémon region’ in adults who played as kids”

By scanning the brains of adults who played Pokémon as kids, researchers learned that this group of people have a brain region that responds more to the cartoon characters than to other pictures.
For the study, published today in the journal Nature Human Behaviour, researchers recruited 11 adults who were “Experienced” Pokémon players – meaning they began playing between the ages of five and eight, continued for a while, and then played again as adults – and 11 novices.
In experienced players, a specific region responded more to the pokémon than to these other images.
We already know that the brain has cell clusters that respond to certain images, and there’s even one for recognizing Jennifer Aniston.
What predicts which part of the brain will respond? Does the brain categorize images based on how animated or still they are? Is it based on how round or linear an object is?
The usual way to investigate this is to teach children to recognize a new visual stimulus and then see which brain region reacts.
The results support a theory called “Eccentricity bias,” which suggests that the size of the images we’re looking at, and whether we’re looking at them with central or peripheral vision, will predict which area of the brain responds.
He’s also done scans of kids looking at pokémon, and he says that similar methods could be used when it comes to sound.

The original article.

Summary of “Artificial intelligence is helping old video games look like new”

One of the more unexpected applications has been in the world of video game mods.
Fans have discovered that machine learning is the perfect tool to improve the graphics of classic games.
As a consequence, there’s been an explosion of new graphics for old games over the past six months or so.
The range of titles is impressive, spanning the decades from early SNES games like Mario Kart and F-Zero, which were originally released in the 1990s, to more recent fare like 2010’s Mass Effect 2.
If you want to upscale a 50 x 50-pixel image to double its size, for example, a traditional algorithm just inserts new pixels between the existing ones, selecting each new pixel’s color based on an average of its neighbors.
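That neighbor-averaging can be shown in miniature with a single row of pixels – a toy linear upscaler, not any particular engine’s code:

```python
import numpy as np

def double_linear(row):
    """Upscale a 1D row of pixel values by inserting, between every
    adjacent pair, a new pixel equal to the pair's average -- the
    classic non-AI interpolation described above."""
    row = np.asarray(row, dtype=float)
    mids = (row[:-1] + row[1:]) / 2   # averaged in-between pixels
    out = np.empty(row.size + mids.size)
    out[0::2] = row                   # originals keep their values
    out[1::2] = mids                  # new pixels slot between them
    return out

pixels = np.array([0.0, 1.0, 0.5])
upscaled = double_linear(pixels)      # originals at even indices,
                                      # averages in between
```

Because the new pixels carry no information the originals didn’t, the result looks smooth but blurry – which is exactly the gap the neural-network upscalers try to fill by inventing plausible detail instead.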
Replaying the favorite video games of one’s youth can be a surprisingly bittersweet experience: the memories are intact, but the games themselves seem strangely ugly and raw.
“Your mind finished the job and filled in the gaps [but] modern displays show these old games in their un-filtered roughness.”
Luckily, these early games are also the perfect target for AI upscaling.

The original article.

Summary of “Katie Bouman and the Black Hole That Made Her Famous”

Memes and videos across Reddit, Twitter, YouTube, and other platforms called Bouman a fraud and “Debunked” her contributions to the discovery.
The reaction to Bouman seems specific to this particular cultural moment, in which divergent views of gender, media, and science, usually flowing in their own little streams, smash together to form a massive riptide.
Read: An extraordinary image of the black hole at a galaxy’s heart.
The reality of the person at the center – the Katie Bouman who exists outside these few pictures – can get lost.
In one, Bouman is a hero; in the other, she’s a villain.
Bouman, the post said, “Led the creation of a new algorithm to produce the first-ever image of a black hole.”
Bouman’s new fans wanted to rescue the young computer scientist from the pantheon of unsung women in science – including Rosalind Franklin, Vera Rubin, and Henrietta Leavitt, to name just a few – whose contributions went unrecognized in their moment and were honored only many years later, sometimes long after their male colleagues had received awards for the same work.
In another viral tweet, the MIT account juxtaposed a picture of Bouman with the stacks of hard drives bearing the data that spotted the black hole alongside one of Margaret Hamilton, the MIT computer scientist who helped write the software for the Apollo program.

The original article.