Summary of “A Reality Check for IBM’s AI Ambitions”

“Watson is a joke,” Chamath Palihapitiya, an influential tech investor who founded the VC firm Social Capital, said on CNBC in May. However, most of the criticism of Watson, even from M.D. Anderson, doesn’t seem rooted in any particular flaw in the technology.
It still seems likely that Watson Health will be a leader in applying AI to health care’s woes.
In 2015, the Washington Post quoted an IBM Watson manager describing how Watson was busy establishing a “collective intelligence model between machine and man.” The Post said that the computer system was “training alongside doctors to do what they can’t.”
To really help doctors get better outcomes for patients, Watson will need to find correlations between what it reads in health records and what Tang calls “all the social determinants of health.” Those factors include whether patients are drug-free, avoiding the wrong foods, breathing clean air, and on and on.
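As a loose illustration of the kind of record-to-outcome correlation hunting described here, consider the minimal sketch below; the column names and data are invented, and this is not IBM’s actual pipeline:

```python
# Minimal sketch: correlate hypothetical social-determinant flags with a
# hypothetical outcome. Invented data, not IBM's pipeline.
import pandas as pd

records = pd.DataFrame({
    "smoker":           [1, 0, 0, 1, 1, 0, 0, 1],
    "poor_air_quality": [1, 1, 0, 0, 1, 0, 1, 0],
    "readmitted":       [1, 0, 0, 1, 1, 0, 1, 0],
})

# Correlate each social determinant with the outcome of interest.
for factor in ["smoker", "poor_air_quality"]:
    r = records[factor].corr(records["readmitted"])
    print(f"{factor}: r = {r:.2f}")
```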
Even M.D. Anderson, despite the fate of the Watson project, is continuing a large program that began around the same time, focused on gathering 1,700 types of clinical data on every patient who walks through its doors.
Andy Futreal, the scientist who runs the program, says combining that patient information with research data will be crucial for the sorts of capabilities that systems like Watson could provide.
On the drug-discovery front, Watson Health is working with the Barrow Neurological Institute, where Watson helped find five genes linked to ALS that had never before been associated with the disease, and with the Ontario Brain Institute, where Watson identified 21 promising potential drug candidates.
Will Watson eventually make a difference in improving health outcomes and lowering costs? Probably, says Stephen Kraus, a partner at the VC firm Bessemer Venture Partners who focuses on health care and has invested in AI health-care startups.

The original article.

Summary of “How Analytics Are Used in the NFL”

SMARTER FOOTBALL WEEK: A series examining the cerebral side of the sport, including technology, analytics, how a brainy linebacker prepares and just what goes into a typical NFL play.
So here’s where you start: the reason no NFL team ignores analytics is that analytics has been part of football since Paul Brown came along.
How is that possible? Well, as Jaguars SVP of football technology and analytics Tony Khan, the son of owner Shahid Khan, explains it: “The adoption rate is far behind other sports.” Even so, more than three-quarters of NFL teams either employ a director of analytics or have a full-blown analytics department.
“So let’s say you’re playing Cincinnati, and you want to look at their tendencies when they’re in base personnel. You might wind up with 40 snaps out of 280. And then you’ll make a judgment. Well, of those 40, how many were on third down? How many came on second down?”
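This kind of situational filtering is easy to picture in code. Below is a hedged sketch using invented snap data and column names, not any team’s actual database:

```python
# Hypothetical snap-by-snap data; columns are illustrative only.
import pandas as pd

snaps = pd.DataFrame({
    "personnel": ["base", "nickel", "base", "dime", "base"],
    "down":      [1, 3, 2, 3, 3],
    "play_type": ["run", "pass", "run", "pass", "pass"],
})

# Filter to base personnel (e.g., 40 of 280 snaps), then split by down.
base = snaps[snaps["personnel"] == "base"]
print(base.groupby("down")["play_type"].value_counts())
```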
HOW PRO FOOTBALL FOCUS CAME TO BE: In 2015, Jenny Vrentas told the story of how an Englishman who never played the game abandoned a profitable business to run an NFL advanced stats website.
“There are very, very few examples of an NFL player who produced a lot of sacks that wasn’t able to run a 10 time around 1.6,” says Banner, who established an analytics department in Philly in 1995. (A “10 time” is a player’s 10-yard split, measured in seconds.)
Along those lines, ex-Browns GM Phil Savage used to send his scouts out for school visits after they were in-house for training camp with the warning: “Now, remember, you were just watching NFL players.”
As Kelly’s teams used it, individual profiles were built on players to provide coaches a roadmap for how hard guys were working, how far they could be pushed, and when they were at risk to suffer soft-tissue injuries.
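One widely used sports-science heuristic for the injury-risk piece is the acute:chronic workload ratio. The sketch below is a generic illustration of that metric, not necessarily what Kelly’s staff actually computed:

```python
# Acute:chronic workload ratio: last 7 days' average training load divided
# by last 28 days' average. A common heuristic, not Kelly's actual method.
def acwr(daily_loads):
    acute = sum(daily_loads[-7:]) / 7
    chronic = sum(daily_loads[-28:]) / 28
    return acute / chronic

loads = [400] * 21 + [400, 450, 600, 700, 750, 800, 820]  # 28 days of load
print(f"ACWR = {acwr(loads):.2f}")  # ratios well above ~1.3 often get flagged
```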

The original article.

Summary of “How to Integrate Data and Analytics into Every Part of Your Organization”

The stakes are high, with International Data Corporation estimating that global business investments in D&A will surpass $200 billion a year by 2020.
D&A should be the pulse of the organization, incorporated into all key decisions across sales, marketing, supply chain, customer experience, and other core functions.
What’s the best way to build effective D&A capabilities? Start by developing a strategy across the entire enterprise that includes a clear understanding of what you hope to accomplish and how success will be measured.
One of the major American sports leagues is a good example of an organization that is making the most of its D&A function, applying it to scheduling in order to cut expenses, for example by reducing the need for teams to fly from city to city for games on back-to-back nights.
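A toy version of the underlying scheduling objective might look like the following; the cities, dates, and cost function are invented for illustration:

```python
# Toy cost function: count away games in different cities on consecutive
# nights. A real scheduler would minimize a cost like this, among others.
from datetime import date

games = [  # (city, date) for one team
    ("Boston",  date(2017, 1, 10)),
    ("Chicago", date(2017, 1, 11)),  # back-to-back night in a new city
    ("Chicago", date(2017, 1, 13)),
]

def back_to_back_trips(schedule):
    return sum(
        1 for (c1, d1), (c2, d2) in zip(schedule, schedule[1:])
        if (d2 - d1).days == 1 and c1 != c2
    )

print(back_to_back_trips(games))  # 1
```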
Some organizations have D&A capabilities spread across functions, or rely on a few data scientists to provide insights.
In our experience, companies that build a D&A capability meeting their business needs have teams of data and software engineers who are skilled in the use of big data and data scientists who are wholly focused on a D&A initiative.
While structures vary, the team should be seamlessly integrated with the company’s existing providers and consumers of D&A, working closely with non-D&A colleagues who really understand both the business challenges and how the business works, to set and pursue realistic and relevant strategic goals.
In an age where data is created on a scale far beyond the human mind’s ability to process it, business leaders need D&A they can trust to inform their most important decisions – not just to reduce costs but also to achieve growth.

The original article.

Summary of “Five building blocks of a data-driven culture”

Businesses must pay for data collection and cleansing, hosting and maintenance, and the salaries of data engineers, data scientists, and analysts, and they carry the risk of a breach, among other costs.
To be data-driven requires an overarching data culture that combines a number of elements, including high-quality data, broad access, data literacy, and appropriate data-driven decision-making processes.
A large Fortune 100 financial conglomerate that hires data scientists from The Data Incubator’s fellowship is able to maintain a competitive edge in hiring against “sexy” Silicon Valley companies like Google, Facebook, and Uber, partly by granting its data science team broad access to data.
Access doesn’t stop at data scientists, either: one of the products our alumni have worked on is a summary dashboard that automatically gives customer service reps a visualization of the interaction history of the customer on the phone.
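A hedged sketch of that dashboard logic, with an invented record layout, might roll a caller’s history up like this:

```python
# Roll a customer's interaction history up into a quick summary for the
# rep taking the call. Record layout is hypothetical.
from collections import Counter

history = [
    {"customer_id": 42, "channel": "phone", "issue": "billing"},
    {"customer_id": 42, "channel": "email", "issue": "billing"},
    {"customer_id": 42, "channel": "phone", "issue": "outage"},
]

def summarize(customer_id, interactions):
    mine = [i for i in interactions if i["customer_id"] == customer_id]
    return {
        "total_contacts": len(mine),
        "top_issues": Counter(i["issue"] for i in mine).most_common(2),
    }

print(summarize(42, history))
```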
Data-driven organizations need to foster a culture in which individuals know what data are available (a good data dictionary, and generally seeing data used in day-to-day decision making, both help) and, further, feel comfortable requesting access if they have a genuine use case.
One of the quickest data wins for many of our clients comes simply from training people who are halfway to becoming data scientists on the other half.
Enterprises have begun to view data literacy training as necessary for everyone, and we’ve seen the demand for “Introductory data science for managers” courses double in the last 12 months.
Data analysts and data scientists also need to agree on the data dictionary and on what the data means.

The original article.

Summary of “The RNC Files: Inside the Largest US Voter Data Leak”

The data, which was stored in a publicly accessible cloud server owned by Republican data firm Deep Root Analytics, included 1.1 terabytes of entirely unsecured personal information compiled by DRA and at least two other Republican contractors, TargetPoint Consulting, Inc. and Data Trust.
Deep Root Analytics, TargetPoint, and Data Trust, all Republican data firms, were among the RNC-hired outfits working as the core of the Trump campaign’s 2016 general election data team, relied upon in the GOP effort to influence potential voters and accurately predict their behavior.
The RNC data repository would ultimately acquire roughly 9.5 billion data points regarding three out of every five Americans, scoring 198 million potential US voters on their likely political preferences using advanced algorithmic modeling across forty-eight different categories.
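For a sense of what “scoring voters across categories” can mean mechanically, here is a deliberately loose sketch; the features, labels, and choice of model are invented and are not DRA’s actual methodology:

```python
# Fit one category's support model and score every voter; repeating this
# per category would yield a 48-column score matrix. All data invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1, 0, 3], [0, 1, 1], [1, 1, 5], [0, 0, 0]])  # voter features
y = np.array([1, 0, 1, 0])  # observed support for one issue category

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X)[:, 1])  # probability of support per voter
```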
Deep Root Analytics, the Republican data firm which created and maintained the exposed data warehouse, was co-founded in 2013 by Alex Lundry, a Republican campaign data scientist who had served as data director in Mitt Romney’s unsuccessful 2012 presidential campaign.
Among these private consultancies was Data Trust, a Washington-based firm that claims to “continually develop a Republican and conservative data ecosystem through voter file collection, development, and enhancement.”
Data Trust, “the GOP’s exclusive data provider,” was created by the RNC in 2011, per National Review, “to shoulder the cost of building and managing the GOP’s voter file,” the party’s repository of detailed voter information, which is crucial to any successful electoral advertising and get-out-the-vote effort.
“In this case, the people doing most of the data modeling and voter scoring – especially for field operations, voter contact and television advertising – were from a collective of three data firms hired by the RNC: TargetPoint Consulting, Causeway Solutions, and Deep Root Analytics, which officially worked with the RNC through a new subsidiary called Needle Drop.”
The same factors that have resulted in thousands of previous data breaches (forgotten databases, third-party vendor risks, inappropriate permissions) combined with the scale of the RNC campaign operation to create a nearly unprecedented data breach.
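The “inappropriate permissions” failure mode is concrete enough to show in code. This is a hedged illustration of auditing an AWS S3 bucket, the kind of publicly accessible cloud storage at issue here, for world-readable grants; the bucket name is hypothetical:

```python
# Audit an S3 bucket's ACL for grants to the AllUsers group (public access).
# Bucket name is hypothetical.
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
acl = s3.get_bucket_acl(Bucket="example-voter-data")
public = [g for g in acl["Grants"] if g["Grantee"].get("URI") == ALL_USERS]
if public:
    print("WARNING: bucket is publicly accessible:", public)
```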

The original article.

Summary of “How artificial intelligence is revolutionizing customer management”

A few years back, cloud computing transformed customer management, giving every small and medium business access to unified data and communication platforms without the need to make heavy investments in IT infrastructure and staff.
This time around, the next revolution in the space is being driven by artificial intelligence algorithms that help businesses automate customer outreach and make optimal use of data.
AI-powered tools are now helping scale the efforts of sales teams by gleaning useful patterns from data, finding successful courses of action, and taking care of the bulk of the work in addressing customer needs and grievances.
Major providers of customer relationship management (CRM) solutions have started to invest in the added value of AI. Last year, Salesforce, the leader in the CRM industry, announced Einstein, an AI assistant that, once launched, will be present across its entire platform.
The AI Offers app merges data from the company cloud and the Oracle Data Cloud to extract contextual insights into individual customer behaviors and provide personalized offers as visitors browse websites powered by the Commerce Cloud.
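In spirit, the merge the app performs is a join between first-party and third-party customer data, followed by an offer decision. The toy below invents all names and fields and is not Oracle’s actual API:

```python
# Toy join of first-party and third-party customer data to pick an offer.
# All names and fields are invented; not Oracle's actual API.
company_cloud = {"cust-1": {"last_purchase": "running shoes"}}
data_cloud    = {"cust-1": {"segment": "marathon enthusiast"}}

def personalized_offer(cust_id):
    profile = {**company_cloud.get(cust_id, {}), **data_cloud.get(cust_id, {})}
    if profile.get("segment") == "marathon enthusiast":
        return "10% off race-day gear"
    return "generic welcome offer"

print(personalized_offer("cust-1"))
```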
Over time, as these solutions continue to process company and customer data, they become more efficient in their functionality.
Amelia is a virtual customer assistant that uses natural language processing to understand customer queries and provide answers based on data gathered from previous interactions and the company knowledge base.
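A generic stand-in for that retrieve-an-answer pattern is TF-IDF matching against the knowledge base; this is an illustrative sketch, not IPsoft’s actual NLP stack:

```python
# Match a customer query against a tiny knowledge base with TF-IDF.
# A stand-in for the general pattern, not Amelia's real pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

kb = [
    "To reset your password, use the account settings page.",
    "Refunds are processed within five business days.",
]
vec = TfidfVectorizer()
kb_matrix = vec.fit_transform(kb)

query = vec.transform(["how long do refunds take"])
best = cosine_similarity(query, kb_matrix).argmax()
print(kb[best])  # -> the refunds answer
```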
AI-powered customer support and management will surely result in more satisfied and less frustrated customers, and more productive sales teams.

The original article.

Summary of “The Best Advice We Overheard at First Round’s CTO Unconference”

To that end, the CTO Unconference downplayed keynotes and prized key nodes, including the CTOs, VPEs and Engineering Directors from companies like Apple, Google, Dropbox, Slack and Blue Apron.
40% of attendees were CTOs, 34% were VPEs and Directors of Engineering, and 26% were rising-star engineering leads, managers, CIOs and CPOs.
What emerged is advice you’d seldom hear elsewhere – guidance that we thought could help other company builders out there.
So in honor of the one person who listed Cool Runnings as a favorite movie, we hope the following advice mined from our Unconference helps you “feel the rhythm” and “feel the rhyme” in your companies.
For every engineering manager role, have the candidate sit with a member of the engineering team and play out a scenario 1:1.
Of course, using your engineers’ time like this may seem expensive, but it’s more costly to bring on an engineering leader who doesn’t jibe with your team.
A lot of companies use CultureAmp in particular to keep tabs on how engineers are feeling about their work.
If you run a mission-critical service where outages are fairly frequent, start measuring how long it takes your engineering teams to get things back up and running.
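That metric, mean time to recovery, is simple to compute from incident timestamps. Here is a small sketch with invented incidents:

```python
# Mean time to recovery (MTTR) from (start, resolved) incident timestamps.
# Incident data is invented.
from datetime import datetime

incidents = [
    (datetime(2017, 6, 1, 9, 0),  datetime(2017, 6, 1, 9, 45)),
    (datetime(2017, 6, 8, 14, 0), datetime(2017, 6, 8, 14, 20)),
]

minutes = [(end - start).total_seconds() / 60 for start, end in incidents]
print(f"MTTR: {sum(minutes) / len(minutes):.1f} minutes")  # track the trend
```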

The original article.

Summary of “3 Things Are Holding Back Your Analytics, and Technology Isn’t One of Them”

During the past decade, business analytics platforms have evolved from supporting IT and finance functions to enabling business users across the enterprise.
We’ve found three main obstacles to realizing analytics’ full value, and all of them are related to people, not technology: the organization’s structure, culture, and approach to problem solving.
At the same time, in one company the analytics group eschewed traditional business norms such as checking in with clients, presenting results graphically, explaining analytic results in the context of the business, and connecting complex findings to conventional wisdom.
At one extreme, we see analytics groups that create overly complex models with long lead times and limited adaptability to changing inputs.
In light of these obstacles, we believe an effective business analytics organization balances functional knowledge, business instinct, and data analysis, with an operating philosophy of adding complexity only when the additional insights justify it.
Knowing the languages of analytics and business, the embedded generalists also serve as liaisons between the independent data scientists and the business partners in their functions.
Heading up the nerve center? A chief analytics officer who brings the voice of analytics straight to the C-suite, where instinct tends to rule.
Modern business analytics has made it possible to extract new types of insights from vast volumes of data.

The original article.

Summary of “Why Apple is struggling to become an artificial-intelligence powerhouse”

SAN JOSE – In 2011, Apple became the first company to place artificial intelligence in the pockets of millions of consumers when it launched the voice assistant Siri on the iPhone.
Six years later, the technology giant is struggling to find its voice in AI. Analysts say the question of whether Apple can succeed in building great artificial-intelligence products is as fundamental to the company’s next decade as the iPhone was to its previous one.
“Artificial intelligence is not in Apple’s DNA,” said venture capitalist and Apple analyst Gene Munster.
At Apple’s annual developers conference Monday – the same event where Siri was introduced – the company’s efforts to become an AI powerhouse were on display as executives launched a new stand-alone smart speaker and touted features meant to boost Siri’s chops and to power AI applications on Apple products.
Sales of the iPhone propelled Apple to become the most valuable company in the world and still account for more than half of the company’s revenue, which was $215.6 billion in 2016.
In December, Apple presented and published its first academic paper on artificial intelligence at an industry conference.
Researchers at elite universities said in interviews that Apple was still not the top choice for their computer science graduates – Google, Facebook and Amazon were by far the top picks – but that the company was moving up in the rankings.
Last year, as Apple began to embrace artificial intelligence on the iPhone, the company undertook a large privacy protection project.

The original article.

Summary of “IBM wants to accelerate AI learning with new processor tech”

So why does it take so much computing power and time to teach AI? The problem is that modern neural networks like Google’s DeepMind or IBM Watson must perform billions of tasks in parallel.
That requires numerous CPU memory calls, which quickly adds up over billions of cycles.
The researchers debated using new storage tech like resistive RAM that can permanently store data with DRAM-like speeds.
They eventually came up with the idea for a new type of chip called a resistive processing unit (RPU) that puts large amounts of resistive RAM directly onto a CPU.
Such chips could fetch the data as quickly as they can process it, dramatically decreasing neural network training times and power required.
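Some back-of-envelope arithmetic shows why moving the memory on-chip matters; all of the numbers below are illustrative, not IBM’s published RPU figures:

```python
# Rough estimate of time spent just moving weights during training,
# off-chip DRAM vs. on-chip memory. All numbers are illustrative.
weights = 100e6            # parameters in a mid-sized network
bytes_per_update = 8       # read + write, 4 bytes each
updates = 1e6              # training steps

traffic = weights * bytes_per_update * updates  # total bytes moved
offchip_bw = 100e9         # ~100 GB/s to external DRAM
onchip_bw = 10e12          # ~10 TB/s next to the logic

print(f"off-chip: {traffic / offchip_bw / 3600:.1f} hours")
print(f"on-chip:  {traffic / onchip_bw / 3600:.2f} hours")
```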
The scientists believe it’s possible to build such chips using regular CMOS technology, but for now RPUs are still in the research phase.
The technology behind it, like resistive RAM, has yet to be commercialized.
Building chips with fast local memory is a logical idea that could dramatically speed up AI tasks like image processing, language mastery and large-scale data analysis – you know, all the things experts say we should be worried about.

The original article.