Since the Cambridge Analytica scandal erupted in March, Facebook has been attempting to make a moral stand for your privacy, distancing itself from the unscrupulous practices of the U.K. political consultancy.
“Protecting people’s information is at the heart of everything we do,” wrote Paul Grewal, Facebook’s deputy general counsel, just a few weeks before founder and CEO Mark Zuckerberg hit Capitol Hill to make similar reassurances, telling lawmakers, “Across the board, we have a responsibility to not just build tools, but to make sure those tools are used for good.”
But in reality, a confidential Facebook document reviewed by The Intercept shows that the two companies are far more similar than the social network would like you to believe.
The recent document, marked “confidential,” outlines a new advertising service that expands how the social network sells corporations access to its users and their lives: Instead of merely offering advertisers the ability to target people based on demographics and consumer preferences, Facebook now offers the ability to target them based on how they will behave, what they will buy, and what they will think.
These capabilities are the fruits of a self-improving, artificial intelligence-powered prediction engine, first unveiled by Facebook in 2016 and dubbed “FBLearner Flow.”
One slide in the document touts Facebook’s ability to “predict future behavior,” allowing companies to target people on the basis of decisions they haven’t even made yet. This would, potentially, give third parties the opportunity to alter a consumer’s anticipated course. Here, Facebook explains how it can comb through its entire user base of over 2 billion individuals and produce millions of people who are “at risk” of jumping ship from one brand to a competitor. These individuals could then be targeted aggressively with advertising that could pre-empt and change their decision entirely — something Facebook calls “improved marketing efficiency.”