One day in late February of 2016, Mark Zuckerberg sent a memo to all of Facebook’s employees to address some troubling behavior in the ranks. His message pertained to some walls at the company’s Menlo Park headquarters where staffers are encouraged to scribble notes and signatures. On at least a couple of occasions, someone had crossed out the words “Black Lives Matter” and replaced them with “All Lives Matter.” Zuckerberg wanted whoever was responsible to cut it out.
“ ‘Black Lives Matter’ doesn’t mean other lives don’t,” he wrote. “We’ve never had rules around what people can write on our walls,” the memo went on. But “crossing out something means silencing speech, or that one person’s speech is more important than another’s.” The defacement, he said, was being investigated.
All around the country at about this time, debates about race and politics were becoming increasingly raw. Donald Trump had just won the South Carolina primary, lashed out at the Pope over immigration, and earned the enthusiastic support of David Duke. Hillary Clinton had just defeated Bernie Sanders in Nevada, only to have an activist from Black Lives Matter interrupt a speech of hers to protest racially charged statements she’d made two decades before. And on Facebook, a popular group called Blacktivist was gaining traction by blasting out messages like “American economy and power were built on forced migration and torture.”
So when Zuckerberg’s admonition circulated, a young contract employee named Benjamin Fearnow decided it might be newsworthy. He took a screenshot on his personal laptop and sent the image to a friend named Michael Nuñez, who worked at the tech-news site Gizmodo. Nuñez promptly published a brief story about Zuckerberg’s memo.
A week later, Fearnow came across something else he thought Nuñez might like to publish. In another internal communication, Facebook had invited its employees to submit potential questions to ask Zuckerberg at an all-hands meeting. One of the most up-voted questions that week was “What responsibility does Facebook have to help prevent President Trump in 2017?” Fearnow took another screenshot, this time with his phone.
Fearnow, a recent graduate of the Columbia Journalism School, worked in Facebook’s New York office on something called Trending Topics, a feed of popular news subjects that popped up when people opened Facebook. The feed was generated by an algorithm but moderated by a team of about 25 people with backgrounds in journalism. If the word “Trump” was trending, as it often was, they used their news judgment to identify which bit of news about the candidate was most important. If The Onion or a hoax site published a spoof that went viral, they had to keep that out. If something like a mass shooting happened, and Facebook’s algorithm was slow to pick up on it, they would inject a story about it into the feed.
Facebook prides itself on being a place where people love to work. But Fearnow and his team weren’t the happiest lot. They were contract employees hired through a company called BCforward, and every day was full of little reminders that they weren’t really part of Facebook. Plus, the young journalists knew their jobs were doomed from the start. Tech companies, for the most part, prefer to have as little as possible done by humans—because, it’s often said, they don’t scale. You can’t hire a billion of them, and they prove meddlesome in ways that algorithms don’t. They need bathroom breaks and health insurance, and the most annoying of them sometimes talk to the press. Eventually, everyone assumed, Facebook’s algorithms would be good enough to run the whole project, and the people on Fearnow’s team—who served partly to train those algorithms—would be expendable.
The day after Fearnow took that second screenshot was a Friday. When he woke up after sleeping in, he noticed that he had about 30 meeting notifications from Facebook on his phone. When he replied to say it was his day off, he recalls, he was nonetheless asked to be available in 10 minutes. Soon he was on a videoconference with three Facebook employees, including Sonya Ahuja, the company’s head of investigations. According to his recounting of the meeting, she asked him if he had been in touch with Nuñez. He denied that he had been. Then she told him that she had their messages on Gchat, which Fearnow had assumed weren’t accessible to Facebook. He was fired. “Please shut your laptop and don’t reopen it,” she instructed him.
That same day, Ahuja had another conversation with a second employee at Trending Topics named Ryan Villarreal. Several years before, he and Fearnow had shared an apartment with Nuñez. Villarreal said he hadn’t taken any screenshots, and he certainly hadn’t leaked them. But he had clicked “like” on the story about Black Lives Matter, and he was friends with Nuñez on Facebook. “Do you think leaks are bad?” Ahuja demanded to know, according to Villarreal. He was fired too. The last he heard from his employer was in a letter from BCforward. The company had given him $15 to cover expenses, and it wanted the money back.
The firing of Fearnow and Villarreal set the Trending Topics team on edge—and Nuñez kept digging for dirt. He soon published a story about the internal poll showing Facebookers’ interest in fending off Trump. Then, in early May, he published an article based on conversations with yet a third former Trending Topics employee, under the blaring headline “Former Facebook Workers: We Routinely Suppressed Conservative News.” The piece suggested that Facebook’s Trending team worked like a Fox News fever dream, with a bunch of biased curators “injecting” liberal stories and “blacklisting” conservative ones. Within a few hours the piece popped onto half a dozen highly trafficked tech and politics websites, including Drudge Report and Breitbart News.
The post went viral, but the ensuing battle over Trending Topics did more than just dominate a few news cycles. In ways that are only fully visible now, it set the stage for the most tumultuous two years of Facebook’s existence—triggering a chain of events that would distract and confuse the company while larger disasters began to engulf it.
This is the story of those two years, as they played out inside and around the company. WIRED spoke with 51 current or former Facebook employees for this article, many of whom did not want their names used, for reasons anyone familiar with the story of Fearnow and Villarreal would surely understand. (One current employee asked that a WIRED reporter turn off his phone so the company would have a harder time tracking whether it had been near the phones of anyone from Facebook.)
The stories varied, but most people told the same basic tale: of a company, and a CEO, whose techno-optimism has been crushed as they’ve learned the myriad ways their platform can be used for ill. Of an election that shocked Facebook, even as its fallout put the company under siege. Of a series of external threats, defensive internal calculations, and false starts that delayed Facebook’s reckoning with its impact on global affairs and its users’ minds. And—in the tale’s final chapters—of the company’s earnest attempt to redeem itself.
In that saga, Fearnow plays one of those obscure but crucial roles that history occasionally hands out. He’s the Franz Ferdinand of Facebook—or maybe he’s more like the archduke’s hapless young assassin. Either way, in the rolling disaster that has enveloped Facebook since early 2016, Fearnow’s leaks probably ought to go down as the screenshots heard round the world.
By now, the story of Facebook’s all-consuming growth is practically the creation myth of our information era. What began as a way to connect with your friends at Harvard became a way to connect with people at other elite schools, then at all schools, and then everywhere. After that, your Facebook login became a way to log on to other internet sites. Its Messenger app started competing with email and texting. It became the place where you told people you were safe after an earthquake. In some countries like the Philippines, it effectively is the internet.
The furious energy of this big bang emanated, in large part, from a brilliant and simple insight. Humans are social animals. But the internet is a cesspool. That scares people away from identifying themselves and putting personal details online. Solve that problem—make people feel safe to post—and they will share obsessively. Make the resulting database of privately shared information and personal connections available to advertisers, and that platform will become one of the most important media technologies of the early 21st century.
But as powerful as that original insight was, Facebook’s expansion has also been driven by sheer brawn. Zuckerberg has been a determined, even ruthless, steward of the company’s manifest destiny, with an uncanny knack for placing the right bets. In the company’s early days, “move fast and break things” wasn’t just a piece of advice to his developers; it was a philosophy that served to resolve countless delicate trade-offs—many of them involving user privacy—in ways that best favored the platform’s growth. And when it comes to competitors, Zuckerberg has been relentless in either acquiring or sinking any challengers that seem to have the wind at their backs.
Two years that forced the platform to change
by Blanca Myers
Facebook fires Benjamin Fearnow, a journalist-curator for the platform’s Trending Topics feed, after he leaks to Gizmodo.
Gizmodo reports that Trending Topics “routinely suppressed conservative news.” The story sends Facebook scrambling.
Rupert Murdoch tells Zuckerberg that Facebook is wreaking havoc on the news industry and threatens to cause trouble.
Facebook cuts loose all of its Trending Topics journalists, ceding authority over the feed to engineers in Seattle.
Donald Trump wins. Zuckerberg says it’s “pretty crazy” to think fake news on Facebook helped tip the election.
Facebook declares war on fake news, hires CNN alum Campbell Brown to shepherd relations with the publishing industry.
Facebook announces that a Russian group paid $100,000 for roughly 3,000 ads aimed at US voters.
Researcher Jonathan Albright reveals that posts from six Russian propaganda accounts were shared 340 million times.
Facebook general counsel Colin Stretch gets pummeled during congressional Intelligence Committee hearings.
Facebook begins announcing major changes, aimed at ensuring that time on the platform will be “time well spent.”
In fact, it was in besting just such a rival that Facebook came to dominate how we discover and consume news. Back in 2012, the most exciting social network for distributing news online wasn’t Facebook, it was Twitter. The latter’s 140-character posts accelerated the speed at which news could spread, allowing its influence in the news industry to grow much faster than Facebook’s. “Twitter was this massive, massive threat,” says a former Facebook executive heavily involved in the decisionmaking at the time.
So Zuckerberg pursued a strategy he has often deployed against competitors he cannot buy: He copied, then crushed. He adjusted Facebook’s News Feed to fully incorporate news (despite its name, the feed was originally tilted toward personal news) and adjusted the product so that it showed author bylines and headlines. Then Facebook’s emissaries fanned out to talk with journalists and explain how to best reach readers through the platform. By the end of 2013, Facebook had doubled its share of traffic to news sites and had started to push Twitter into a decline. By the middle of 2015, it had surpassed Google as the leader in referring readers to publisher sites and was now referring 13 times as many readers to news publishers as Twitter. That year, Facebook launched Instant Articles, offering publishers the chance to publish directly on the platform. Posts would load faster and look sharper if they agreed, but the publishers would give up an element of control over the content. The publishing industry, which had been reeling for years, largely assented. Facebook now effectively owned the news. “If you could reproduce Twitter inside of Facebook, why would you go to Twitter?” says the former executive. “What they are doing to Snapchat now, they did to Twitter back then.”
It appears that Facebook did not, however, carefully think through the implications of becoming the dominant force in the news industry. Everyone in management cared about quality and accuracy, and they had set up rules, for example, to eliminate pornography and protect copyright. But Facebook hired few journalists and spent little time discussing the big questions that bedevil the media industry. What is fair? What is a fact? How do you signal the difference between news, analysis, satire, and opinion? Facebook has long seemed to think it has immunity from those debates because it is just a technology company—one that has built a “platform for all ideas.”
This notion that Facebook is an open, neutral platform is almost like a religious tenet inside the company. When new recruits come in, they are treated to an orientation lecture by Chris Cox, the company’s chief product officer, who tells them Facebook is an entirely new communications platform for the 21st century, as the telephone was for the 20th. But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.
And so, because of the company’s self-image, as well as its fear of regulation, Facebook tried never to favor one kind of news content over another. But neutrality is a choice in itself. For instance, Facebook decided to present every piece of content that appeared on News Feed—whether it was your dog pictures or a news story—in roughly the same way. This meant that all news stories looked roughly the same as each other, too, whether they were investigations in The Washington Post, gossip in the New York Post, or flat-out lies in the Denver Guardian, an entirely bogus newspaper. Facebook argued that this democratized information. You saw what your friends wanted you to see, not what some editor in a Times Square tower chose. But it’s hard to argue that this wasn’t an editorial decision. It may be one of the biggest ever made.
In any case, Facebook’s move into news set off yet another explosion of ways that people could connect. Now Facebook was the place where publications could connect with their readers—and also where Macedonian teenagers could connect with voters in America, and operatives in Saint Petersburg could connect with audiences of their own choosing in a way that no one at the company had ever seen before.
In February of 2016, just as the Trending Topics fiasco was building up steam, Roger McNamee became one of the first Facebook insiders to notice strange things happening on the platform. McNamee was an early investor in Facebook who had mentored Zuckerberg through two crucial decisions: to turn down Yahoo’s offer of $1 billion to acquire Facebook in 2006; and to hire a Google executive named Sheryl Sandberg in 2008 to help find a business model. McNamee was no longer in touch with Zuckerberg much, but he was still an investor, and that month he started seeing things related to the Bernie Sanders campaign that worried him. “I’m observing memes ostensibly coming out of a Facebook group associated with the Sanders campaign that couldn’t possibly have been from the Sanders campaign,” he recalls, “and yet they were organized and spreading in such a way that suggested somebody had a budget. And I’m sitting there thinking, ‘That’s really weird. I mean, that’s not good.’ ”
But McNamee didn’t say anything to anyone at Facebook—at least not yet. And the company itself was not picking up on any such worrying signals, save for one blip on its radar: In early 2016, its security team noticed an uptick in Russian actors attempting to steal the credentials of journalists and public figures. Facebook reported this to the FBI. But the company says it never heard back from the government, and that was that.
Instead, Facebook spent the spring of 2016 very busily fending off accusations that it might influence the elections in a completely different way. When Gizmodo published its story about political bias on the Trending Topics team in May, the article went off like a bomb in Menlo Park. It quickly reached millions of readers and, in a delicious irony, appeared in the Trending Topics module itself. But the bad press wasn’t what really rattled Facebook—it was the letter from John Thune, a Republican US senator from South Dakota, that followed the story’s publication. Thune chairs the Senate Commerce Committee, which in turn oversees the Federal Trade Commission, an agency that has been especially active in investigating Facebook. The senator wanted Facebook’s answers to the allegations of bias, and he wanted them promptly.
The Thune letter put Facebook on high alert. The company promptly dispatched senior Washington staffers to meet with Thune’s team. Then it sent him a 12-page single-spaced letter explaining that it had conducted a thorough review of Trending Topics and determined that the allegations in the Gizmodo story were largely false.
Facebook decided, too, that it had to extend an olive branch to the entire American right wing, much of which was raging about the company’s supposed perfidy. And so, just over a week after the story ran, Facebook scrambled to invite a group of 17 prominent Republicans out to Menlo Park. The list included television hosts, radio stars, think tankers, and an adviser to the Trump campaign. The point was partly to get feedback. But more than that, the company wanted to make a show of apologizing for its sins, lifting up the back of its shirt, and asking for the lash.
According to a Facebook employee involved in planning the meeting, part of the goal was to bring in a group of conservatives who were certain to fight with one another. They made sure to have libertarians who wouldn’t want to regulate the platform and partisans who would. Another goal, according to the employee, was to make sure the attendees were “bored to death” by a technical presentation after Zuckerberg and Sandberg had addressed the group.
The power went out, and the room got uncomfortably hot. But otherwise the meeting went according to plan. The guests did indeed fight, and they failed to unify in a way that was either threatening or coherent. Some wanted the company to set hiring quotas for conservative employees; others thought that idea was nuts. As often happens when outsiders meet with Facebook, people used the time to try to figure out how they could get more followers for their own pages.
Afterward, Glenn Beck, one of the invitees, wrote an essay about the meeting, praising Zuckerberg. “I asked him if Facebook, now or in the future, would be an open platform for the sharing of all ideas or a curator of content,” Beck wrote. “Without hesitation, with clarity and boldness, Mark said there is only one Facebook and one path forward: ‘We are an open platform.’”
Inside Facebook itself, the backlash around Trending Topics did inspire some genuine soul-searching. But none of it got very far. A quiet internal project, codenamed Hudson, cropped up around this time to determine, according to someone who worked on it, whether News Feed should be modified to better deal with some of the most complex issues facing the product. Does it favor posts that make people angry? Does it favor simple or even false ideas over complex and true ones? Those are hard questions, and the company didn’t have answers to them yet. Ultimately, in late June, Facebook announced a modest change: The algorithm would be revised to favor posts from friends and family. At the same time, Adam Mosseri, Facebook’s News Feed boss, posted a manifesto titled “Building a Better News Feed for You.” People inside Facebook spoke of it as a document roughly resembling the Magna Carta; the company had never spoken before about how News Feed really worked. To outsiders, though, the document came across as boilerplate. It said roughly what you’d expect: that the company was opposed to clickbait but that it wasn’t in the business of favoring certain kinds of viewpoints.
The most important consequence of the Trending Topics controversy, according to nearly a dozen former and current employees, was that Facebook became wary of doing anything that might look like stifling conservative news. It had burned its fingers once and didn’t want to do it again. And so a summer of deeply partisan rancor and calumny began with Facebook eager to stay out of the fray.
Shortly after Mosseri published his guide to News Feed values, Zuckerberg traveled to Sun Valley, Idaho, for an annual conference hosted by billionaire Herb Allen, where moguls in short sleeves and sunglasses cavort and make plans to buy each other’s companies. But Rupert Murdoch broke the mood in a meeting that took place inside his villa. According to numerous accounts of the conversation, Murdoch and Robert Thomson, the CEO of News Corp, explained to Zuckerberg that they had long been unhappy with Facebook and Google. The two tech giants had taken nearly the entire digital ad market and become an existential threat to serious journalism. According to people familiar with the conversation, the two News Corp leaders accused Facebook of making dramatic changes to its core algorithm without adequately consulting its media partners, wreaking havoc according to Zuckerberg’s whims. If Facebook didn’t start offering a better deal to the publishing industry, Thomson and Murdoch conveyed in stark terms, Zuckerberg could expect News Corp executives to become much more public in their denunciations and much more open in their lobbying. They had helped to make things very hard for Google in Europe. And they could do the same for Facebook in the US.
Facebook thought that News Corp was threatening to push for a government antitrust investigation or maybe an inquiry into whether the company deserved its protection from liability as a neutral platform. Inside Facebook, executives believed Murdoch might use his papers and TV stations to amplify critiques of the company. News Corp says that was not at all the case; the company threatened to deploy executives, but not its journalists.
Zuckerberg had reason to take the meeting especially seriously, according to a former Facebook executive, because he had firsthand knowledge of Murdoch’s skill in the dark arts. Back in 2007, Facebook had come under criticism from 49 state attorneys general for failing to protect young Facebook users from sexual predators and inappropriate content. Concerned parents had written to Connecticut attorney general Richard Blumenthal, who opened an investigation, and to The New York Times, which published a story. But according to a former Facebook executive in a position to know, the company believed that many of the Facebook accounts and the predatory behavior the letters referenced were fakes, traceable to News Corp lawyers or others working for Murdoch, who owned Facebook’s biggest competitor, MySpace. “We traced the creation of the Facebook accounts to IP addresses at the Apple store a block away from the MySpace offices in Santa Monica,” the executive says. “Facebook then traced interactions with those accounts to News Corp lawyers. When it comes to Facebook, Murdoch has been playing every angle he can for a long time.” (Both News Corp and its spinoff 21st Century Fox declined to comment.)
When Zuckerberg returned from Sun Valley, he told his employees that things had to change. They still weren’t in the news business, but they had to make sure there would be a news business. And they had to communicate better. One of those who got a new to-do list was Andrew Anker, a product manager who’d arrived at Facebook in 2015 after a career in journalism (including a long stint at WIRED in the ’90s). One of his jobs was to help the company think through how publishers could make money on the platform. Shortly after Sun Valley, Anker met with Zuckerberg and asked to hire 60 new people to work on partnerships with the news industry. Before the meeting ended, the request was approved.
But having more people out talking to publishers just drove home how hard it would be to resolve the financial problems Murdoch wanted fixed. News outfits were spending millions to produce stories that Facebook was benefiting from, and Facebook, they felt, was giving too little back in return. Instant Articles, in particular, struck them as a Trojan horse. Publishers complained that they could make more money from stories that loaded on their own mobile web pages than on Facebook Instant. (They often did so, it turned out, in ways that short-changed advertisers, by sneaking in ads that readers were unlikely to see. Facebook didn’t let them get away with that.) Another seemingly irreconcilable difference: Outlets like Murdoch’s Wall Street Journal depended on paywalls to make money, but Instant Articles banned paywalls; Zuckerberg disapproved of them. After all, he would often ask, how exactly do walls and toll booths make the world more open and connected?
The conversations often ended at an impasse, but Facebook was at least becoming more attentive. This newfound appreciation for the concerns of journalists did not, however, extend to the journalists on Facebook’s own Trending Topics team. In late August, everyone on the team was told that their jobs were being eliminated. Simultaneously, authority over the algorithm shifted to a team of engineers based in Seattle. Very quickly the module started to surface lies and fiction. A headline days later read, “Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary.”
While Facebook grappled internally with what it was becoming—a company that dominated media but didn’t want to be a media company—Donald Trump’s presidential campaign staff faced no such confusion. To them Facebook’s use was obvious. Twitter was a tool for communicating directly with supporters and yelling at the media. Facebook was the way to run the most effective direct-marketing political operation in history.
In the summer of 2016, at the top of the general election campaign, Trump’s digital operation might have seemed to be at a major disadvantage. After all, Hillary Clinton’s team was flush with elite talent and got advice from Eric Schmidt, known for running Google. Trump’s was run by Brad Parscale, known for setting up the Eric Trump Foundation’s web page. Trump’s social media director was his former caddie. But in 2016, it turned out you didn’t need digital experience running a presidential campaign, you just needed a knack for Facebook.
Over the course of the summer, Trump’s team turned the platform into one of its primary vehicles for fund-raising. The campaign uploaded its voter files—the names, addresses, voting history, and any other information it had on potential voters—to Facebook. Then, using a tool called Lookalike Audiences, Facebook identified the broad characteristics of, say, people who had signed up for Trump newsletters or bought Trump hats. That allowed the campaign to send ads to people with similar traits. Trump would post simple messages like “This election is being rigged by the media pushing false and unsubstantiated charges, and outright lies, in order to elect Crooked Hillary!” that got hundreds of thousands of likes, comments, and shares. The money rolled in. Clinton’s wonkier messages, meanwhile, resonated less on the platform. Inside Facebook, almost everyone on the executive team wanted Clinton to win; but they knew that Trump was using the platform better. If he was the candidate for Facebook, she was the candidate for LinkedIn.
Trump’s candidacy also proved to be a wonderful tool for a new class of scammers pumping out massively viral and entirely fake stories. Through trial and error, they learned that memes praising the former host of The Apprentice got many more readers than ones praising the former secretary of state. A website called Ending the Fed proclaimed that the Pope had endorsed Trump and got almost a million comments, shares, and reactions on Facebook, according to an analysis by BuzzFeed. Other stories asserted that the former first lady had quietly been selling weapons to ISIS, and that an FBI agent suspected of leaking Clinton’s emails was found dead. Some of the posts came from hyperpartisan Americans. Some came from overseas content mills that were in it purely for the ad dollars. By the end of the campaign, the top fake stories on the platform were generating more engagement than the top real ones.
Even current Facebookers acknowledge now that they missed what should have been obvious signs of people misusing the platform. And looking back, it’s easy to put together a long list of possible explanations for the myopia in Menlo Park about fake news. Management was gun-shy because of the Trending Topics fiasco; taking action against partisan disinformation—or even identifying it as such—might have been seen as another act of political favoritism. Facebook also sold ads against the stories, and sensational garbage was good at pulling people into the platform. Employees’ bonuses can be based largely on whether Facebook hits certain growth and revenue targets, which gives people an extra incentive not to worry too much about things that are otherwise good for engagement. And then there was the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.
Roger McNamee, however, watched carefully as the nonsense spread. First there were the fake stories pushing Bernie Sanders, then ones supporting Brexit, and then ones helping Trump. By the end of the summer, he had resolved to write an op-ed about the problems on the platform. But he never ran it. “The idea was, look, these are my friends. I really want to help them.” And so on a Sunday evening, nine days before the 2016 election, McNamee emailed a 1,000-word letter to Sandberg and Zuckerberg. “I am really sad about Facebook,” it began. “I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months. Now I am disappointed. I am embarrassed. I am ashamed.”
It’s not easy to recognize that the machine you’ve built to bring people together is being used to tear them apart, and Mark Zuckerberg’s initial reaction to Trump’s victory, and Facebook’s possible role in it, was one of peevish dismissal. Executives remember panic the first few days, with the leadership team scurrying back and forth between Zuckerberg’s conference room (called the Aquarium) and Sandberg’s (called Only Good News), trying to figure out what had just happened and whether they would be blamed. Then, at a conference two days after the election, Zuckerberg argued that filter bubbles are worse offline than on Facebook and that social media hardly influences how people vote. “The idea that fake news on Facebook—of which, you know, it’s a very small amount of the content—influenced the election in any way, I think, is a pretty crazy idea,” he said.
Zuckerberg declined to be interviewed for this article, but people who know him well say he likes to form his opinions from data. And in this case he wasn’t without it. Before the interview, his staff had worked up a back-of-the-envelope calculation showing that fake news was a tiny percentage of the total amount of election-related content on the platform. But the analysis was just an aggregate look at the percentage of clearly fake stories that appeared across all of Facebook. It didn’t measure their influence or the way fake news affected specific groups. It was a number, but not a particularly meaningful one.
Zuckerberg’s comments did not go over well, even inside Facebook. They seemed clueless and self-absorbed. “What he said was incredibly damaging,” a former executive told WIRED. “We had to really flip him on that. We realized that if we didn’t, the company was going to start heading down this pariah path that Uber was on.”
A week after his “pretty crazy” comment, Zuckerberg flew to Peru to give a talk to world leaders about the ways that connecting more people to the internet, and to Facebook, could reduce global poverty. Right after he landed in Lima, he posted something of a mea culpa. He explained that Facebook did take misinformation seriously, and he presented a vague seven-point plan to tackle it. When a professor at the New School named David Carroll saw Zuckerberg’s post, he took a screenshot. Alongside it on Carroll’s feed ran a headline from a fake CNN with an image of a distressed Donald Trump and the text “DISQUALIFIED; He’s GONE!”
At the conference in Peru, Zuckerberg met with a man who knows a few things about politics: Barack Obama. Media reports portrayed the encounter as one in which the lame-duck president pulled Zuckerberg aside and gave him a “wake-up call” about fake news. But according to someone who was with them in Lima, it was Zuckerberg who called the meeting, and his agenda was merely to convince Obama that, yes, Facebook was serious about dealing with the problem. He truly wanted to thwart misinformation, he said, but it wasn’t an easy issue to solve.
Meanwhile, at Facebook, the gears churned. For the first time, insiders really began to question whether they had too much power. One employee told WIRED that, watching Zuckerberg, he was reminded of Lennie in Of Mice and Men, the farm-worker with no understanding of his own strength.
Very soon after the election, a team of employees started working on something called the News Feed Integrity Task Force, inspired by a sense, one of them told WIRED, that hyperpartisan misinformation was “a disease that’s creeping into the entire platform.” The group, which included Mosseri and Anker, began to meet every day, using whiteboards to outline different ways they could respond to the fake-news crisis. Within a few weeks the company announced it would cut off advertising revenue for ad farms and make it easier for users to flag stories they thought false.
In December the company announced that, for the first time, it would introduce fact-checking onto the platform. Facebook didn’t want to check facts itself; instead it would outsource the problem to professionals. If Facebook received enough signals that a story was false, it would automatically be sent to partners, like Snopes, for review. Then, in early January, Facebook announced that it had hired Campbell Brown, a former anchor at CNN. She immediately became the most prominent journalist hired by the company.
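The workflow described above can be sketched in a few lines; this is a hedged approximation (the threshold and story names are hypothetical, and Facebook's real signals were certainly richer than a raw flag count), not the company's actual system.

```python
# Hypothetical sketch of threshold-based routing: once a story accumulates
# enough "false news" flags from users, it is queued for outside review.

FLAG_THRESHOLD = 100  # assumed cutoff, purely illustrative

def route_for_review(stories, threshold=FLAG_THRESHOLD):
    """Return story IDs whose flag count meets the review threshold."""
    return [sid for sid, flags in stories.items() if flags >= threshold]

flag_counts = {
    "story-a": 3,    # a few stray flags: ignored
    "story-b": 250,  # heavily flagged: sent to partners like Snopes
    "story-c": 100,
}

print(route_for_review(flag_counts))  # ['story-b', 'story-c']
```

The design point is that Facebook positioned itself as a dispatcher, not a judge: the platform only decides which stories cross the line into "enough signals," and the truth call belongs to the outside partners.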
Soon Brown was put in charge of something called the Facebook Journalism Project. “We spun it up over the holidays, essentially,” says one person involved in discussions about the project. The aim was to demonstrate that Facebook was thinking hard about its role in the future of journalism—essentially, it was a more public and organized version of the efforts the company had begun after Murdoch’s tongue-lashing. But sheer anxiety was also part of the motivation. “After the election, because Trump won, the media put a ton of attention on fake news and just started hammering us. People started panicking and getting afraid that regulation was coming. So the team looked at what Google had been doing for years with News Lab”—a group inside Alphabet that builds tools for journalists—“and we decided to figure out how we could put together our own packaged program that shows how seriously we take the future of news.”
Facebook was reluctant, however, to issue any mea culpas or action plans with regard to the problem of filter bubbles or Facebook’s noted propensity to serve as a tool for amplifying outrage. Members of the leadership team regarded these as issues that couldn’t be solved, and maybe even shouldn’t be solved. Was Facebook really more at fault for amplifying outrage during the election than, say, Fox News or MSNBC? Sure, you could put stories into people’s feeds that contradicted their political viewpoints, but people would turn away from them, just as surely as they’d flip the dial back if their TV quietly switched them from Sean Hannity to Joy Reid. The problem, as Anker puts it, “is not Facebook. It’s humans.”
Zuckerberg’s “pretty crazy” statement about fake news caught the ear of a lot of people, but one of the most influential was a security researcher named Renée DiResta. For years, she’d been studying how misinformation spreads on the platform. If you joined an antivaccine group on Facebook, she observed, the platform might suggest that you join flat-earth groups or maybe ones devoted to Pizzagate—putting you on a conveyor belt of conspiracy thinking. Zuckerberg’s statement struck her as wildly out of touch. “How can this platform say this thing?” she remembers thinking.
Roger McNamee, meanwhile, was getting steamed at Facebook’s response to his letter. Zuckerberg and Sandberg had written him back promptly, but they hadn’t said anything substantial. Instead he ended up having a months-long, ultimately futile set of email exchanges with Dan Rose, Facebook’s VP for partnerships. McNamee says Rose’s message was polite but also very firm: The company was doing a lot of good work that McNamee couldn’t see, and in any event Facebook was a platform, not a media company.
“And I’m sitting there going, ‘Guys, seriously, I don’t think that’s how it works,’” McNamee says. “You can assert till you’re blue in the face that you’re a platform, but if your users take a different point of view, it doesn’t matter what you assert.”
As the saying goes, heaven has no rage like love to hatred turned, and McNamee’s concern soon became a cause—and the beginning of an alliance. In April 2017 he connected with a former Google design ethicist named Tristan Harris when they appeared together on Bloomberg TV. Harris had by then gained a national reputation as the conscience of Silicon Valley. He had been profiled on 60 Minutes and in The Atlantic, and he spoke eloquently about the subtle tricks that social media companies use to foster an addiction to their services. “They can amplify the worst aspects of human nature,” Harris told WIRED this past December. After the TV appearance, McNamee says he called Harris up and asked, “Dude, do you need a wingman?”
The next month, DiResta published an article comparing purveyors of disinformation on social media to manipulative high-frequency traders in financial markets. “Social networks enable malicious actors to operate at platform scale, because they were designed for fast information flows and virality,” she wrote. Bots and sock puppets could cheaply “create the illusion of a mass groundswell of grassroots activity,” in much the same way that early, now-illegal trading algorithms could spoof demand for a stock. Harris read the article, was impressed, and emailed her.
The three were soon out talking to anyone who would listen about Facebook’s poisonous effects on American democracy. And before long they found receptive audiences in the media and Congress—groups with their own mounting grievances against the social media giant.
Even at the best of times, meetings between Facebook and media executives can feel like unhappy family gatherings. The two sides are inextricably bound together, but they don’t like each other all that much. News executives resent that Facebook and Google have captured roughly three-quarters of the digital ad business, leaving the media industry and other platforms, like Twitter, to fight over scraps. Plus they feel like the preferences of Facebook’s algorithm have pushed the industry to publish ever-dumber stories. For years, The New York Times resented that Facebook helped elevate BuzzFeed; now BuzzFeed is angry about being displaced by clickbait.
And then there’s the simple, deep fear and mistrust that Facebook inspires. Every publisher knows that, at best, they are sharecroppers on Facebook’s massive industrial farm. The social network is roughly 200 times more valuable than the Times. And journalists know that the man who owns the farm has the leverage. If Facebook wanted to, it could quietly turn any number of dials that would harm a publisher—by manipulating its traffic, its ad network, or its readers.
Emissaries from Facebook, for their part, find it tiresome to be lectured by people who can’t tell an algorithm from an API. They also know that Facebook didn’t win the digital ad market through luck: It built a better ad product. And in their darkest moments, they wonder: What’s the point? News makes up only about 5 percent of the total content that people see on Facebook globally. The company could let it all go and its shareholders would scarcely notice. And there’s another, deeper problem: Mark Zuckerberg, according to people who know him, prefers to think about the future. He’s less interested in the news industry’s problems right now; he’s interested in the problems five or 20 years from now. The editors of major media companies, on the other hand, are worried about their next quarter—maybe even their next phone call. When they bring lunch back to their desks, they know not to buy green bananas.
This mutual wariness—sharpened almost to enmity in the wake of the election—did not make life easy for Campbell Brown when she started her new job running the nascent Facebook Journalism Project. The first item on her to-do list was to head out on yet another Facebook listening tour with editors and publishers. One editor describes a fairly typical meeting: Brown and Chris Cox, Facebook’s chief product officer, invited a group of media leaders to gather in late January 2017 at Brown’s apartment in Manhattan. Cox, a quiet, suave man, sometimes referred to as “the Ryan Gosling of Facebook Product,” took the brunt of the ensuing abuse. “Basically, a bunch of us just laid into him about how Facebook was destroying journalism, and he graciously absorbed it,” the editor says. “He didn’t much try to defend them. I think the point was really to show up and seem to be listening.” Other meetings were even more tense, with the occasional comment from journalists noting their interest in digital antitrust issues.
As bruising as all this was, Brown’s team became more confident that their efforts were valued within the company when Zuckerberg published a 5,700-word corporate manifesto in February. He had spent the previous three months, according to people who know him, contemplating whether he had created something that did more harm than good. “Are we building the world we all want?” he asked at the beginning of his post, implying that the answer was an obvious no. Amid sweeping remarks about “building a global community,” he emphasized the need to keep people informed and to knock out false news and clickbait. Brown and others at Facebook saw the manifesto as a sign that Zuckerberg understood the company’s profound civic responsibilities. Others saw the document as blandly grandiose, showcasing Zuckerberg’s tendency to suggest that the answer to nearly any problem is for people to use Facebook more.
Shortly after issuing the manifesto, Zuckerberg set off on a carefully scripted listening tour of the country. He began popping into candy shops and dining rooms in red states, camera crew and personal social media team in tow. He wrote an earnest post about what he was learning, and he deflected questions about whether his real goal was to become president. It seemed like a well-meaning effort to win friends for Facebook. But it soon became clear that Facebook’s biggest problems emanated from places farther away than Ohio.
One of the many things Zuckerberg seemed not to grasp when he wrote his manifesto was that his platform had empowered an enemy far more sophisticated than Macedonian teenagers and assorted low-rent purveyors of bull. As 2017 wore on, however, the company began to realize it had been attacked by a foreign influence operation. “I would draw a real distinction between fake news and the Russia stuff,” says an executive who worked on the company’s response to both. “With the latter there was a moment where everyone said ‘Oh, holy shit, this is like a national security situation.’”
That holy shit moment, though, didn’t come until more than six months after the election. Early in the campaign season, Facebook was aware of familiar attacks emanating from known Russian hackers, such as the group APT28, which is believed to be affiliated with Moscow. They were hacking into accounts outside of Facebook, stealing documents, then creating fake Facebook accounts under the banner of DCLeaks, to get people to discuss what they’d stolen. The company saw no signs of a serious, concerted foreign propaganda campaign, but it also didn’t think to look for one.
During the spring of 2017, the company’s security team began preparing a report about how Russian and other foreign intelligence operations had used the platform. One of its authors was Alex Stamos, head of Facebook’s security team. Stamos was something of an icon in the tech world for having reportedly resigned from his previous job at Yahoo after a conflict over whether to grant a US intelligence agency access to Yahoo servers. According to two people with direct knowledge of the document, he was eager to publish a detailed, specific analysis of what the company had found. But members of the policy and communications team pushed back and cut his report way down. Sources close to the security team suggest the company didn’t want to get caught up in the political whirlwind of the moment. (Sources on the politics and communications teams insist they edited the report down, just because the darn thing was hard to read.)
On April 27, 2017, the day after the Senate announced it was calling then FBI director James Comey to testify about the Russia investigation, Stamos’ report came out. It was titled “Information Operations and Facebook,” and it gave a careful step-by-step explanation of how a foreign adversary could use Facebook to manipulate people. But there were few specific examples or details, and there was no direct mention of Russia. It felt bland and cautious. As Renée DiResta says, “I remember seeing the report come out and thinking, ‘Oh, goodness, is this the best they could do in six months?’”
One month later, a story in Time suggested to Stamos’ team that they might have missed something in their analysis. The article quoted an unnamed senior intelligence official saying that Russian operatives had bought ads on Facebook to target Americans with propaganda. Around the same time, the security team also picked up hints from congressional investigators that made them think an intelligence agency was indeed looking into Russian Facebook ads. Caught off guard, the team members started to dig into the company’s archival ads data themselves.
Eventually, by sorting transactions according to a series of data points—Were ads purchased in rubles? Were they purchased within browsers whose language was set to Russian?—they were able to find a cluster of accounts, funded by a shadowy Russian group called the Internet Research Agency, that had been designed to manipulate political opinion in America. There was, for example, a page called Heart of Texas, which pushed for the secession of the Lone Star State. And there was Blacktivist, which pushed stories about police brutality against black men and women and had more followers than the verified Black Lives Matter page.
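The triage described above amounts to filtering transactions on a handful of signals and grouping what survives by purchaser. A minimal sketch, with fabricated records and field names (the real analysis was far more involved than two boolean checks):

```python
# Hypothetical ad-purchase records; filter on Russia-linked signals
# (payment currency, browser language) and count hits per account.
from collections import Counter

purchases = [
    {"account": "heart_of_texas", "currency": "RUB", "browser_lang": "ru"},
    {"account": "blacktivist",    "currency": "RUB", "browser_lang": "ru"},
    {"account": "local_bakery",   "currency": "USD", "browser_lang": "en"},
    {"account": "blacktivist",    "currency": "RUB", "browser_lang": "ru"},
]

def suspicious_accounts(purchases):
    """Count ad buys per account matching both Russia-linked signals."""
    hits = [p["account"] for p in purchases
            if p["currency"] == "RUB" and p["browser_lang"] == "ru"]
    return Counter(hits)

print(suspicious_accounts(purchases))
```

Each signal alone proves nothing (plenty of legitimate advertisers pay in rubles); it is the intersection of several weak signals, clustered by account, that surfaced the Internet Research Agency pages.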
Numerous security researchers express consternation that it took Facebook so long to realize how the Russian troll farm was exploiting the platform. After all, the group was well known to Facebook. Executives at the company say they’re embarrassed by how long it took them to find the fake accounts, but they point out that they were never given help by US intelligence agencies. A staffer on the Senate Intelligence Committee likewise voiced exasperation with the company. “It seemed obvious that it was a tactic the Russians would exploit,” the staffer says.
When Facebook finally did find the Russian propaganda on its platform, the discovery set off a crisis, a scramble, and a great deal of confusion. First, due to a miscalculation, word initially spread through the company that the Russian group had spent millions of dollars on ads, when the actual total was in the low six figures. Once that error was resolved, a disagreement broke out over how much to reveal, and to whom. The company could release the data about the ads to the public, release everything to Congress, or release nothing. Much of the argument hinged on questions of user privacy. Members of the security team worried that the legal process involved in handing over private user data, even if it belonged to a Russian troll farm, would open the door for governments to seize data from other Facebook users later on. “There was a real debate internally,” says one executive. “Should we just say ‘Fuck it’ and not worry?” But eventually the company decided it would be crazy to throw legal caution to the wind “just because Rachel Maddow wanted us to.”
Ultimately, a blog post appeared under Stamos’ name in early September announcing that, as far as the company could tell, the Russians had paid Facebook $100,000 for roughly 3,000 ads aimed at influencing American politics around the time of the 2016 election. Every sentence in the post seemed to downplay the substance of these new revelations: The number of ads was small, the expense was small. And Facebook wasn’t going to release them. The public wouldn’t know what they looked like or what they were really aimed at doing.
This didn’t sit at all well with DiResta. She had long felt that Facebook was insufficiently forthcoming, and now it seemed to be flat-out stonewalling. “That was when it went from incompetence to malice,” she says. A couple of weeks later, while waiting at a Walgreens to pick up a prescription for one of her kids, she got a call from a researcher at the Tow Center for Digital Journalism named Jonathan Albright. He had been mapping ecosystems of misinformation since the election, and he had some excellent news. “I found this thing,” he said. Albright had started digging into CrowdTangle, one of the analytics platforms that Facebook uses. And he had discovered that the data from six of the accounts Facebook had shut down were still there, frozen in a state of suspended animation. There were the posts pushing for Texas secession and playing on racial antipathy. And then there were political posts, like one that referred to Clinton as “that murderous anti-American traitor Killary.” Right before the election, the Blacktivist account urged its supporters to stay away from Clinton and instead vote for Jill Stein. Albright downloaded the most recent 500 posts from each of the six groups. He reported that, in total, their posts had been shared more than 340 million times.
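Albright's tally was, mechanically, a simple aggregation: pull the newest posts from each suspended page and sum their share counts. A sketch under stated assumptions (the post data here is fabricated; his actual dataset came from CrowdTangle):

```python
# Sum share counts over up to `per_page_limit` newest posts per page,
# mirroring the "500 most recent posts from each group" tally.

def total_shares(pages, per_page_limit=500):
    """Sum shares across the newest posts of every page."""
    total = 0
    for posts in pages.values():
        newest = sorted(posts, key=lambda p: p["ts"], reverse=True)
        total += sum(p["shares"] for p in newest[:per_page_limit])
    return total

pages = {
    "heart_of_texas": [{"ts": 2, "shares": 120}, {"ts": 1, "shares": 80}],
    "blacktivist":    [{"ts": 3, "shares": 300}],
}

print(total_shares(pages))  # 500
```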
To McNamee, the way the Russians used the platform was neither a surprise nor an anomaly. “They find 100 or 1,000 people who are angry and afraid and then use Facebook’s tools to advertise to get people into groups,” he says. “That’s exactly how Facebook was designed to be used.”
McNamee and Harris had first traveled to DC for a day in July to meet with members of Congress. Then, in September, they were joined by DiResta and began spending all their free time counseling senators, representatives, and members of their staffs. The House and Senate Intelligence Committees were about to hold hearings on Russia’s use of social media to interfere in the US election, and McNamee, Harris, and DiResta were helping them prepare. One of the early questions they weighed in on was the matter of who should be summoned to testify. Harris recommended that the CEOs of the big tech companies be called in, to create a dramatic scene in which they all stood in a neat row swearing an oath with their right hands in the air, roughly the way tobacco executives had been forced to do a generation earlier. Ultimately, though, it was determined that the general counsels of the three companies—Facebook, Twitter, and Google—should head into the lion’s den.
And so on November 1, Colin Stretch arrived from Facebook to be pummeled. During the hearings themselves, DiResta was sitting on her bed in San Francisco, watching them with her headphones on, trying not to wake up her small children. She listened to the back-and-forth in Washington while chatting on Slack with other security researchers. She watched as Marco Rubio smartly asked whether Facebook even had a policy forbidding foreign governments from running an influence campaign through the platform. The answer was no. Rhode Island senator Jack Reed then asked whether Facebook felt an obligation to individually notify all the users who had seen Russian ads that they had been deceived. The answer again was no. But maybe the most threatening comment came from Dianne Feinstein, the senior senator from Facebook’s home state. “You’ve created these platforms, and now they’re being misused, and you have to be the ones to do something about it,” she declared. “Or we will.”
After the hearings, yet another dam seemed to break, and former Facebook executives started to go public with their criticisms of the company too. On November 8, billionaire entrepreneur Sean Parker, Facebook’s first president, said he now regretted pushing Facebook so hard on the world. “I don’t know if I really understood the consequences of what I was saying,” he said.
Similarly, Valentine’s Day gets a lot of shit for being a “fake” or “commercial” holiday, which is just as dumb (what’s Arbor Day done for you lately, anyway?). Oh no, a “holiday” where we celebrate by expressing our love for each other and exchanging tokens of affection, however will we survive? The point is, if you have a man in your life (whether a steady boyfriend or as the result of “cuffing season,” which I maintain is not real), it’s ok if you want to buy each other presents. Valentine’s Day is tough because no one wants to attach much meaning to it, so the endless sea of generic gifts can be a lot to sort through. I’ve pulled out a few suggestions, and helpfully broken them out by how long you’ve been dating (and therefore, price). As usual when I do one of these, I offer one nice thing you can do for him, plus a couple of actual gifts. You’re welcome.
Stop Being Suspicious Of His Ex
Look, he’s dating you and not her, ok? He chose you. As long as he’s stopped starting every other sentence with “Well, Madysynn and I used to…,” it’s time to give it a rest. She’s not gonna stop following him on Insta until she’s good and ready, and him blocking her would be insane. Let him bitch about her when he’s drunk if he needs to; it’s part of the healing process. Just don’t pile on yourself. Remind him of why he’s better off with you, not worse.
We all carry a bunch of shit that needs electricity nowadays, but Congress has repeatedly ignored my motions for a public infrastructure initiative that would install wireless charging modules under every American street and sidewalk (I see no flaw in this plan). Until they come to their senses, it’s nice to have a little insurance policy in the form of a portable charger. I’d rather eat my own big toe than watch my phone switch into low power mode.
Everyone’s a fucking foodie now, aren’t they? I feel like that’s just how it is—either cooking is as foreign and unknowable to you as an uncontacted Amazonian tribe, or you spend your free time rage-commenting over how to properly make your own lump charcoal in your backyard. But if your man is One Of The Good Ones (i.e., into cooking but not an asshole about it), this book is a perfect gift. It’s less a collection of recipes, and more a nerdy scientific explanation of why shit tastes better in restaurants than it does at home—and how to bridge the gap. The Food Lab website has been my go-to when I want to be a snobby, know-it-all food jerk for a while now, and the book is even more in-depth. Plus, he can use it to make you nice things, so that’s a win-win.
Give Him A Goddamn Drawer
Assuming you’re not a couple of lunatics who’ve already shacked up/gotten married by now, odds are you’re not living together, but you’re spending most of your time at each other’s abodes. That’s fine, but do you know how annoying it is to have to pack up a bag every time you want to spend a night Netflixing (chilling optional)? It sucks! Most likely, you’re spending most of your time at your place because you probably have your shit a little more together, whereas he might still live in a house with like six other bros that should have been condemned during the Clinton administration. Clear out maybe a dozen of your old sorority philanthropy t-shirts, and make space for him to keep some pajamas, a work outfit, a weekend outfit, and some basic toiletries. I’d say he should do the same for you, but you know damn well you planted a flag in your drawer after like the second week you were dating.
For the foreseeable future, it looks like men are going to have to pretend that whiskey is something we enjoy drinking. It’s not that I can’t appreciate a decent scotch or whatever, but like, have you ever even had a gin and tonic? Much more refreshing. Anyhow, for as long as we have to keep up this charade, these glasses greatly improve the whiskey drinking experience by keeping his grubby paws from warming it up. Ice won’t melt as fast if he uses it, and if not, the room temperature hooch will remain… room temperature. It’s pretty weird when that’s the nicest thing you can say about something that’s twice the price of gasoline but tastes basically the same.
Broke: Fucking so loud your neighbors can hear you. Woke: Fucking so loud Alexa can hear you. Of all the creepy, always-listening Echo products, the Dot presents the best value. It does all the things the more expensive ones can do, and provided he has another, nicer Bluetooth speaker, its relatively weak sound quality shouldn’t be a problem. By simply speaking out into the ether like a crazy person, he can use it to check the weather, stream music, look up sports scores and even order shit like Ubers and takeout food. I use mine maybe once per month, but the technology is a little cooler and more advanced than I give it credit for. Signing your entire life over to one of a handful of mega corporations is somehow one of the dystopian futures facing us (given current events), so you might as well embrace it.
Nine Months +
Couples who’ve reached this stage are really annoying, because they want so desperately to be taken seriously as a couple despite not being married, or even engaged. Things like moving in together or getting married may indeed be on the horizon, however, which means you’re involved in Very Adult Things like planning your career tracks, looking at houses/apartments and like, talking about your retirement accounts or whatever. The point is, with stability and familiarity comes routine. And while that’s necessary and not a bad thing in and of itself, it’s not exactly fun. Think back to when you started dating—you were probably doing shit in bed that would get you arrested in most NATO countries. You don’t have to turn into a porn star, but throwing on something a little flirtier and sexier than your go-to baggy t-shirts and sweats could deliver a big ROI. Just don’t let him pick it out. All the underwear will be crotchless and putting it all on will require more time and assistance than a medieval knight’s armor.
One mark of getting older and wiser is accumulating nicer versions of all the shit you never even considered important when you were younger. If he’s been carrying the same ugly, smelly company-issued bag to the gym since he got it at orientation five years ago, he’ll really appreciate an upgrade. It’s plenty roomy, but it’s still structured so it doesn’t flop around the way your boyfriend does when he’s playing pickup basketball. Plus, it looks nice enough to use as carry-on luggage without looking like a hobo. I don’t have this, but I have some other stuff from Herschel and can testify that it seems nice enough to justify paying these prices for a canvas sack.
I can personally attest that getting rid of my bulky, obnoxious wallet was one of the best things I ever did. There’s just no reason for it, because probably 90% of the shit we carry in there is shit we don’t need. But men are creatures of habit, so most of us will continue to give ourselves sciatica and develop that white-collar skoal ring in the rear pockets of all our pants. This Bellroy wallet still holds everything he needs (if not more): Up to 12 cards and a little bit of cash folded in the middle. I mean, how many cards does one man need to carry? I’m thinking a debit card, credit card, subway pass, ID badge, insurance card, and (maybe) a corporate credit card. That’s six. If your boyfriend carries more than that, I have some bad news about your financial future. Similarly, if he carries around more than a few bills in cash, he’s either a drug dealer or he thinks the government is tracking his purchases, neither of which are optimal. Either way, this will hold all his shit while staying slim enough to not ruin the line of his suit. Jk, we both know you’re not dating the kind of guy who wears a suit often (or ever).
Washington (CNN) President Donald Trump has asked for a military parade and the Pentagon is reviewing potential dates, Pentagon spokesman Charlie Summers said Tuesday.
One thing most of us can agree on is that when possible, we should protect animals.
After all, they’ve been existing without us interfering heavily in their environment for millions of years. As our technology advances, we become more and more of a threat to them, so placing limits on what we find acceptable is the humane thing to do. While we hear a lot about polar bears and other mammals who are affected by our actions, we hear less about birds.
Birds are highly impacted by the things we do in the air, but the Trump administration just took away a key protection that’s saved countless lives.
The Migratory Bird Treaty Act was implemented back in 1918 to prevent the killing of birds in large numbers. Under interpretations of the law in force since the 1970s, corporations could be fined when birds were killed in oil spills or by wind farms. This encouraged the companies to find ways to prevent loss of life.
Under the Trump administration’s new interpretation of the law, handed down in late 2017, fines will now only be issued for birds killed by intentional hunting. Critics fear that oil drilling, wind power, and communications companies will now disregard the safeguards they implemented to protect birds.
The Environmental Protection Agency often worked with companies to come up with solutions to problems that killed birds. Communications towers began using flashing lights, and fishing boats weighted their nets. These regulations may have cost a small amount but they saved lives.
During the Obama administration, Duke Energy and PacifiCorp Energy were both prosecuted under this act after they failed to implement safety measures at their wind energy farms. Now, neither would be held liable.
Almost every Interior Department official since the 1970s, across party lines, has signed a letter denouncing the decision. “This legal opinion is contrary to the long-standing interpretation by every administration (Republican and Democrat) since at least the 1970s, who held that the Migratory Bird Treaty Act strictly prohibits the unregulated killing of birds,” the letter states.
The Trump administration, however, is standing by its decision, calling the way the law had been interpreted in the past “totalitarian.” It seems now we’ll have to rely on companies to just do the right thing without having to suffer any financial penalties. What could possibly go wrong?
Congressional negotiators finalized an immigration deal on Thursday that would codify legal protections for undocumented minors while giving President Donald Trump some tangible victories of his own.
The deal, which was forged in the Senate, still faces major hurdles, chief among them a healthy skepticism from conservatives, including some officials in the White House, and a difficult path to passage in the House. But one lawmaker told The Daily Beast that there was a growing sense of optimism that negotiators had crafted an agreement that could, at a minimum, pass the Senate and put pressure on the House to act.
Congressional sources and lawmakers were universally cagey in discussing the specifics, but after The Daily Beast reported on the basic contours of the agreement, Sen. Jeff Flake (R-AZ) confirmed the details that were initially described by aides and lobbyists privy to the negotiations.
The final deal will codify the legal protections for so-called DREAMers that Trump rescinded when he ended the Deferred Action for Childhood Arrivals (DACA) program that began under his predecessor. The deal also seeks to undo another Trump decision: the termination of the Temporary Protected Status (TPS) designation for certain groups of immigrants, including Salvadorans, 200,000 of whom could face an end to their legal status in 2019.
In an interview Thursday afternoon, Flake confirmed those details to The Daily Beast. He said the DACA fix agreed to as part of the deal includes a pathway to citizenship for DREAMers. Additionally, they will have to wait 12 years from the time they are awarded protected status before they can gain citizenship, according to a congressional source familiar with the negotiations, and they can earn up to two years of credit for any time spent as a DACA beneficiary. Groups of immigrants such as Salvadorans would be able to access the diversity visa lottery program if they have Temporary Protected Status, according to Flake.
In exchange for backpedaling on two of his initiatives, Trump would score some real policy victories. The deal will revamp but not end the visa lottery program, numerous sources said. The formula for so-called chain migration, the policy whereby lawful permanent residents can sponsor immigrants to the U.S., was re-drawn in a way that alters the ability of those DREAMers to sponsor their relatives for legal status.
The covered population, the parents who brought their kids across the border illegally, would not be able to access a citizenship track by virtue of their children, Flake said. “So we’ve cut chain migration with regard to the covered population.”
Negotiations are ongoing about how to strengthen the E-Verify system that allows businesses to determine the legal status and eligibility of the workers they hire. The deal will, regardless, include a border security investment of at least $1 billion for new technology. Democrats have expressed a willingness to accept such an arrangement, provided it was geared toward new border monitoring technologies.
The final deal will also include money for some sort of physical structure along the southern border. Just what that structure will be called and what it will resemble is still unclear. Trump has demanded funding for a wall, though he has backed off of his insistence that it be a coast-to-coast concrete barrier funded by the Mexican government. Democrats, and some Republicans, are adamantly opposed to a wall being built.
In somewhat typical Washington fashion, negotiators were working to find the right nomenclature to satisfy both sides. Less difficult, though, is figuring out how much money will go to a border structure. The lobbyist said it would be well below the $18 billion that Trump has requested, and one lawmaker involved in the negotiations said just $1.6 billion was allocated.
As is always the case with such high-stakes negotiations, the possibility of failure remains real. Flake’s working group, which includes Sens. Lindsey Graham (R-S.C.), Dick Durbin (D-IL), Bob Menendez (D-N.J.), Michael Bennet (D-CO) and Cory Gardner (R-CO), was already facing resistance on Thursday from their fellow lawmakers, who suggested that Trump was not on board with the proposal.
The main negotiations have been spearheaded by a bipartisan group of senators. But after a meeting at the White House this week, the whips of each chamber, Sens. Durbin and John Cornyn (R-TX) as well as Reps. Steny Hoyer (D-MD) and Kevin McCarthy (R-CA), started their own working group.
The White House has been working alongside that unit. But there is also tangible concern among Democrats and some Republicans on Capitol Hill that the administration’s involvement might hamper matters more than it helps.
In particular, aides have been wary that policy adviser Stephen Miller, a noted immigration hardliner, has been back-channeling with conservative lawmakers to demand tougher interior enforcement measures as part of any final product. Senators involved in the negotiations, notably Flake, have argued that Trump’s instincts are in the right place when it comes to saving DACA, despite the advice he receives from those around him, a thinly veiled reference to Miller.
So far, White House chief of staff John Kelly has run point in talks with congressional officials. But Marc Short, the White House Director of Legislative Affairs, told The Daily Beast that Miller was not out of the loop.
“We think he knows the issue of immigration better than anyone,” Short said, noting that Miller was a key aide on the policy for then-Sen. Jeff Sessions (R-AL). “Because of that, if there is a deal that can be done with Democrats, Stephen knows where and how.”
The next step for the Senate working group, the lawmakers said in a statement, is to build support for the deal in Congress, a task that is shaping up to be a tall order.
Officials in the administration were notably less bullish than congressional aides and outside advocates about the possibility of a deal passing both the House and Senate. One senior White House official said that “the scope continues to narrow but we still are a ways off.” There is also a broader fear that anything agreed upon in the Senate would ultimately die in the House, where Republican lawmakers are far less likely to be comfortable with any legislative product that codifies DACA and does not include more funding for a border wall.
And, in recent days, other developments have suggested that bipartisan progress may be limited to the Senate. On Wednesday, Reps. Bob Goodlatte (R-VA) and Michael McCaul (R-TX) introduced a separate immigration bill that contains stricter border enforcement measures in return for codifying DACA. It was widely viewed as an effort to muddy the waters of other congressional negotiations by setting a far harder line for what House Republicans should demand in any deal. Trump’s support for the measure complicated matters even further.
“The president’s support of the bill will make this deal dead on arrival in the House,” a senior House GOP aide told The Daily Beast. “Instead of jamming a Democrat-hatched deal down members’ throats, House Leadership should use the committee process and regular order to pass something that reflects the will of the American people.”
But Trump’s support for Goodlatte-McCaul was notably nuanced. In a statement following that legislation’s introduction, the White House only publicly praised four components of the bill rather than its total scope. Those four components are all within the purview of the Senate-dominated DACA deal talks.
“It has to get 60 votes,” Flake said. “We’re the only bipartisan deal in town.”
Nothing makes you more of a nuisance than sharing random trivia without any context. Broadening our knowledge about the world, however, is something we can’t avoid. It’s too interesting and too easy to get lost in. To teach you everything you need to know to bore people to death, we’ve compiled some of the most surprising truths from Fact Republic.
From a carrier pigeon racing internet technology to a badass Titanic passenger, the things on this list will surely spark your interest in random trivia. Scroll down to learn a thing or two and upvote your favorite entries! If, however, this series doesn’t quench your thirst for knowledge, check out these 30 happiest random facts as well!
Why are we still driving non-flying cars to our non-space workplaces while fantasizing about our merely two-boobed prostitutes? Where are all the snazzy gadgets and awesome technologies movies promised us? In many cases, they’re right here. We just don’t use them because, well, they kinda suck. Like how …
Controlling Computers With Hand Gestures Is Awful
In Minority Report, Tom Cruise plays a future cop who tries to warn everyone that Max von Sydow is evil, but no one will believe him, even though he’s clearly Max von Sydow. But what most people remember best are the scenes wherein Cruise controls his futuristic crime lab computer by waving his arms around.
How cool is that? Instead of having to say “enhance” and then clicking a boring old mouse, Cruise picks up files and videos from the air itself, and explores them using simple gestures. Soon, other movies were jumping in on this hot futuristic action. From Iron Man 2 …
… to Prometheus …
… to Star Trek: Discovery.
Why We’re Not Using This Today:
As everyone who has ever owned a Kinect knows, this crap gets old fast. The biggest issue is that your arms get tired very quickly if you hold them up for even a short period of time. If you make that a long time, the feeling gets absolutely excruciating. Engineers actually identified this problem in the ’80s, and even gave it a name: the “gorilla arm” effect. You know, because your arms get “sore, cramped, and oversized,” and you end up looking and feeling like a gorilla. Not even a cool sci-fi cyborg gorilla like in Congo.
Take another look at that Minority Report scene. When Cruise goes to shake Colin Farrell’s hand, he accidentally moves a bunch of files he’s working on. That would happen all the time. Imagine you’re holding 350 slides that took you five hours to organize and you suddenly get an itch on your butt:
Any interface that lies flat and gives you a wide range of control — even if you only move your hands a few inches — would beat this thing … hands down. If only we had something like that!
Sci-Fi Holograms Are Inferior To 2D Images In Almost Every Way
If somebody in a sci-fi movie needs to look at something important, a paltry two dimensions simply will not do. They need holograms for absolutely everything, even when audio alone would do the job. Like in Star Wars, when R2-D2 shows Leia’s holographic recording to a horned up Luke:
Here it is again in The Last Starfighter:
And here’s a dude’s head popping out of a monitor on Star Trek: Discovery:
Hell, even the highly advanced race of spacefaring giants who created mankind love holograms! From Prometheus:
Why We’re Not Using This Today:
You may have noticed something about the holograms above: They A) look like crap, B) are completely pointless, or C) both. That pretty much sums up holograms in the real world, too. Remember that time Tupac’s blue ghost crashed a Snoop Dogg performance? And remember how the company responsible went bankrupt soon thereafter? Turns out there isn’t much real use for blurry, semi-transparent 3D projections that cause eye strain if you look at them for too long.
Even the nicest example is so fuzzy and transparent that it’s not clear why you would bother with it over a 2D video feed. In the 2017 Ghost In The Shell, a hologram is used to reconstruct a murder scene, but it’s so imprecise (red tint, kinda blurry, semi-transparent) that it’s hard to think of a use for it other than making up for the investigator’s chronic lack of imagination.
In Prometheus (again!), the Weyland Corporation’s holograms don’t have a tint, but they’re so transparent that everyone on the crew probably ended up with a migraine anyway.
If you absolutely need to communicate visual information over a vast distance, why would you choose this technology? Think of the bandwidth charges! We already know the future doesn’t have Net Neutrality.
Nobody Likes Video Calls (Except In The Movies)
With the possible exception of flying cars and sex-bots, no technology shows up in sci-fi movies as often as video calls. Whether they’re discussing something of galaxy-shattering importance or reminding their spouse to buy eggs, everybody in the future does everything via video calls. We see it in …
… and like a million other movies. We’ll stop now, or we’ll be here all day.
Why We’re Not Using This Today:
We are! Video calling is finally a reality! And it sucks. Seriously, unless it’s for Twitch streaming, nobody uses it. And it’s easy to see why.
You can take voice calls in almost any situation where you can talk, but if you take a video call, you have to look like a decently dressed, reasonably groomed human being. Plus, you have to make sure you didn’t leave something like, say, a giant pink dildo visible in the background. Which has happened. On the BBC.
And yet sci-fi characters love this technology so much that they’ll literally risk their lives to use it. In 2017’s Valerian And The City Of A Thousand Planets, right as the characters are leaving a planet’s orbit, the face of their boss pops up smack dab in the middle of their ship’s front viewport. That could kill you while you’re driving a car, let alone piloting a spaceship.
Super Advanced Robots Always Have Needlessly Terrible Vision
One of the coolest types of shots is when we go inside a robot’s head to see the way they look at the world. Like in the Terminator movies, in which Arnold Schwarzenegger sees everything through a badass red filter, with a bunch of important-looking numbers and text readouts:
Or the recent RoboCop remake, where the Robo-Vision (that’s the official name, look it up) shows everything in an old-timey reddish sepia tone, with, again, added text and data prompts:
Why We’re Not Using This Today:
Look at any decent first-person shooting game. The status bars and prompts are always minimal and in the corners of the screen. If they took up 30 percent of your monitor, like in the examples above, the developers would have angry nerds with actual guns outside their houses. All those big letters and numbers are covering up important visual information, allowing AmishTeabaggz42069 to sneak up and shoot you in the head. And what are they even there for? Terminators have computers for brains. Why do they need to see the data they themselves are processing?
On top of that, the obligatory red tint makes these killer robots effectively colorblind, and prevents them from easily distinguishing between, say, blood and other liquids, which you’d think would be important in their line of work. At the other end of the spectrum, we have medical robots like Baymax from Big Hero 6, whose internal HUD looks like this:
All those widgets are probably helpful for a robot that patches up humans, but that blue tint … isn’t. Baymax needs to see his patients as accurately as possible, not just to identify any physical symptoms, but also to make treatment easier. It’s been demonstrated that blue light hinders injections, since it’s harder to find a vein under the patient’s skin.
Meanwhile, in Chappie, the law-enforcing robots that patrol the streets are all apparently equipped with crappy late ’90s webcams. Imagine trying to shoot the correct criminal if this was what you saw:
To be fair, all these examples are still an improvement over 1973’s Westworld, wherein the highly advanced Yul Brynner robot, whose sole purpose is to shoot people in gunfights, can’t even tell a fork from a spoon.
Computer Screens In Science Fiction Movies Are Worse Than The Ones We Have Today
In sci-fi movies, computer screens are elaborate displays of carefully matched colors and captivating animations (even when no one’s using them). They’re all packed with graphs and numbers and all sorts of doubtlessly essential information. Marvel at the snazzy monitors in 2009’s Star Trek …
… and Avatar …
… and naturally, good ol’ Prometheus:
Why We’re Not Using This Today:
We lose ten minutes of work time every time a pigeon lands outside our window. If you had to do your job next to a bunch of huge screens that kept looping through colorful graphics, you’d probably get quite distracted. And if your own screen insisted on performing a lovely animation every time you updated some data or asked for an analysis, you’d probably start daydreaming about Microsoft Excel for the first time in your life.
In almost every sense, these sci-fi screens are a huge step backwards compared to what we have now. Nearly all of them have low contrast (making it harder to read things at a glance) and a grand total of four colors, all of which are usually variations of blue and green. The Avengers:
Mars (a National Geographic miniseries):
Not only does this mean that you run out of ways to highlight important stuff quickly, but the preponderance of blue and lack of red tones can even be dangerous. See, when your eyes have adapted to a dark environment, light of any color except red will disrupt that adaptation. This is called the Purkinje effect. That’s why interfaces for things like submarines and airplanes use a lot of red, which allows, for example, pilots flying at night to clearly see both the screen and the view outside their cockpit. But on the other hand, blue looks neater, so that’s a fair tradeoff.
These sci-fi screens fail at the most basic function of a user interface: conveying information quickly and easily. Everything important is hidden in dense blocks of tiny text and numbers scattered around the screen. The only way the following screenshots make sense is if the characters have superhuman vision or magnifying glasses:
For comparison, here is a real-life NASA mission control room:
Note the lack of flashy animated visualizations. The multiple high-contrast colors. The text that is readable when you’re at the intended distance. And Earth has yet to be attacked by alien invaders. Coincidence? We don’t think so.
Prometheus isn’t a bad movie, but please make sure you’ve seen Alien before watching Prometheus. We talk about that movie a lot on this site too.
A while back, we made a post about people who were perusing art galleries and museums and spontaneously stumbled upon their doppelganger, in fine art form.
Perhaps inspired by this, Google have decided to add a feature to their Arts & Culture app that uses facial recognition technology to match your selfie with a famous portrait. The portraits are pulled from a database of celebrated works collected from over 1000 museums worldwide, so the chances of a half-decent match should be fairly high, shouldn’t they?
Well now the results are in, and it’s fair to say that they are mixed, at best. This seems to be part of the appeal, however, as people have begun posting their hilarious (mis)matches online and they are proving to be wildly popular. You can see why!
Sadly, the feature isn’t yet available outside of the U.S. so if you’re not stateside and would like to give it a try, you’ll have to be a little patient. In the meantime however, you can scroll down below and check out what others have been compared with. Don’t forget to vote for your favourite!
A unique effort is underway in Georgia to safeguard elections by taking voting machines back to the future.
“The most secure elections in the world are conducted with a piece of paper and a pencil,” said Georgia State Rep. Scot Turner. “It allows you to continue into the future to verify the result.”
Turner has proposed a bill that would retire Georgia’s electronic touch-screen voting machines and switch to paper ballots that voters would fill out and that would then be counted by optical scan machines. The technology has been in use for decades to score standardized tests for grade-school students.
“You can try and hack these machines all day long,” Turner said. “But that piece of paper that you can touch and feel and look at is going to give the voter the confidence that the election is actually being recorded the way it should have been.”
But Georgia’s top election official, Secretary of State Brian Kemp, also a Republican, said the electronic voting machines currently in use in Georgia are accurate and efficient and replacing them with paper would be a step backward.
“The fraud we see in Georgia is with paper ballots,” Kemp said. “So, I would be very careful going back to the old days of the hanging chad.”
Hanging chad is a reference to incompletely punched card ballots in Florida that put the outcome of the 2000 presidential race in limbo for 36 days. The delay prompted calls nationwide for upgrades in voting technology.
Georgia went to direct-recording electronic voting machines (DREs). Voters select candidates on a touch-screen computer, which records their choices on an electronic ballot.
Georgia is one of five states still using DREs statewide without a physical paper trail backup. A sixth state, Nevada, uses DREs with a paper trail statewide.
The rest of the nation uses a patchwork of voting systems that vary from state to state and, often, county to county.
“I don’t know that there needs to be one specific way to cast a ballot and record a vote, but there are a number of best practices,” said Jeh Johnson, who served as secretary of Homeland Security during the Obama administration.
Johnson said what’s crucial is redundancy — having a backup system for recounting votes if there’s a technical glitch or deliberate meddling.
“The cyber threat to our country is going to get worse before it gets better,” Johnson said. “Bad cyber actors — whether they’re nation states, cyber criminals, hacktivists, those who engage in ransomware — are increasingly aggressive, tenacious and ingenious.”
Last year, DHS declared America’s election systems “critical infrastructure” — underscoring the importance of protecting how the nation conducts democracy. Solutions are likely to vary from region to region, just as voting technology varies. And experts say that diversity is part of the protection.
Fox News producer David Lewkowict contributed to this report.