Latest Facebook ‘Apology’ Reveals Safety and Security Disarray


Facebook struggled last week. Leaked documents, numerous leaked documents, formed the backbone of a series of reports published in the Wall Street Journal. Together, the stories paint a picture of a company barely in control of its own creation. The revelations run the gamut: Facebook created special rules for VIPs that largely exempted 5.8 million users from moderation, forced troll-farm content on 40 percent of Americans, created conditions toxic to teenage girls, ignored cartels and human traffickers, and even undermined CEO Mark Zuckerberg’s own desire to promote COVID vaccination.

Now Facebook wants you to know that it is sorry and that it is trying to do better.

“In the past, we haven’t addressed safety and security issues early enough in the product development process,” the company said in an unsigned press release today. “Instead, we’ve made improvements in a reactive way in response to specific abuse. But we have fundamentally changed that approach.”

The change, Facebook said, was the integration of safety and security into product development. The press release does not say when the change was made, and a Facebook spokesperson could not confirm for Ars when integrity work became more entrenched in product teams. But the press release says the company’s Facebook Horizon VR efforts, which only entered beta last year, have benefited from this process.

The statement seems to confirm that before Horizon’s development, safety and security were secondary concerns, taken into account only after the features were defined and the code was written. Or perhaps the issues weren’t addressed until users encountered them. Regardless of when it happened, it’s a stunning admission for a multi-billion-dollar business that counts 2 billion people as users.

You missed the memo

Facebook isn’t the first company to take a cavalier approach to security, and it didn’t have to repeat others’ mistakes. In its early days, Facebook needed only to look at one of its major shareholders, Microsoft, which bought a special stake in the startup in 2007.

In the late 1990s and early 2000s, Microsoft had its own security problems, shipping versions of Windows and Internet Information Server riddled with security holes. The company began fixing things after Bill Gates made security its top priority in his 2002 “Trustworthy Computing” memo. One result of that push was the Microsoft Security Development Lifecycle, which calls on managers to “make security everyone’s business.” Microsoft began publishing books on its approach in the mid-2000s, and it’s hard to imagine Facebook’s engineers being unaware of it.

But a security-focused development agenda must have come at a cost Facebook was unwilling to bear: growth. Time and again, the company has faced a choice between solving a safety or security issue and prioritizing growth. It ignored privacy concerns by allowing business partners to access users’ personal data. It killed a plan to use artificial intelligence to fight disinformation on the platform. A few years ago, its focus on groups gave rise to “super-inviters” able to recruit hundreds of people into the “Stop the Steal” group, which ultimately helped foment the Jan. 6 insurrection at the United States Capitol. In each case, the company chose to keep growing first and deal with the consequences later.

“Lots of different teams”

This mindset seems to have been built into the business from the start, when Zuckerberg took an investment from Peter Thiel and copied the “blitzscaling” strategy that Thiel and others used at PayPal.

Today, Facebook is riven by internal strife caused by growth at all costs. The leaks to the WSJ, said Alex Stamos, the company’s former chief security officer, are a result of the frustration safety and security people feel when they are overruled by growth and policy teams. (Policy teams have their own conflicts: the people who decide what flies on Facebook are the same people who speak with politicians and regulators.)

“The big picture is that several vice presidents and mid-level directors invested in and built large quantitative social science teams on the belief that knowing what was wrong would lead to positive change. Those teams ran into the power of the growth and unified policy teams,” Stamos tweeted this week. “It turns out that knowledge isn’t useful when key leaders haven’t changed the way products are measured and employees are paid.”

Even today, there does not seem to be a single person responsible for safety and security at the company. “Our integrity work is made up of many different teams, so it’s hard to say [if there is] a leader, but Guy Rosen is vice president of integrity,” a Facebook spokesperson told Ars. Perhaps it is telling that Rosen does not appear on Facebook’s list of top executives.

So far, Facebook doesn’t seem to have much incentive to change. Its share price has risen more than 50 percent over the past year, and shareholders have little influence given the outsized power of Zuckerberg’s voting shares. Growth at all costs is likely to continue, until, of course, safety and security concerns grow so large that they start to hamper growth and retention. Based on Facebook’s statement today, it’s not clear whether that moment has arrived. If it does, and if Microsoft’s transition is any guide, it will be years before a genuine embrace of safety and security meaningfully affects users.
