But the new disclosure could help bolster Musk’s argument and potentially encourage the court to pay closer attention to the bot issue. Moreover, Musk’s legal team could attempt to seize on other claims in the disclosure unrelated to bots — including allegations that Twitter made misrepresentations to regulators such as the Federal Trade Commission and Securities and Exchange Commission about its privacy and security practices — as additional reasons he should be able to walk away from the deal.
“For years, across many public statements and [SEC] filings, Twitter has made material misrepresentations and omissions … regarding security, privacy and integrity,” Zatko’s disclosure states. “Twitter’s misrepresentations are especially impactful, given that they are directly at issue in Elon Musk’s contemplated takeover of the company.”
Zatko, better known as “Mudge,” is a prominent ethical hacker-turned-cybersecurity executive whose career also included stops at Google and the Department of Defense. He was hired as Twitter’s security lead following a major hack at the company in 2020 and fired in January of this year, a move he claims came after he tried to blow the whistle internally about security deficiencies and alleged possible fraud by the company’s senior leaders.
His disclosure paints a picture of a company rife with security vulnerabilities that threaten user data and the platform’s functionality, and which he says could put US national security at risk. Zatko also alleges that Twitter’s top executives have misled users, regulators and even the company’s own board about the condition of its information security. “Please open an investigation into legal violations by Twitter,” the disclosure states.
A Twitter spokesperson said in a statement to CNN in response to the disclosure that Zatko was fired for “ineffective leadership and poor performance.”
“What we’ve seen so far is a false narrative about Twitter and our privacy and data security practices that is riddled with inconsistencies and inaccuracies and lacks important context,” the spokesperson said. “Mr. Zatko’s allegations and opportunistic timing appear designed to capture attention and inflict harm on Twitter, its customers and its shareholders. Security and privacy have long been company-wide priorities at Twitter and will continue to be.”
Twitter CEO Parag Agrawal on Tuesday sent employees an internal memo, obtained by CNN, vowing to challenge the allegations in the disclosure and seeking to reassure staff, calling the allegations “frustrating and confusing to read.”
On Tuesday, after news of Zatko’s disclosure broke, Musk lawyer Alex Spiro said the billionaire’s legal team had already subpoenaed Zatko in the dispute with Twitter. “We found his exit and that of other key employees curious in light of what we have been finding,” Spiro told CNN.
‘No appetite’ to properly measure bots
In February 2019, Twitter announced it would start using a new metric to quantify the size of its audience when the company reported its financial results each quarter. The company, which had been facing a decline in users for several quarters, said it would shift from disclosing monthly active users — a metric commonly used by social media companies — to reporting monetizable daily active users (mDAU), a measure of the number of real users who could be shown an ad on the platform.
Since making the switch, Twitter has reported that fake and spam accounts make up less than 5% of mDAUs, a figure it has repeated in its fight with Musk and one the billionaire has called into question. (Twitter has acknowledged in SEC filings that the figure relies on significant judgment and may not accurately reflect reality.)
Twitter, Zatko’s disclosure claims, actually considers bots to be part of a category of millions of “non-monetizable” users that it does not report. The 5% bots figure that Twitter shares publicly is essentially an estimate, based on human review, of the number of bots that slip through into the company’s automated count of monetizable daily active users, the disclosure states. So while Twitter’s 5%-of-mDAU bots figure may be useful in indicating to advertisers how many fake accounts might see but be unable to interact with their ads, the disclosure alleges that it does not reflect the full scope of fake and spam accounts on the platform.
The disclosure also points to another tweet in Agrawal’s May thread in which he stated that Twitter is “strongly incentivized to detect and remove as much spam as we possibly can, every single day.” Zatko alleges that, contrary to Agrawal’s statement, the company’s executives were instead incentivized by business pressures and bonus structures to grow mDAU, and in some cases did so at the expense of dedicating resources and attention to addressing the amount of spam on the platform.
Zatko says he began asking about the prevalence of bot accounts on Twitter in early 2021, and was told by Twitter’s head of site integrity that the company didn’t know how many total bots were on its platform. (Twitter told CNN Zatko’s statement lacks necessary context.)
Zatko also alleges that he came away from conversations with the integrity team with the understanding that the company “had no appetite to properly measure the prevalence of bots,” in part because if the true number became public, it could harm the company’s value and image.
Twitter’s systems to measure and remove bots also consist of “mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed and reactive human teams,” the disclosure states.
“The executive team, the board, the shareholders and the users all deserve an honest answer as to what it is that they are consuming as far as data and information and content on the platform,” he told CNN in an interview earlier this month. “Your whole perception of the world is made from what you are seeing, reading and consuming online. And if you don’t have an understanding of what’s real, what’s not … yeah, I think this is pretty scary.”
Twitter says that it allows bots on its platform, but its rules prohibit those that engage in spam or platform manipulation. But, as with all social media platforms’ rules, the challenge often lies in enforcing such policies.
The company says it regularly challenges, suspends and removes accounts engaged in spam and platform manipulation, typically removing more than one million spam accounts each day. Twitter confirmed that the number of spam accounts as a percentage of mDAU is distinct from the total number of fake and spam accounts on the platform. But the company added that it believes the total number would not be useful: it could include accounts Twitter has already taken action on, and because the company does not believe it could catch all such accounts, the figure would represent only a minimum count.
In the disclosure, Zatko alleges that without more context, it’s hard to fully understand the figures Twitter reports about taking down spam and fake accounts. The disclosure questions whether the number “is a lot or a little, for a platform as vast as Twitter? No one knows because there is no denominator provided for context.”
Twitter did not respond to a request to provide the total number of accounts on the platform, or the average number of accounts added daily, as context around the bot removal figure.
Bots may not be the only issue
Much of the dispute between Twitter and Musk has focused on bots — an issue that legal experts have said may not be material to the deal even if Twitter was found to have misstated the numbers. But following the disclosure, Musk’s legal team could also choose to focus on some of Zatko’s other serious allegations.
For example, Zatko’s disclosure alleges that Twitter’s lax security practices and lack of emergency plans could allow the servers that keep the platform running to go down, potentially permanently: a so-called “Black Swan” event that he claims nearly occurred in the spring of 2021.
“Twitter has consistently misrepresented in SEC filings its capacity to recover from even a brief outage of only a few data centers,” according to the disclosure. The disclosure cites the risk factors listed in the company’s annual report, which states that Twitter has a “disaster recovery program” in case of damage to its data centers. Zatko alleges that the recovery program may not be “functional enough” to prevent a Black Swan event.
Twitter did not respond to specific questions about the risk of data center outages, but said it continuously invests in its teams and technology to ensure the platform’s security. And a source close to the matter told CNN that the platform had systems in place to address privacy, security and health-related risks for years before Zatko joined the company that have continued since his departure.
The disclosure also alleges that Twitter is in violation of a 2011 consent order that resulted from a lawsuit by the Federal Trade Commission, in which the company vowed to clean up its act around security and user data privacy. Zatko alleges that despite its claims to the contrary, Twitter executives are aware that the company has “never been in compliance” with the order.
Twitter said it is in compliance with relevant privacy rules and that it has been transparent with regulators about its efforts to fix any shortcomings in its systems.
The disclosure also claims that some of the shortcomings Zatko identified while leading the company’s security could create issues that would constitute a “material adverse effect,” a legal term for a change in a company’s circumstances that would significantly reduce its value, and the sort of risk that could give Musk greater leverage to get out of the deal.
The disclosure points to a section in Twitter and Musk’s merger agreement in which the company affirmed it does not “infringe, misappropriate or otherwise violate any Intellectual Property Rights of any other Person” in a way that would constitute a material adverse effect. However, the disclosure alleges that Twitter has failed to obtain the appropriate licenses for the data it uses to train its artificial intelligence, which underpins key Twitter features such as the algorithm it relies on to rank what tweets users see.
“Twitter senior leadership have known for years that the company has never held the proper licenses to the data sets and/or software used to build some of the key Machine Learning models used to run the service,” the disclosure states.
The acquisition agreement defines a material adverse effect as a change or event that has or would result in material harm to “the business, financial condition or results of operations of Twitter,” with several exceptions including those caused by economic or political conditions and “acts of God” such as terrorism, cyberattacks or data breaches. It would likely be up to a court to decide exactly what issues would fall under that classification. But the disclosure claims that litigation by any of the owners of the intellectual property used to train Twitter’s AI could result in “massive monetary damages” to Twitter or an injunction that could affect its ability to operate key products, which it alleges could constitute a material adverse effect.
“Unless circumstances have changed since Mudge was fired in January, Twitter’s continued operation of many of its basic products is most likely unlawful,” the disclosure alleges.
Twitter did not respond to questions about the allegation that it does not have the proper intellectual property rights for the data used to train its AI.