A Case Study in Security by Design & the Cost of Failure

Tina Simpson, JD, MSPH, Principal

Two weeks ago, the Washington Post broke a story on former Twitter executive and head of security Peiter Zatko blowing the whistle on Twitter’s privacy and security practices.

Zatko alleges the company is “reckless and negligent” in its cybersecurity practices and breached the company’s 2011 Consent Order with the Federal Trade Commission (FTC). A review of the (redacted) complaint Zatko filed with the U.S. Securities and Exchange Commission (SEC), which the Washington Post has since made available on its website, makes for a gripping read.

ABSORBING THE GRIEVANCE.

There is a lot to digest in the 84-page complaint. But while most press attention has focused on how this impacts the company’s current dispute with Elon Musk as it relates to (among other things) bot accounts and national security, I want to focus on something a little more prosaic. That is this: design matters. If you build it, they will come – but the usability, effectiveness, and integrity of that product, platform, or process will remain fundamentally and pervasively shaped by its original foundations, purposes, and principles. If security matters, if privacy matters (and they do), those controls, pathways, and habits must be integrated upfront into the product’s lifecycle and the organization’s culture. For many startups or organizations launching a new product or service line, there is so much attention on speed, financing, and direct competition that the internal work (which includes those operational controls) is shelved for later – for when we are no longer “building the plane while it’s in the air.”

I get it. I’ve been there. Without a contingency plan, there’s nothing but chaos when procedures go haywire and no way to put out the fire. While you can’t defer market demands or timelines, you still must build out the infrastructure – or set yourself up for failure.

Because I also know this: going back to correct and add on necessary (regulatorily required) components requires a significant investment of time, materials, and personal capital. Too often – whether because the team is on the back foot, stuck in a perpetually reactive, firefighting mode, or because management is not incentivized to enforce or investigate deficiencies (both cited as root causes in Dr. Zatko’s allegations) – those investments are not made. Even where there is the best intention to correct course, to integrate adequate cybersecurity practices, and to align privacy controls with public terms of service (and where there is a straightforward legal requirement to do so, as evidenced by Twitter’s FTC Consent Order), when those intentions come up against the daily demands and priorities of an existing system, the goal of “one day we will get there” is a dream repeatedly deferred.

BROKEN INFRASTRUCTURE LEADS TO UNWANTED INCIDENTS.

To illustrate this, let’s turn to the crux of Dr. Zatko’s complaint. In 2010, the FTC launched an investigation into Twitter’s privacy and security practices following reported security breaches that compromised users’ private information, including, on two separate occasions, the hijacking of the accounts of various national leaders – among them then-President-Elect Barack Obama. The FTC concluded that the company’s failure to institute basic security controls conflicted with its public statements regarding its privacy and security practices. Among those (basic) security failures were the company’s failure to restrict access to administrative platform controls (“God mode”) and its lack of segregation between the live platform and the software development and testing space.

At the time, “almost all employees” had administrative controls. This is a problem for…obvious reasons. It increases the “attack surface” – the opportunity for external parties to infiltrate the platform and compromise Twitter operations, systems, and user information (as happened in 2009 and 2020). Twitter and the FTC entered a consent order (like a settlement agreement) in 2011, under which Twitter agreed to “establish and implement, and thereafter maintain a comprehensive information security program reasonably designed to protect the security, privacy, confidentiality, and integrity of nonpublic consumer information,” including mechanisms to identify and inventory cyber threats. (The 2011 order itself carried no fine, but violating it did: in 2022, Twitter paid a $150 million penalty for misusing user data in breach of the order.)
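For readers who want to see what the alternative looks like, here is a minimal sketch in Python of the “least privilege” idea – access denied by default, with privileged actions granted only to specific roles. The roles, action names, and checks are hypothetical illustrations, not Twitter’s actual access model:

```python
from enum import Enum, auto

class Role(Enum):
    ENGINEER = auto()
    SUPPORT = auto()
    ADMIN = auto()  # deliberately scarce: fewer admins means a smaller attack surface

# Hypothetical mapping of privileged actions to the roles allowed to perform them.
PERMISSIONS = {
    "read_public_profile": {Role.ENGINEER, Role.SUPPORT, Role.ADMIN},
    "reset_user_password": {Role.SUPPORT, Role.ADMIN},
    "impersonate_account": {Role.ADMIN},  # the "God mode" class of action
}

def authorize(role: Role, action: str) -> None:
    """Deny by default: raise unless the role is explicitly granted the action."""
    if role not in PERMISSIONS.get(action, set()):
        raise PermissionError(f"{role.name} may not perform {action!r}")

authorize(Role.SUPPORT, "reset_user_password")      # permitted
try:
    authorize(Role.SUPPORT, "impersonate_account")  # denied: requires ADMIN
except PermissionError as err:
    print(err)  # -> SUPPORT may not perform 'impersonate_account'
```

The detail that matters is the default: nobody can do anything unless a role has been explicitly granted it – the inverse of “almost all employees” holding administrative controls.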

Fast forward nearly ten years, and Twitter experienced a second “hacking,” this time by a group of teenagers. Again, high-profile accounts were hijacked, amounting to a global security incident and forcing Twitter to shut down system access for days. Again, the root cause of the breach: too many people had too much unrestricted, unsegregated access across the platform. Indeed, over a thousand employees had administrative-level control, and, per Zatko’s allegation, “about half” of all employees had access to “sensitive live production systems and user data.” That is how a couple of teenagers “stress testing” a system – by calling Twitter employees while pretending to be IT support – becomes a global security incident. After this, Twitter founder and then-CEO Jack Dorsey recruited Dr. Zatko to demonstrate that Twitter was serious (this time, for realsies) about security.

DOES TWITTER NEED TO INVEST IN CHANGE?

There is (of course) a lot more in the complaint to the SEC. This includes allegations that Twitter misled investigators or committed fraud in its annual reports under the Consent Order, and that management withheld information from the Twitter Board or intentionally misled the Board regarding progress on its information security program. But I want to zero in on the central architectural issue: Twitter’s failure to restrict access, segregate controls, and separate the live platform from the engineering space (consistent with Software Development Life Cycle (SDLC) principles and practices).
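To make that segregation concrete, here is a minimal sketch (hypothetical hostnames and environment variable – not Twitter’s actual architecture) of code that keeps development and testing structurally walled off from the live platform:

```python
import os

# Hypothetical hostnames: real user data lives only behind the production host,
# and production credentials exist only inside the production deployment system.
PROD_DB = "users.prod.internal"
DEV_DB = "fixtures.dev.internal"  # synthetic data only, never real user records

def database_host() -> str:
    """Select a database by deployment environment, defaulting to dev."""
    env = os.environ.get("APP_ENV", "dev")
    return PROD_DB if env == "prod" else DEV_DB

def require_non_production() -> None:
    """Guard for tests and migrations: refuse to touch the live platform."""
    if os.environ.get("APP_ENV", "dev") == "prod":
        raise RuntimeError("engineering tasks must not run against production")
```

The design point is that the safe path is structural rather than voluntary: in a properly segregated SDLC, engineering work defaults to synthetic data, and nothing a developer runs from their own machine can reach live user records.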

This is basic; this is foundational. It is also pretty telling that this one thing wasn’t done. Of course, I don’t know why this (substantive) segregation has not occurred. But I suspect that it is at least partly because that is just not how the platform works. And even if the software could accommodate that level of segregation, the culture and environment are probably ones where that just isn’t how things are done. Changing it would require…well, change: an investment of capital and time, and a shift in focus and attention to something that costs money rather than makes money.

And that brings me back to the beginning: design matters. Dr. Zatko’s allegations illustrate this principle. While the complaint had my jaw dropping (surely it couldn’t be this bad?!), it also felt familiar. I had seen this show before – it was Silicon Valley. If you haven’t seen the HBO series, I’ll try not to spoil it for you (but what are you waiting for?). The premise is a start-up tech company, seeking to build a better internet, that is always on the brink of paradigm-shifting success or abject failure and that (usually) pulls through by some deus ex machina or brilliant repurposing of existing software applications (we call it a #pivot). They are scrappy, they are frenetic, they are changing the world (“to make the world a better place”) – but they are also the victims of their own design, and they fail to anticipate how what they built will evolve when it moves from a split-level house in the Valley to the multi-national leagues.

The bottom line is this: build your product, your company, and your culture not only to meet the minimum viable requirements – anticipate, and build in, the security, privacy, and compliance components that you will need. Otherwise, you too may find yourself facing some awkward questioning from regulators (suuuuper awkward given the conclusion of another Consent Order regarding the inadvertent use of private information for marketing purposes), missing out on the odd $44 billion acquisition, or just generally having to forgo #Hawaii.

ABOUT THE AUTHOR

Tina Simpson, JD, MSPH

Tina started her legal career as an Assistant Attorney General for the North Carolina Department of Justice, where she represented various state organizations, such as the NC Division of Medicaid and the Office of the State Treasurer, in administrative rule-making, board management, and public procurement. After eight years, Tina pursued her Master of Science in Public Health at the UNC Gillings School of Global Public Health.