In this episode, Ubiquiti explains to us how to not run a company these days: Put things in the cloud needlessly, fuck up on security not once, but twice, and then mislead your customers about it.

Today’s episode of The Private Citizen covers a recent investigation by Brian Krebs into a whistleblower claim that the networking hardware manufacturer Ubiquiti was covering up a huge data breach.

But before we get into that, I’d like to share with you this article in Balkan Insight that I am quoted in. It came about because the writer listens to this podcast.


This podcast was recorded with a live audience on my Twitch channel. Details on when future recordings take place can usually be found on my personal website. Recordings of these streams get saved to a YouTube playlist for easy watching on demand after the fact.

Ubiquiti: A Textbook Case on How to Fuck Up Big Time

So what exactly happened at Ubiquiti? First off, a little background on the company and what they do: Ubiquiti is a network device manufacturer that started out selling nice routers and WiFi access points, at relatively low price points, to consumers and small businesses. They have since expanded into the big business sector. Years ago, their stuff was all the rage with geeks looking for powerful, but cheap, home setups. Today, they mostly advertise IoT and smart home features.

I mostly know of them because they’ve come under fire a few times for their practices in using open source code. They’ve been accused of violating the GPL in the past, which in at least one case also made it harder for people to fix a security issue the company had caused with open source code in some of their devices. But the thing they’ve gotten into the news with recently is even dumber.

Mistake #1: Let’s Put It All into The Cloud!

At some point in the past, someone at Ubiquiti (probably management or some idiotic consultant) had the idea to require cloud login for all their devices. Keep in mind that we are talking about routers and similar devices here. So they rolled out a feature that moved the storage of access credentials for these devices from the device itself into cloud storage. This mandatory change was incorporated into firmware updates for all of the company’s devices that were supported at the time.

Let me repeat this: A router manufacturer made it mandatory to log into your router, which sits locally on your network, via a cloud authentication mechanism.
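In concrete terms, the difference looks roughly like this. Ubiquiti’s firmware is closed source, so the endpoints and field names in this sketch are invented; only the shape of the two flows matters:

```python
import requests  # any HTTP client would do


def login_local(device_ip: str, username: str, password: str) -> bool:
    """The classic flow: the router itself checks the credentials.

    Nothing leaves your LAN, so a breach at the vendor is irrelevant
    to whether someone can log into your device.
    """
    resp = requests.post(
        f"https://{device_ip}/api/login",  # hypothetical endpoint
        json={"username": username, "password": password},
        verify=False,  # local devices typically use self-signed certificates
    )
    return resp.status_code == 200


def login_cloud(username: str, password: str) -> str:
    """The mandatory cloud flow: the vendor's servers check the credentials.

    The device only trusts the signed token the cloud hands back. Whoever
    controls (or breaches) the cloud can therefore mint valid logins for
    every device in the field.
    """
    resp = requests.post(
        "https://sso.vendor.example/login",  # hypothetical endpoint
        json={"username": username, "password": password},
    )
    resp.raise_for_status()
    return resp.json()["sso_token"]  # hypothetical field name
```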

That in itself is insane enough for me to never use a product made by that company. And plenty of their customers were upset with this change, too. They never rolled it back, though. Which bit them in the arse when the hackers started knocking on their door.

Mistake #2: Be Clueless about Security

According to a whistleblower who works for the company and who talked to Brian Krebs, hackers got access to Ubiquiti’s servers at AWS.

The attackers had gained administrative access to Ubiquiti’s servers at Amazon’s cloud service, which secures the underlying server hardware and software but requires the cloud tenant (client) to secure access to any data stored there. “They were able to get cryptographic secrets for single sign-on cookies and remote access, full source code control contents, and signing keys exfiltration.”

In other words, Ubiquiti didn’t secure their AWS account properly. And to make matters worse, they didn’t secure the data on the AWS-based servers either.

How did they get into the AWS account, you ask? By hacking the LastPass account of someone who works in IT at Ubiquiti.

The attacker(s) had access to privileged credentials that were previously stored in the LastPass account of a Ubiquiti IT employee, and gained root administrator access to all Ubiquiti AWS accounts, including all S3 data buckets, all application logs, all databases, all user database credentials, and secrets required to forge single sign-on (SSO) cookies.
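Why are “secrets required to forge single sign-on (SSO) cookies” such a disaster? Here’s a minimal sketch, assuming the cookies were HMAC-signed tokens; Ubiquiti hasn’t published its actual scheme, so the token format below is illustrative:

```python
import base64
import hashlib
import hmac
import json
import time

# What the attackers exfiltrated from AWS (value obviously invented here).
STOLEN_SIGNING_KEY = b"signing-key-exfiltrated-from-aws"


def forge_sso_cookie(user_id: str) -> str:
    """Mint a valid-looking session cookie for any user. No password needed."""
    payload = json.dumps({"sub": user_id, "exp": int(time.time()) + 3600}).encode()
    sig = hmac.new(STOLEN_SIGNING_KEY, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())


# The server can only check the signature; a forged cookie is
# indistinguishable from one it issued itself.
print(forge_sso_cookie("any-customer@example.com"))
```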

And with that, because Ubiquiti was so dumb as to require cloud-based authentication through these AWS servers, the hackers could now access their customers’ devices, which by definition are often exposed to the internet and can be found relatively easily in bulk.

Such access could have allowed the intruders to remotely authenticate to countless Ubiquiti cloud-based devices around the world. According to its website, Ubiquiti has shipped more than 85 million devices that play a key role in networking infrastructure in over 200 countries and territories worldwide.

According to the whistleblower, this happened in December 2020.

Ubiquiti’s security team picked up signals in late December 2020 that someone with administrative access had set up several Linux virtual machines that weren’t accounted for. Then they found a backdoor that an intruder had left behind in the system.

When security engineers removed the backdoor account in the first week of January, the intruders responded by sending a message saying they wanted 50 bitcoin (~$2.8 million USD) in exchange for a promise to remain quiet about the breach. The attackers also provided proof they’d stolen Ubiquiti’s source code, and pledged to disclose the location of another backdoor if their ransom demand was met.

Ubiquiti did not engage with the hackers and ultimately the incident response team found the second backdoor the extortionists had left in the system. The company would spend the next few days furiously rotating credentials for all employees, before Ubiquiti started alerting customers about the need to reset their passwords.

Mistake #3: Propagandise Hard

So, they’ve been caught with their pants down not once, but twice. What do they do? Admit to having fucked up and try their best to do right by their customers, who are now in the firing line? Of course not! Let’s try and cover it all up, like any good corporate sleazeball would.

Instead of asking customers to change their passwords when they next log on – as the company did on Jan. 11 – Ubiquiti should have immediately invalidated all of its customer’s credentials and forced a reset on all accounts, mainly because the intruders already had credentials needed to remotely access customer IoT systems.
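The difference between the two responses is easy to sketch. This is not Ubiquiti’s code, just the shape of the two options, with a toy in-memory user store standing in for their databases:

```python
import secrets

# Toy user store standing in for Ubiquiti's account databases.
users = {
    "alice": {"password_hash": "x", "sessions": ["tok1", "tok2"], "must_reset": False},
    "bob": {"password_hash": "y", "sessions": ["tok3"], "must_reset": False},
}


def polite_request(store: dict) -> None:
    """What Ubiquiti did on Jan. 11: ask nicely. Stolen passwords and
    already-issued sessions keep working until the user gets around to it."""
    for name in store:
        print(f"Dear {name}, we recommend you change your password.")


def forced_reset(store: dict) -> None:
    """What the whistleblower says should have happened immediately."""
    for name, account in store.items():
        account["sessions"].clear()      # existing SSO cookies stop working
        account["password_hash"] = None  # the old password no longer logs in
        account["must_reset"] = True     # a fresh credential is required
        token = secrets.token_urlsafe(32)
        print(f"Dear {name}, use this link to set a new password: /reset/{token}")
```

And the logging situation, according to the whistleblower, was just as bad: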

“Ubiquiti had negligent logging (no access logging on databases) so it was unable to prove or disprove what they accessed, but the attacker targeted the credentials to the databases, and created Linux instances with networking connectivity to said databases,” the whistleblower wrote in his letter. “Legal overrode the repeated requests to force rotation of all customer credentials, and to revert any device access permission changes within the relevant period.”
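The bitter irony is that “no access logging on databases” is usually a one-call fix. For the S3 buckets the whistleblower mentions, for instance, server access logging looks like this with AWS’s boto3 SDK (the bucket names here are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Write an access log entry for every request against the data bucket
# into a separate audit bucket. With this in place, "we can't tell what
# the attacker accessed" stops being an available excuse.
s3.put_bucket_logging(
    Bucket="customer-data-bucket",  # hypothetical
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "audit-log-bucket",  # hypothetical
            "TargetPrefix": "s3-access/",
        }
    },
)
```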

Here’s the statement Ubiquiti put out originally (before the whistleblower contacted Krebs):

We are not currently aware of evidence of access to any databases that host user data, but we cannot be certain that user data has not been exposed. This data may include your name, email address, and the one-way encrypted password to your account (in technical terms, the passwords are hashed and salted). The data may also include your address and phone number if you have provided that to us.
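For the record, “hashed and salted” means something like the following. Ubiquiti hasn’t disclosed its actual scheme; PBKDF2 is just a common, standard-library example:

```python
import hashlib
import hmac
import os


def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per user, stored next to the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest  # the password itself is never stored


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

One-way means a thief who grabs the database still has to brute-force every password individually. That’s the only silver lining in a statement like the one above.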

Why did they all-but-lie to their customers? The whistleblower says it was to protect their stock price. And it worked.

Ubiquiti’s stock price has grown remarkably since the company’s breach disclosure Jan. 16. After a brief dip following the news, Ubiquiti’s shares have surged from $243 on Jan. 13 to $370 as of today. By market close Tuesday, UI had slipped to $349.

Until Krebs published the whistleblower’s story.

Update, Apr. 1: Ubiquiti’s stock opened down almost 15 percent Wednesday; as of Thursday morning it was trading at $298.

They were eventually forced to disclose more details, although they are still channelling Bernays hard.

Nothing has changed with respect to our analysis of customer data and the security of our products since our notification on January 11. In response to this incident, we leveraged external incident response experts to conduct a thorough investigation to ensure the attacker was locked out of our systems. These experts identified no evidence that customer information was accessed, or even targeted. The attacker, who unsuccessfully attempted to extort the company by threatening to release stolen source code and specific IT credentials, never claimed to have accessed any customer information. This, along with other evidence, is why we believe that customer data was not the target of, or otherwise accessed in connection with, the incident.

So there were credentials (and even source code) stolen, but they expect us to believe the user data and access credentials to people’s devices are fine? LOL.

As Brian Krebs points out with unerring journalistic instincts:

Ubiquiti’s statement largely confirmed the reporting here by not disputing any of the facts raised in the piece. And while it may seem that Ubiquiti is quibbling over whether data was in fact stolen, the whistleblower said Ubiquiti can say there is no evidence that customer information was accessed because Ubiquiti failed to keep logs of who was accessing its databases.

So here’s a good trick we have to remember: Just don’t keep any logs. That way you can always claim you had no idea what was happening. That’s some Stalin-level information manipulation there. Masterfully executed, gotta give them that.

What a clusterfuck.

Producer Feedback

Evgeny Kuznetsov writes:

Enjoyed the scientific method episode, great job, Fab! You should have mentioned Popper, I think, so as to specifically point out the crucial difference between science and religion, but other than that – a remarkable job of explaining things in layman’s terms!

I have to explain the scientific method to acolytes (being that I teach at a university, you know), so I can appreciate the work you’ve put into prepping this. I usually concentrate less on peer review (since my audience usually has a good enough idea about that) and more on how the ultimate merit of a theory is its power of prediction, not its ability to explain the already known facts, but that’s my audience; for a general audience yours was a great explanation, truly.

Fadi Mansour writes a rather longer piece of feedback, but it is worth quoting almost in full, because what he says is very intelligent:

I understand your frustration about how “science” (in quotes!) is being presented, which is essentially the exact opposite of what it should mean. And somehow there is a global acceleration of this phenomenon, or at least this is what I’m seeing in my own bubble!

Apologies for not writing lately, but I have to say I’m more enthusiastic about the last couple of episodes than the ones before. Don’t get me wrong, the topics and your take on them were interesting, but personally I tend to be not so much interested in the news itself, but rather in discussing what it means and going back to discussing principles. So these latest two episodes really hit the spot.

First of all, let me comment on the purpose you set out for these kinds of episodes, as ground-work for later topics. I think this is really important, as in my own experience, I feel that discussions tend to fail when the people engaged are coming from totally different “backgrounds” and assumptions. By background, I don’t mean country or religion specifically, but what they essentially hold as “true” (basic assumptions).

One important point I feel needs to be highlighted is the use of the word “fact”, or the phrase “the data says”. In my opinion, this is not accurate or helpful: Data does not say anything by itself, and “facts” don’t actually exist outside of abstract sciences like mathematics. What we actually have, on the other hand, is observation and interpretation. We observe something in the world, we record it to the best of our abilities and then we interpret what that tells us about “reality”.

The reality of the “outside” world is something that is inherently unknown to us, and we need to spend the effort to observe and interpret in order to come to an understanding of the “reality” we live in. So if the goal of science is to understand or define reality, then, for me, it’s a very long way off, and I’m not sure how long it will take humanity to get to “reality”, if we ever do.

In the meantime, from my personal perspective, I would rather avoid overusing the word “fact”, and just keep in mind: observations and interpretations.

To illustrate, let me share an interesting story about what “data says”: In WW II, there was a need to fortify airplanes. When seeing the airplanes coming back into the hangars riddled with bullets, the initial conclusion was to place more armor on the areas that were hit the most. I.e., this is what the data might have “told” somebody.

But a mathematician called Abraham Wald had a different idea: He claimed it was more important to place armor on the areas of the plane without combat damage (e.g., bullet holes) than to place armor on the damaged areas. Any combat damage on returning planes, Wald contended, represented areas of the plane that could withstand damage, since the plane had returned to base. Wald reasoned that those planes that were actually hit in the undamaged areas he observed would not have been able to return. Hence, those undamaged areas constituted key areas to protect. A plane damaged in said areas would not have survived and thus would not have even been observed in the sample. Therefore, it would be logical to place armor around the cockpit and engines, areas observed as sustaining less damage than a bullet-riddled fuselage.

Speaking about science and data and “trusting the science”, I have to point out something: While it’s important to have the details of scientific findings available, so that someone can check them, it would be difficult for anybody to grasp all of the details of every scientific finding. In the end, one has to rely on “trusting” someone, without being able to verify everything by oneself.

I will now switch to a point that came up in feedback from superuser, about executive decision making. I understand that in some cases, decisions need to be made without complete knowledge. And this should be normal, as we will never have complete knowledge. But whether out of laziness or malice, it is more convenient to paint decisions as being made based on irrefutable facts than to admit the inherent cost-benefit calculation being made at every decision point.

So, a more understandable way of making decisions is laying bare this calculation and letting people critique the inherent value assumptions in these cost-benefit calculations. But if the goal is to achieve a pre-set conclusion, then a “fact” is more convenient!

And finally: a small point about pronunciation. In the episode, you tried to pronounce Ibn Al-Haitham, and actually your first attempt was the best, as it should be pronounced with a soft ‘th’ – IPA: [θ].
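Before we move on: To make Fadi’s Abraham Wald story concrete, here’s a toy simulation of the survivorship bias he describes (all numbers invented). Hits land uniformly over the plane, but hits to the engine or cockpit usually bring it down, so the sample in the hangar systematically under-represents exactly those areas.

```python
import random

AREAS = ["fuselage", "wings", "engine", "cockpit"]
# Probability that a hit to this area downs the plane (invented numbers).
P_LOSS = {"fuselage": 0.05, "wings": 0.05, "engine": 0.7, "cockpit": 0.7}

random.seed(1)
observed_hits = {area: 0 for area in AREAS}
for _ in range(10_000):
    hit = random.choice(AREAS)         # every area is hit equally often
    if random.random() > P_LOSS[hit]:  # the plane survives and returns
        observed_hits[hit] += 1

# The hangar data shows far fewer engine and cockpit hits, precisely
# because planes hit there rarely made it back.
print(observed_hits)
```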

The last two episodes (63 & 64) seem to have generated a lot of interest. Here’s another perspective from Christoph Martin:

I really enjoy your shows (started listening with LO) and thought the topics of the last episodes were a good chance to give you some feedback. When I read the titles of the last two episodes (The Scientific Method, The Problem with Facts), I was kinda skeptical, but listening to them got me thinking again about how people outside of academia perceive science.

There are a few points I could rage about in this context, e.g. how we cripple science with temporary contracts, the peer-review process, and academic publishing (especially the causa Elsevier). But I don’t want to waste your time with that, and most listeners wouldn’t benefit from it. Hence, here are my thoughts on what I think many people not working in academia get wrong (this is by no means intended to sound arrogant!):

Science is hard, and conducting reliable and reproducible experiments in the social sciences is especially hard. I come from a technical field and often take it for granted that I have plenty of data to work with. But in the social sciences, collecting data is often very labor-intensive and error-prone. Think about a simple survey: First you have to design the questions in a way that doesn’t bias the answers in a specific direction, and then you have to find actual people who are willing to spend (or, from their perspective: waste) their time on your survey. Hence, the data used in studies is often severely restricted in size and limited to a certain population (many studies survey students, since they are easily accessible, have some time, and can be lured with small monetary rewards or ECTS credits).

Another important aspect which makes the social sciences difficult: It is hard to perform good experiments, especially on the macro scale. There is only one society and one global market. This and other effects have led to the so-called replication crisis. What is especially concerning in the context of “The Problem with Facts”: Questions that are relevant to politicians, like, for example, the effect of universal basic income or the impact of raising interest rates, are hard for scientists to answer in an experimental way.

Another problem is that the methods we use often cannot answer the actual question we are interested in – especially the infamous hypothesis testing. This applies to non-Bayesian, or “classical”, hypothesis tests. Of course, statisticians are aware of these and other limitations (e.g. p-value hacking), but there is a huge amount of inertia in academia. Roughly speaking, hypothesis tests (e.g. a t-test) are used to test whether there is some effect. Usually, when you read that something is “statistically significant”, some hypothesis test is involved. These methods are powerful, but they come with an inherent flaw: They measure the probability of the observed data given the null hypothesis (P(data|H0)). But this often gets confused with the probability of the hypothesis given the observed data (P(H0|data)). Hence, scientists cannot “accept” a hypothesis with this method; we can merely reject the null hypothesis.

Non-academic outlets, newspapers etc. often report this as “scientists have shown that xyz is true”, which is not correct. Also, the notion of “statistical significance” requires a threshold at which the observed data is so unlikely that we reject the null – this is called the significance level. The actual value of this level depends on the field of research. For example, in psychology 0.05 is often chosen. Roughly speaking, this is the probability that the null hypothesis is rejected even though it is true.

From a frequentist perspective this means: If you read 100 studies in which the null hypothesis is actually true, and the experimental setup and everything else is perfectly fine, 5% of them will come to wrong conclusions solely based on the way the test works. Another example of why this is a problem: If your null hypothesis is “there is no effect of a drug/intervention etc.”, as is often the case, you only need to collect enough data and at some point your results will become “statistically significant” – either due to a very small (but for practical purposes irrelevant) effect or due to randomness.
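Christoph’s 5% figure is easy to check empirically. Here’s a quick simulation (assuming numpy and scipy are installed): run thousands of t-tests on two groups drawn from the same distribution, so the null hypothesis is true by construction, and count how often the result comes out “significant” anyway.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 10_000
false_positives = 0

for _ in range(n_studies):
    # Both groups come from the SAME distribution: there is no real effect.
    a = rng.normal(loc=0, scale=1, size=30)
    b = rng.normal(loc=0, scale=1, size=30)
    _, p_value = stats.ttest_ind(a, b)
    if p_value < 0.05:  # "statistically significant" at the 5% level
        false_positives += 1

# Prints roughly 0.05: about one in twenty true-null studies
# "finds" an effect purely by chance.
print(false_positives / n_studies)
```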

Butterbeans chimes in again from Tennessee after a bit of a break in feedback. And I will even include it in the show, even though he double-spaces his periods:

Long time no talk. Been very busy with work here and unable to send any feedback for the last several shows. However, you’ve woken me from my slumber with a throwaway line from your last show about the scientific method (which I enjoyed very much): “LibreOffice is better than MS Office” (paraphrased).

No, no it’s not. LibreOffice sucks compared to MS Office. Has it improved dramatically over the years? Yes. Is it a functional office software suite? Yes. Does it beat MS Office for usability and functionality? Hell no. Just to set the record straight, I feel like I’ve given LibreOffice a fighting chance. The laptop I’m typing this email to you on runs Ubuntu 20.04; my server runs FreeBSD; my gaming PC runs Win10. I use this laptop the most out of my personal computers and I’ve tried to utilize LibreOffice when I need to be productive, instead of switching over to my work laptop or gaming PC (which also has Office installed). I’ll break it down into a few bullets:

  • MS Office has many more micro productivity improvements than LibreOffice. For heavy Office users (I use Excel more than any other Office program), MS Office’s micro productivity improvements like smarter navigation with arrow keys, autofilling of cells, and data management through tools like pivot tables are vastly superior to LibreOffice. When you use office software a lot, the little things start to make a big difference!
  • Exchange’s calendar is a killer app unto itself. I know Exchange is a poorly coded security risk inside an organization (loved that podcast, btw), but it’s also a productivity machine. Our office has 10 employees and we’re constantly travelling, and scheduling with Exchange is so easy. It works so well. When everything is clicking, there’s nothing that comes close.
  • Teams is getting better and Zoom should be scared. I still think Zoom is a better service overall, but Teams now comes bundled with Office and it’s good enough. It’s only a matter of time before IT depts start looking at how expensive Zoom is and deciding it’s not worth paying for. Hell, if MS buys Discord, they’ll probably integrate some of its technology into Teams as well. Who knows?

Regardless, fanboy slap fighting over office software is probably the dumbest thing to give feedback on, but I’ve got to step up for Office. I’ve basically built my career in Excel and I’ll defend it fervently.

Keep up the great work, my brother. I always enjoy hearing your thoughtful perspective. Please continue doing shows on broad topics like socialism, “cyber war” and the scientific method. If you ever decide to venture down the history route (perhaps you’ve thought about it and decided against it), I’d love to hear that too. You talk often about Bismarck, why not do a show or two about him and his politics?

If you have any thoughts on the things discussed in this or previous episodes, please feel free to contact me. In addition to the information listed there, we also have an experimental Matrix room for feedback. Try it out if you have an account on a Matrix server. Any Matrix server will do.

Toss a Coin to Your Podcaster

I am a freelance journalist and writer, volunteering my free time because I love digging into stories and because I love podcasting. If you want to help keep The Private Citizen on the air, consider becoming one of my Patreon supporters.

You can also support the show by sending money via PayPal, if you prefer.

This is entirely optional. This show operates under the value-for-value model, meaning I want you to give back only what you feel this show is worth to you. If that comes down to nothing, that’s OK with me. But if you help out, it’s more likely that I’ll be able to keep doing this indefinitely.

Thanks and Credits

I like to credit everyone who’s helped with any aspect of this production and thus became a part of the show. This is why I am thankful to the following people, who have supported this episode through Patreon and PayPal and thus keep this show on the air: Georges, Butterbeans, Michael Mullan-Jensen, Jonathan M. Hethey, Niall Donegan, Dave, Steve Hoos, Shelby Cruver, Vlad, Jackie Plage, 1i11g, Philip Klostermann, Jaroslav Lichtblau, Kai Siers, ikn, Michael Small, Fadi Mansour, Dirk Dede, Bennett Piater, Joe Poser, Matt Jelliman, David Potter, Larry Glock, Mika, Martin, Dave Umrysh, tobias, MrAmish, RikyM, drivezero, m0dese7en, avis, Jonathan Edwards, Barry Williams, Sandman616, Neil, Captain Egghead, Rizele, D and Iwan Currie.

Many thanks to my Twitch subscribers: Mike_TheDane, Galteran, m0dese7en_is_unavailable, l_terrestris_jim, Flash_Gordo, centurioapertus, indiegameiacs, Sandman616 and redeemerf.

I am also thankful to Bytemark, who are providing the hosting for this episode’s audio file.

Podcast Music

The show’s theme song is Acoustic Routes by Raúl Cabezalí. It is licensed via Jamendo Music. Other music and some sound effects are licensed via Epidemic Sound. This episode’s ending song is Simpler Life by Bo the Drifter.