Germany tried to make its laws against child pornography stricter and it backfired spectacularly. Now, lawyers and judges are desperately trying not to enforce these laws as the government scrambles to fix them.
Once again, I’m trying to get back on track with the show. My goal is to establish a rhythm which will allow me to release a show every Wednesday.
If you want to know what happened in my life to get me off track for the podcast and some other things this year, I wrote a blog post where I talk about that. We also had a pretty interesting discussion connected to this on the forum.
How Germany’s Stricter Child Porn Laws Backfired
I have often said that the biggest threat to our society is people who mean well, but have no idea what they are doing. Nothing explains what I mean by this better than the current legal situation in Germany when it comes to child pornography – or “child sexual abuse material” (CSAM) as it is now often called.
In 2021, the German government changed the criminal code to provide for much stricter prosecution of people caught in possession of child porn material DE. This was done despite many objections by lawyers, judges and other legal experts, who predicted that many innocent people would get caught up in these laws and would probably go to jail and have their lives ruined.
Under the new law, §184b Strafgesetzbuch (StGB), possession of child pornography is always a crime, no matter how the material came to be in one’s possession. The minimum penalty is one year in jail.
It took a few years, but now politicians (including from the parties that passed the law in 2021) have realised that these stricter rules were a very bad idea. They are now working to fix the situation DE. Why that is taking so much time and why we cannot simply revert to the previous version of the law is beyond me, but then, I’m far from being a legal expert.
This 180° turnabout is probably due to highly publicised cases like that of a school teacher who wanted to help a student and is now facing jail time and the very real possibility of never being able to work as a teacher again DE. The teacher had learned that students in her class were circulating a pornographic video a female student had shot of herself. The teacher compelled a student to send her the video so she could show it to the girl’s parents in an effort to get the situation under control. Thus the teacher fell afoul of the law and was investigated and indicted when the police got involved – even though she admitted everything to everyone from the start and clearly meant no harm.
Even the district attorney and the judge in the case expressed regret at having to prosecute it, but said the law left them no choice. In similar cases, both state prosecutors and judges have done everything in their power to delay proceedings DE to give the government time to fix its mess.
(Under German law, all of these cases would have to be dismissed if the law is changed before a judgement has been rendered by a court.)
In the next episode of the podcast, I will talk about the European Union’s “Chat Control” law, which will make all of this much worse.
I value your input greatly. If you have any opinions or remarks on the things discussed in this episode, please add a comment at the bottom of this page. You can also use one of the other ways to contact me about this, or any previous episode. Please also write me if you have ideas for things I should cover.
I have to say, your assessment of the scientific process is incorrect. We are not going out to “prove” a hypothesis. See it more as an evidence gathering operation. We are supposed to be weighing information gained in an experiment in the context of a hypothesis. Normally your null hypothesis is no difference between groups (usually control and treatment). When we get a significant difference, there is evidence to reject the null hypothesis, not that our proposed hypothesis is true. The null hypothesis could still be correct, but in this instance, given the way we have approached the test, we have confidence to reject it. When we continually test a system over and over, and we gather enough evidence against the null hypothesis, then we can conclude in fact that our proposed hypothesis is true – well, as true as anything can be in science. I view the theory of evolution in this light. While still a theory, enough evidence has been presented that I can confidently reject the null hypothesis (that evolution has not occurred or is not occurring). Now, setting type 1 (alpha) and type 2 (beta) error levels is another big problem in science that needs to be discussed further.
Science is ultimately about prediction: predicting the outcome of an experiment, predicting the future, if you will. I predict the sun sets today, and rises again tomorrow, and I make that prediction based on the scientific data. You do science by observing something and figuring out whether this observation allows you to predict something you weren’t able to predict before (the something is usually the result of some experiment).
The very idea of a null hypothesis, and the fact that this idea is central to the scientific method as we know and use it, stem from Occam’s razor: our default suggestion is that the thing we are observing is just a fluctuation, that it happened by chance, and is not evidence of anything important (that is, it can’t be used to predict anything). That default suggestion is called the null hypothesis.
You then construct an experiment (usually, a complicated series of experiments) to mathematically show (this usually involves statistical methods so hard-ass that you need a colleague to explain to you how to even use them, let alone how they actually work “under the hood”) that the data you observed would be highly unlikely if your null hypothesis were true. Then you publish a paper making all sorts of (not entirely unfounded) claims based on that fact.
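To make the “highly unlikely under the null hypothesis” idea concrete, here is a minimal sketch of one such test: a permutation test on two made-up groups of measurements. The data, group sizes and effect size are all hypothetical and chosen purely for illustration; the point is only the mechanics of computing a p-value against the null hypothesis of “no difference between groups”.

```python
import random
import statistics

random.seed(42)

# Hypothetical data: a control group and a treatment group.
# The "true" effect of +1.5 is baked in for the sake of the example.
control = [random.gauss(10.0, 2.0) for _ in range(50)]
treatment = [random.gauss(11.5, 2.0) for _ in range(50)]

observed = statistics.mean(treatment) - statistics.mean(control)

# Permutation test: if the null hypothesis is true, the group labels
# carry no information, so we shuffle them many times and count how
# often chance alone produces a difference at least as large as the
# one we observed.
pooled = control + treatment
n_perm = 5000
count = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[50:]) - statistics.mean(pooled[:50])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_perm
print(f"observed difference: {observed:.2f}, p-value: {p_value:.4f}")
```

A small p-value means data this extreme would rarely arise by fluctuation alone – grounds to reject the null hypothesis, not proof that the alternative is true, exactly as the listener’s mail describes.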
Of course, your conclusion may be wrong for a number of reasons, including the fact that “significantly less likely” in the previous paragraph is actually (almost) never “exactly zero chance” due to how stats and math work. And your error, if indeed wrong you are, can be one of two kinds:
- type 1 error (aka 𝛼-error, aka false positive) is where you end up concluding that your null hypothesis was wrong while it actually is true; in fact, the thing you observed was a meaningless fluctuation, but now you’re trying to make predictions based on it; you’ll fail in the long run;
- type 2 error (aka β-error, aka false negative) is where you end up concluding that your null hypothesis was true, while it actually is false; in fact, the thing you observed was important, but you failed to mathematically prove it, and you will not make the (valid and useful) predictions you could have made.
The two types of error lead to different outcomes, and may have very different costs in the particular field you’re exploring; consider treating patients with a non-working (but harmless) placebo vs. erroneously discarding a potent cancer treatment that several billion dollars have already been invested in; then consider the possibility of not noticing a potentially lethal side effect in a new vitamin pill that no one would really suffer without. Understandably, when resource-limited (that is, always), a scientist tends to set up the experiments so that the chance of a type 1 error is much less than the chance of a type 2 error (or vice versa, depending on which type is less expensive in outcome terms); that’s what setting (tolerable) error levels essentially is about.
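The trade-off between the two error types can be shown by brute force: simulate many experiments where the null hypothesis really is true and count how often a standard test wrongly rejects it (type 1), then simulate experiments with a real effect and count how often the test fails to reject (type 2). Everything here – the α level, sample size, spread, and effect size – is an assumed toy setup, not a recipe from any particular field.

```python
import random
from statistics import NormalDist, mean

random.seed(0)

ALPHA = 0.05            # tolerated type 1 error rate (assumed)
SIGMA, N = 2.0, 30      # known spread and per-group sample size (assumed)
z_crit = NormalDist().inv_cdf(1 - ALPHA / 2)  # ~1.96 for a two-sided test

def rejects_null(true_effect: float) -> bool:
    """Simulate one two-group experiment; True if it rejects H0."""
    a = [random.gauss(0.0, SIGMA) for _ in range(N)]
    b = [random.gauss(true_effect, SIGMA) for _ in range(N)]
    se = SIGMA * (2 / N) ** 0.5          # standard error of the difference
    z = (mean(b) - mean(a)) / se
    return abs(z) > z_crit

trials = 2000
# Null really true (effect = 0): every rejection is a type 1 error.
type1 = sum(rejects_null(0.0) for _ in range(trials)) / trials
# Null really false (effect = 1.0): every failure to reject is a type 2 error.
type2 = sum(not rejects_null(1.0) for _ in range(trials)) / trials

print(f"type 1 rate ≈ {type1:.3f} (target {ALPHA})")
print(f"type 2 rate ≈ {type2:.3f}, power ≈ {1 - type2:.3f}")
```

The type 1 rate lands near the α you chose, while the type 2 rate depends on sample size and effect size – which is exactly why a resource-limited scientist has to decide in advance which error is the expensive one and budget the experiment accordingly.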
This podcast is provided free of charge and free of obligations under the value-for-value model. However, as a freelance journalist volunteering my time to produce this show, I need your support. If you like my work and want to make sure The Private Citizen keeps going, please consider joining my Patreon.
- Sir Galteran
- Jaroslav Lichtblau
avis, Bennett Piater, Dave, ikn, Jackie Plage, Jonathan M. Hethey, krunkle, Michael Mullan-Jensen, Tobias Weber
Andrew Davidson, astralc, Barry Williams, Cam, Captain Egghead, Dirk Dede, Fadi Mansour, Florian Pigorsch, Joe Poser, MrAmish, RJ Tracey, Robert Forster
D, Jonathan, Juhan Sonin, Kai Siers, RikyM, Steve Hoos, Vlad
Additional Support by
Eric Le Lay
Thanks to Bytemark, who are providing the hosting and bandwidth for this episode’s audio file.
The show’s theme song is Acoustic Routes by Raúl Cabezalí, licensed via Jamendo Music. This episode’s ending song is Cumbia Pirata by Frontera Bugalú, licensed via Epidemic Sound.
Podcast cover art photo by GegenWind.