{"id":498,"date":"2026-04-28T12:13:32","date_gmt":"2026-04-28T10:13:32","guid":{"rendered":"https:\/\/www.nikostotz.de\/blog\/?p=498"},"modified":"2026-04-28T12:54:47","modified_gmt":"2026-04-28T10:54:47","slug":"ai-will-end-anonymity-on-the-internet","status":"publish","type":"post","link":"https:\/\/www.nikostotz.de\/blog\/ai-will-end-anonymity-on-the-internet\/","title":{"rendered":"AI will end anonymity on the Internet"},"content":{"rendered":"\n<p><strong>tl;dr<\/strong>&nbsp;AI slop is here to stay. We cannot&nbsp;<em>detect<\/em>&nbsp;AI content, thus we must&nbsp;<em>mark<\/em>&nbsp;human content. This requires to identify the human to a varying degree. The fundamental technology is there, and will be integrated in browsers.<\/p>\n\n\n\n<!--more-->\n\n\n\n<h2 class=\"wp-block-heading\">AI slop is here to stay<\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><a href=\"https:\/\/www.nikostotz.de\/blog\/wp-content\/uploads\/2026\/04\/c6-envelope-scaled.jpeg\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"770\" src=\"https:\/\/www.nikostotz.de\/blog\/wp-content\/uploads\/2026\/04\/c6-envelope-1024x770.jpeg\" alt=\"\" class=\"wp-image-499\" srcset=\"https:\/\/www.nikostotz.de\/blog\/wp-content\/uploads\/2026\/04\/c6-envelope-1024x770.jpeg 1024w, https:\/\/www.nikostotz.de\/blog\/wp-content\/uploads\/2026\/04\/c6-envelope-300x226.jpeg 300w, https:\/\/www.nikostotz.de\/blog\/wp-content\/uploads\/2026\/04\/c6-envelope-768x577.jpeg 768w, https:\/\/www.nikostotz.de\/blog\/wp-content\/uploads\/2026\/04\/c6-envelope-1536x1155.jpeg 1536w, https:\/\/www.nikostotz.de\/blog\/wp-content\/uploads\/2026\/04\/c6-envelope-2048x1540.jpeg 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n\n\n\n<p>I tried to search for C6 envelope templates on the web \u2013 hardly a topic to bring forth the next billionaire. Nonetheless, two-thirds of the webpages were clearly AI generated. 
(Current research estimates&nbsp;<a href=\"https:\/\/ai-on-the-internet.github.io\/\">\u2153 of new websites to be AI slop<\/a>.)<\/p>\n\n\n\n<p>My ad-blocker saved me, but seemingly even slop sites for such a benign topic are economically viable. This won\u2019t stop once Claude et al.&nbsp;<a href=\"https:\/\/martinalderson.com\/posts\/no-it-doesnt-cost-anthropic-5k-per-claude-code-user\/#heading-2\">charge profitable fees<\/a>&nbsp;(read: &gt; $1,000 per user per month) for their service. Such a slop site is in it for the long game. The economic incentive doesn\u2019t change if it takes a local model on a tiny machine a week to create the site.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Detecting AI content is impossible<\/h2>\n\n\n\n<p>AI can create content at superhuman speed, overwhelming any manual review system. Therefore, we would need to rely on automated detection systems. The only realistic chance is&nbsp;<a href=\"https:\/\/www.nature.com\/articles\/s41598-026-35203-3\">AI systems that detect AI content<\/a>. We can only lose this arms race: while creating the slop site, the &#8220;author&#8221; can run it against AI detectors and fine-tune until it passes.<\/p>\n\n\n\n<p>Watermarking does work technically for higher-bandwidth content (images, audio, video), but the &#8220;authors&#8221; can easily use custom AI systems that don\u2019t create watermarks.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">We need to recognize genuine human content<\/h2>\n\n\n\n<p>We can\u2019t detect AI content, but we still want to enjoy human-made content. But why?<\/p>\n\n\n\n<p>In the C6 envelope example above, none of the AI slop sites hosted&nbsp;<em>actual<\/em>&nbsp;templates. They contained lots of generic text on the topic, and linked to templates on, or stole templates from, other sites. They made it harder to find what I was actually looking for.<\/p>\n\n\n\n<p>Skimming a site for downloadable templates at least doesn\u2019t take too much mental effort. 
But for a lot of slop sites, we can only determine their uselessness after reading for some time. They consume our mental resources without providing value.<\/p>\n\n\n\n<p>At least such slop sites are only after our money (via ads). Others&nbsp;<em>want<\/em>&nbsp;to misinform us. Recognizing such attempts could very well prove vital for our societies.<\/p>\n\n\n\n<p>Future progress on AI also needs to recognize human-written texts, as LLMs&nbsp;<a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07566-y\">trained on generated output collapse<\/a>.<\/p>\n\n\n\n<p>Current AI crawlers overload lots of websites by bombarding them with thousands of requests. These crawlers ignore the website\u2019s rules (robots.txt) about the parts of the website they should leave alone. With reliable human identification, a website could terminate any connection from non-human counterparts.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">If you can\u2019t detect the enemy, identify the allies<\/h2>\n\n\n\n<p>If we can\u2019t detect the bad apples, we need to identify the good ones. We can\u2019t rely on &#8220;I\u2019m a human, pinky swear&#8221; promises, as any AI can forge such statements. We need a (mostly) tamper-proof system of identification.<\/p>\n\n\n\n<p>Thankfully, we don\u2019t need to invent such a system from scratch: the public key infrastructure maintained and enforced by the&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/CA\/Browser_Forum\">CA\/Browser Forum<\/a>&nbsp;serves a similar purpose. It makes sure that visiting&nbsp;<a href=\"https:\/\/www.wikipedia.org\/\">http<strong>s<\/strong>:\/\/www.wikipedia.org\/<\/a>&nbsp;shows us the contents of Wikipedia, without anybody tampering with them. 
It is fully integrated into the browser; we don\u2019t need to do anything special.<\/p>\n\n\n\n<p>It is a very complex but mostly working system that avoids all-powerful gatekeepers: the &#8220;trust roots&#8221; are called&nbsp;<em>certificate authorities<\/em>, and we can have lots of them; any website can participate; and all browsers support the standard.<\/p>\n\n\n\n<p>https gives us, the users, the trust that we see the genuine website \u2013 but how can the website trust the user? We also have existing technology for this: the&nbsp;<a href=\"https:\/\/www.w3.org\/TR\/webauthn-2\/\">Web Authentication API<\/a>&nbsp;standard defines how a website can trust a browser to identify the user in front of it, e.g. by&nbsp;<a href=\"https:\/\/www.w3.org\/TR\/webauthn-2\/#test-of-user-presence\">user presence<\/a>, meaning pressing a physical button.<\/p>\n\n\n\n<p>These technologies are building blocks for a future standard that reliably identifies humans. To fight AI slop, this standard doesn\u2019t need to be bullet-proof; it must only make AI slop more expensive than&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/Troll_farm\">troll farms<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/Scam_center\">scam centers<\/a>. (Such a standard might even help against the latter, if it included revocation of misused anonymous identities.)<\/p>\n\n\n\n<p>This would not stop humans from&nbsp;<em>using<\/em>&nbsp;AI to create their content, and that\u2019s ok: use cases like translation, grammar fixes, and picture touch-up are legitimate and helpful AI applications. As long as a human is involved, and takes responsibility for the result, this doesn\u2019t add to the tide of AI slop.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">All your identity are belong to us?<\/h2>\n\n\n\n<p>Does this mean every website will know everything about every visitor? Thankfully, no. 
Standards like&nbsp;<a href=\"https:\/\/www.iso.org\/standard\/69084.html\">ISO 18013-5<\/a>&nbsp;(also used in the&nbsp;<a href=\"https:\/\/github.com\/eu-digital-identity-wallet\/av-app-android-wallet-ui\">European age verification app<\/a>) allow for different levels of attestation. Similar technology will be part of every web browser, the same as https is now. The &#8220;trust roots&#8221; will be countries, the same as for official documents like driver\u2019s licenses or passports.<\/p>\n\n\n\n<p>Each website can choose the level of attestation it requires:<\/p>\n\n\n\n<dl class=\"wp-block-simple-definition-list-blocks-list\">\n<dt class=\"wp-block-simple-definition-list-blocks-term\">unknown<\/dt>\n\n\n\n<dd class=\"wp-block-simple-definition-list-blocks-details\">No attestation at all \u2013 same as today\u2019s public websites. This will be the Internet\u2019s&nbsp;<em>4chan<\/em>. It exists, some may visit it for specific use cases, but it will generally be ignored. Search engines will not show content from such sites (at least by default).<\/dd>\n\n\n\n<dt class=\"wp-block-simple-definition-list-blocks-term\">anonymous<\/dt>\n\n\n\n<dd class=\"wp-block-simple-definition-list-blocks-details\">The website asks only that the user&nbsp;<em>is<\/em>&nbsp;a human; it doesn\u2019t care&nbsp;<em>who<\/em>&nbsp;the user is, and never gets this information. This will be the default for most websites, akin to a reliable version of today\u2019s &#8220;I\u2019m not a bot&#8221; checkboxes. The aforementioned standards ensure that only&nbsp;<em>serious<\/em>&nbsp;collusion between browsers, internet providers, website owners, and countries could reveal the actual identity of the visitor.<\/dd>\n\n\n\n<dt class=\"wp-block-simple-definition-list-blocks-term\">pseudonymous<\/dt>\n\n\n\n<dd class=\"wp-block-simple-definition-list-blocks-details\">The website asks for an\u00a0<em>alias<\/em>\u00a0(or <em>username<\/em>) of the human user. 
The\u00a0<em>same<\/em>\u00a0website always gets the same alias for the same user, but a\u00a0<em>different<\/em>\u00a0website will get a different alias.<br \/>This will be for websites like forums that in general don\u2019t care\u00a0<em>who<\/em>\u00a0the user is, but want to prevent the\u00a0<em>same<\/em>\u00a0user from creating multiple accounts. Depending on the implementation, the website together with the originating country might reveal the actual identity of the visitor.<\/dd>\n\n\n\n<dt class=\"wp-block-simple-definition-list-blocks-term\">identified<\/dt>\n\n\n\n<dd class=\"wp-block-simple-definition-list-blocks-details\">The website asks for a proven identity of the human user. This is already used today for websites like banks, company-internal systems, or government agencies. The website owner already knows the user\u2019s identity. The user also&nbsp;<em>expects<\/em>&nbsp;to be identified \u2013 nobody wants anonymous logins to their bank account!<\/dd>\n<\/dl>\n\n\n\n<h2 class=\"wp-block-heading\">The end of free Internet?!<\/h2>\n\n\n\n<p>Lots of people consider anonymous Internet usage the norm. This might not be warranted \u2013 most countries (including countries that&nbsp;<a href=\"https:\/\/www.npr.org\/2026\/03\/25\/nx-s1-5752369\/ice-surveillance-data-brokers-congress-anthropic\">claim to value freedom<\/a>) have the technical ability to identify users via their IP addresses.&nbsp;<em>Actual<\/em>&nbsp;anonymity requires&nbsp;<a href=\"https:\/\/www.torproject.org\/download\/\">serious effort<\/a>&nbsp;and constant vigilance about any potentially identifying information.<\/p>\n\n\n\n<p>I\u2019m not saying I&nbsp;<em>like<\/em>&nbsp;this trajectory. I argue we won\u2019t have a good choice \u2013 either drown in AI slop or give up some level of anonymity. 
Done right, we might gain at least a bit of consolation:&nbsp;<em>actual anonymity<\/em>&nbsp;would still be possible with the right tools;&nbsp;<em>anonymous<\/em>&nbsp;websites could be really anonymous, even towards governments;&nbsp;<em>pseudonymous<\/em>&nbsp;forum access could be sufficient to catch&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/CSAM\">CSAM<\/a>&nbsp;offenders and prevent governments from enforcing more stringent surveillance measures; and&nbsp;<em>identifying<\/em>&nbsp;to banks and government websites could become a lot less of a hassle.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>tl;dr&nbsp;AI slop is here to stay. We cannot&nbsp;detect&nbsp;AI content, thus we must&nbsp;mark&nbsp;human content. This requires identifying the human to a varying degree. The fundamental technology is there, and will be integrated into browsers.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[38,39],"class_list":["post-498","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-ai","tag-anonymity"],"_links":{"self":[{"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/posts\/498","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/comments?post=498"}],"version-history":[{"count":5,"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/posts\/498\/revisions"}],"predecessor-version":[{"id":504,"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/posts\/498\/revisions\/504"}],"wp:attachment":[{"href":"https:\/\/www.nikost
otz.de\/blog\/wp-json\/wp\/v2\/media?parent=498"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/categories?post=498"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/tags?post=498"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}