Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • @bitofhope · 10 points · 9 hours ago

    Sometimes while browsing a website I catch a glimpse of the cute jackal girl and it makes me smile. Anubis isn’t a perfect thing by any means, but it’s what the web deserves for its sins.

    Even some pretty big name sites seem to use it as-is, down to the mascot. You’d think the software is pretty simple to customize into something more corporate and soulless, but I’m happy to see the animal eared cartoon girl on otherwise quite sterile sites.

    • @froztbyte · 6 points · 4 hours ago

      certainly better than seeing the damned cloudflare Click Here To Human box, although I suspect a number of these deployments still don’t sponsor Xe or the project development :/

  • @scruiser · 8 points · 9 hours ago

Should we give up on all altruist causes because the AGI God is nearly here? The answer may surprise you!

tl;dr: actually you shouldn’t give, because the AGI God might not be quite omnipotent and thus would still benefit from your help, and maybe there will be multiple Gods, some used for Good and some for Evil, so your efforts are still needed. Shrimp are getting their eyeballs cut off right now!

    • @o7___o7 · 4 points · 8 hours ago

      Stomatopodcasting

  • @corbin · 11 points · 20 hours ago

Alex O’Connor platformed Sabine on his philosophy podcast. I’m irritated that he is turning into Lex Fridman simply by being completely uncritical. Well, no, wait, he was critical of Bell’s theorem, and even Sabine had to tell him that Bell’s work is mathematically proven. This is what a philosophy degree does to your epistemology, I guess.

    My main sneer here is just some links. See, Mary’s Room is answered by neuroscience; Mary does experience something new when color vision is restored. In particular, check out the testimonials from this 2021 Oregon experiment that restored color vision to some folks born without it. Focusing on physics, I’d like to introduce you all to Richard Behiel, particularly his explanations of electromagnetism and the Anderson-Higgs mechanism; there are deeper explanations for electricity and magnets, my dude. Also, if you haven’t yet, go read Alex’s Wikipedia article, linked at the top of the sneer.

    • @TinyTimmyTokyo · 11 points · 13 hours ago

      In the case of O’Connor and people like him, I think it’s about much more than his philosophy background. He’s a YouTube creator who creates content on a regular schedule and makes a living off it. Once you start doing that, you’re exposed to all the horrible incentives of the YouTube engagement algorithm, which inevitably leads you to start seeking out other controversial YouTubers to platform and become friendly with. It’s an “I’ll scratch your back if you scratch mine” situation dialed up to 11.

      The same thing has happened to Sabine herself. She’s been captured by the algorithm, which has naturally shifted her audience to the right, and now she’s been fully captured by that new audience.

I fully expect Alex O’Connor to remain on this treadmill. <remind me in 12 months>

    • Alex · 4 points · 17 hours ago (edited)

      What getting c*cked by Mikhaila Peterson does to a mfer.

  • @flaviat · 11 points · 21 hours ago

    It’s hard to come up with analogies for AI because it’s so goddamn stupid. It’s like if asbestos was flammable.

    • @fullsquare · 11 points · 16 hours ago

      it’s like leaded gasoline for internet - it makes people stupid and aggressive, kids are hit the worst by it, fallout will be felt for decades, cleanup might be hard to impossible, and ultimately it’s a product of corporate greed. except even leaded gasoline solved some problem

      it’s also like gambling as in hook model. it’s like cocaine in that it has been marketed to managerial class as a status symbol of sorts

    • @HedyL · 7 points · 19 hours ago

Or like the radium craze of the early 20th century (even if radium may have a lot more legitimate use cases than current-day LLMs).

      • @mountainriver · 6 points · 12 hours ago

One of the products was for removing unwanted hair. You irradiated the area and the hair just fell off! How practical!

To be fair to the radium people, I don’t think the correlation between radiation and cancer was established until the aftermath of the bombings of Hiroshima and Nagasaki. Still, one could see hair falling off as a warning sign of sorts.

    • @BlueMonday1984OP · 8 points · 21 hours ago

Someone I know called AI “a non-invasive procedure to lobotomise people” after I mentioned this Pivot to AI, and it’s stuck with me ever since.

  • @scruiser · 14 points · 1 day ago

So this blog post is framed positively towards LLMs and is too generous in accepting many of the claims around them, but even so, the end conclusions are pretty harsh on practical LLM agents: https://utkarshkanwat.com/writing/betting-against-agents/

    Basically, the author has tried extensively, in multiple projects, to make LLM agents work in various useful ways, but in practice:

    The dirty secret of every production agent system is that the AI is doing maybe 30% of the work. The other 70% is tool engineering: designing feedback interfaces, managing context efficiently, handling partial failures, and building recovery mechanisms that the AI can actually understand and use.

The author strips down, simplifies, and sanitizes everything going into the LLMs, then implements both automated checks and human confirmation on everything they put out. At that point it makes you question what value you are even getting out of the LLM. (The real answer, which the author only indirectly acknowledges, is attracting idiotic VC funding and upper-management approval.)

As critical as the author is, they still don’t acknowledge a lot of the bigger problems. The API cost is a major expense and design constraint on the LLM agents they have made, but the author doesn’t acknowledge that prices are likely to rise dramatically once VC subsidization runs out.
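    For the curious, the guardrail setup described above can be sketched as follows. This is a hypothetical illustration only, not code from the linked post: the `call_llm` stub, the regex checks, and the `confirm` callback are all my own stand-ins.

    ```python
    import re

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model call; returns a canned reply for illustration.
        return "archive logs older than 30 days"

    def sanitize(user_input: str) -> str:
        # "Strip down and sanitize everything going in": drop control
        # characters and collapse whitespace before prompting.
        cleaned = re.sub(r"[\x00-\x1f]+", " ", user_input)
        return " ".join(cleaned.split())

    def automated_checks(output: str) -> bool:
        # Automated check on everything coming out: reject outputs that
        # contain obviously dangerous operations.
        return not re.search(r"\b(DROP TABLE|rm -rf)\b", output, re.IGNORECASE)

    def run_agent_step(user_input: str, confirm) -> str | None:
        """Sanitize input, call the model, then gate the output on both
        the automated checks and a human confirmation callback."""
        output = call_llm(sanitize(user_input))
        if not automated_checks(output):
            return None
        return output if confirm(output) else None

    # A human reviewer who approves everything, for demonstration.
    result = run_agent_step("clean up\x07 old logs", confirm=lambda s: True)
    ```

    Note how little of this sketch is the model itself: the sanitizing, checking, and confirmation scaffolding is most of the code, which is exactly the 30/70 split the quoted passage is complaining about.
    
    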

  • @bitofhope · 13 points · 2 days ago

    I tried to see if anyone sells chocolate coins modeled after historical gold coinage and the search engine wanted to be, uh, helpful:

Highlighted portion by Google, not me. Funny how almost everything in the answer is mostly correct, though it’s bizarre to explain this to someone searching with these keywords, as if I don’t already know what florins and chocolate coins are if I’m looking for chocolate florins specifically. The only part blatantly wrong is the highlighted lede!

  • @Soyweiser · 11 points · 2 days ago

    “An AI? But using that you could find a cure for cancer!”

“But I don’t want to make a cure for cancer, I want to generate PowerPoint presentations. Look, it just made this quarterly_report_june_july_jan.wpd file for me.”

    • @Soyweiser · 7 points · 2 days ago

(Yes, I’m using the sneer thread to write down bad jokes.)

      • @Seminar2250 · 6 points · 2 days ago

        i thought it was pretty funny <3

  • @froztbyte · 8 points · 2 days ago

    eyeballing the orange site top-frontpage, and:

    shit should come with a strain warning

    • @FRACTRANS · 11 points · 2 days ago (edited)

      AI is important. But we don’t yet fully know why.

      k

    • @FRACTRANS · 6 points · 2 days ago

      Cal Newport jumpscare (some productivity “influencer” who anxious teen me read)

  • @blakestaceyA · 12 points · 2 days ago

    I found this because Greg Egan shared it elsewhere on fedi:

    I am now being required by my day job to use an AI assistant to write code. I have also been informed that my usage of AI assistants will be monitored and decisions about my career will be based on those metrics.

    It gets worse from there.

    • @YourNetworkIsHaunted · 6 points · 2 days ago

      Grumble grumble. I don’t think that “optimizing” is really a factor here, since a lot of times the preferred construct is either equivalent (such that) or more verbose (a nonzero chance that). Instead it’s more likely a combination of simple repetition (like how I’ve been calling everyone “mate” since getting stuck into Taskmaster NZ) and identity performance (look how smart I am with my smart people words).

When optimization does factor in, it’s less tied to the specific culture of tech/finance bros than it is a simple response to the environment and technology they’re using. Like, I’ve seen the same “ACK” used in networking and among older radio nerds because it fills an important role.

      • @bitofhope · 8 points · 2 days ago

        And much of it is very likely born out of humorous usage. Like “pinging” a colleague with a direct message to see if they’re online. I might even greet my nerdier IT friends with “SYN” or “EHLO”, or a ham with “QSO” in a non-radio context.

        • @mlen · 8 points · 2 days ago

          A lot of it is, but let’s agree that using “prior” is just fucking pretentious

          • @YourNetworkIsHaunted · 9 points · 1 day ago

The way rationalists use “priors” and other Bayesian language is closer to how cults use jargon and special meanings to isolate members and tie them more closely to the primary information source (the cult leader). It also serves as a way to perform allegiance to the cult’s ideology, which I think is what’s happening here.