This is a nice post, but it has such an annoying sentence right in the intro:

At the time I saw the press coverage, I didn’t bother to click on the actual preprint and read the work. The results seemed unsurprising: when researchers were given access to AI tools, they became more productive. That sounds reasonable and expected.

What? What about it sounds reasonable? What about it sounds expected given all we know about AI??

I see this all the time. Why do otherwise skeptical voices always feel the need to put in a weakening statement like this? "For sure, there are some legitimate uses of AI" or "Of course, I'm not claiming AI is useless". Why are you not claiming that? You probably should be claiming that. All of this garbage is useless until proven otherwise! "AI does not increase productivity" is the null hypothesis! It's the only correct skeptical position! Why do you feel the need to extend the benefit of the doubt here? Seriously, I cannot explain this in any way.

  • ________
    5 months ago

    Much like blockchain the FOMO is so strong people are afraid to say it’s bad even when there is nonstop evidence rolling in. With all the data they still are too cowardly to say anything critical.

    • zogwarg

I feel the C-suite executives are pushing AI way harder than they ever pushed crypto, though, since they never understood that tech beyond a speculative asset, but the idea of replacing work-hours with AI automation has been sold HARD to them.

  • supersquirrel@sopuli.xyz

    Because we are witnessing the birth of a religion, it just happens to be a lame, very cult-like one that is friends with everyone in power.

  • blakestaceyA

    This was bizarre to me, as very few companies do massive amounts of materials research and which also is split fairly evenly across the spectrum of materials, in disparate domains such as biomaterials and metal alloys. I did some “deep research” to confirm this hypothesis (thank you ChatGPT and Gemini)

    “I know it’s not actually research, but I did it anyway.”

  • mountainriver

    Reads “Does AI make researchers more productive? What? Why would it?”

    Thinks “When does statistically likely text without relation to truth make researchers more productive? Well, when they are faking research”

    Gets to article. Article is about faking research about AI making researchers more productive.

    • o7___o7

      Self-licking ice cream cone As A Service

    • V0ldekOP

God, what are the odds that he also used a wisdom woodchipper to produce the text of that PDF lol

  • nightsky

    “For sure, there are some legitimate uses of AI” or “Of course, I’m not claiming AI is useless” like why are you not claiming that.

    Yes, thank you!! I’m frustrated by that as well. Another one I have seen way too often is “Of course, AI is not like cryptocurrency, because it has some real benefits [blah blah blah]”… uhm… no?

    As for the “study”, due to Brandolini’s law this will continue to be a problem. I wonder whether research about “AI productivity gains” will eventually become like studies about the efficacy of pseudo-medicine, i.e. the proponents will just make baseless claims that an effect is present, and that science is simply not advanced enough yet to detect or explain it.

  • David GerardMA

    If you’re after streams crossing: this guy is a rationalist who does Manifold Markets.

    • V0ldekOP

      Well, that fully answers the questions I had, I guess.

      Why is everyone a milkshake duck?

      • David GerardMA

        I mean this post seems largely correct and reasonable, but ehh be a little cautious

  • sturger@sh.itjust.works

    Of course, AI makes people more productive! How do I know? I asked my AI, “Do you make people more productive?” and it said, “Yes.” Just look at all the time it just saved me!

  • glitchdx@lemmy.world

    GPT-like AI is useful for what I’m doing (and for others in similar boats), because I’m doing lore and world-building for a fictional setting that almost nobody but me knows about. If the lying machine lies, that’s OK, because I can just choose to use it or not.

    Actual research on real-world subjects should not use GPT-like AI. Researchers are trying to discover something unknown, and the lying machine is of course going to fill in the gaps with plausible-sounding bullshit.

    Anyone who is both 1) paying attention, and 2) isn’t pushing an agenda, already knows this.