• @essteeyou@lemmy.world

    This is surely trivial to detect. If the number of pages on the site is greater than some insanely high number, then just drop all data from that site from the training data.
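    For instance, a crude filter over a crawl dump could look like this (Python sketch; the threshold and the data shape are made up for illustration):

    ```python
    from collections import Counter
    from urllib.parse import urlparse

    # Hypothetical sanity check: count crawled pages per domain and drop
    # every record from any domain with an implausibly large page count.
    MAX_PAGES_PER_SITE = 500_000  # the "insanely high number" is a guess

    def filter_crawl(records):
        """records: iterable of (url, text) pairs from a crawl."""
        records = list(records)
        pages_per_domain = Counter(urlparse(url).netloc for url, _ in records)
        return [
            (url, text)
            for url, text in records
            if pages_per_domain[urlparse(url).netloc] <= MAX_PAGES_PER_SITE
        ]
    ```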

    It’s not like I can afford to compete with OpenAI on bandwidth, and they’re burning through money with no cares already.

    • @bane_killgrind@slrpnk.net

      Yeah sure, but when do you stop gathering regularly constructed data if your goal is to grab as much as possible?

      Markov chains are an amazingly simple way to generate data like this, and with a little bit of stacked logic on top it’s going to be indistinguishable from real, large data sets.
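      Something like this is all it takes (a minimal Python sketch; seed.txt stands in for any real-looking corpus):

      ```python
      import random
      from collections import defaultdict

      # Word-level Markov chain: learn transitions from a seed text,
      # then emit an endless stream of plausible-looking junk.
      def build_chain(text, order=2):
          words = text.split()
          chain = defaultdict(list)
          for i in range(len(words) - order):
              key = tuple(words[i:i + order])
              chain[key].append(words[i + order])
          return chain

      def generate(chain, length=200):
          key = random.choice(list(chain.keys()))
          out = list(key)
          for _ in range(length):
              next_words = chain.get(key)
              if not next_words:  # dead end: restart from a random state
                  key = random.choice(list(chain.keys()))
                  next_words = chain[key]
              out.append(random.choice(next_words))
              key = tuple(out[-len(key):])
          return " ".join(out)

      if __name__ == "__main__":
          seed = open("seed.txt").read()
          chain = build_chain(seed)
          print(generate(chain, 500))
      ```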

      • @essteeyou@lemmy.world

        When I deliver it as a response to a request, the least I can do is serve the gzipped version. To get to the point where I’m actually poisoning an AI, I’m assuming it’s going to take gigabytes of data transfer that I pay for.
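        For what it’s worth, the junk can be compressed once up front and served as-is, so each request only costs the compressed bytes. A rough sketch (junk.txt stands in for the generated output):

        ```python
        import gzip
        import http.server

        # Pre-compress the generated junk once; every request then pays only
        # for the compressed payload instead of the full text.
        with open("junk.txt", "rb") as f:  # hypothetical generator output
            PAYLOAD = gzip.compress(f.read(), compresslevel=9)

        class JunkHandler(http.server.BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.send_header("Content-Encoding", "gzip")
                self.send_header("Content-Length", str(len(PAYLOAD)))
                self.end_headers()
                self.wfile.write(PAYLOAD)

        if __name__ == "__main__":
            http.server.HTTPServer(("", 8080), JunkHandler).serve_forever()
        ```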

        At best I’m adding to the power consumption of AI.

        I wonder, can I serve it ads and get paid?

        • @MonkeMischief@lemmy.today

          I wonder, can I serve it ads and get paid?

          …and it’s just bouncing around and around and around in circles before its handler figures out what’s up…

          Heehee I like where your head’s at!