• In two hacker competitions run by Palisade Research, autonomous AI systems matched or outperformed human professionals in demanding security challenges.
  • In the first contest, four out of seven AI teams scored 19 out of 20 points, ranking among the top five percent of all participants, while in the second competition, the leading AI team reached the top ten percent despite facing structural disadvantages.
  • According to Palisade Research, these outcomes suggest that the abilities of AI agents in cybersecurity have been underestimated, largely due to shortcomings in earlier evaluation methods.
  • @Speiser0@feddit.org
    23 days ago

    The paper didn’t include the exact details of this (which made me mad). But if a person is actively doing parts of the work and just using an AI chatbot as help, it’s not an AI agent, right? So I assumed it’s autonomous.

    • Tar_Alcaran
      13 days ago

      The paper repeatedly mentions using prompts, and refers to “AI teams” using “one or more agents”.

      Also, AI agents don’t actually exist, so that’s a pretty clear giveaway.

        • Tar_Alcaran
          33 days ago

          I mean, technically, you can call any sensor-driven controller an “agent”. Any if-then loop can be an “agent”.
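          Something like this already counts as an “agent” in that trivial sense (a toy sketch; read_temperature and set_heater are made-up placeholders, not a real API):

          # Trivial sense of "agent": a sensor-driven if-then loop.
          # read_temperature() and set_heater() are hypothetical placeholders.
          def thermostat_agent(read_temperature, set_heater, target=20.0):
              """Observe, decide, act -- that's all 'agent' has to mean here."""
              temp = read_temperature()   # observe the environment
              if temp < target:           # decide
                  set_heater(True)        # act on the environment
              else:
                  set_heater(False)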

          But what AI bros mean is “a piece of software that can autonomously perform any broadly stated task”, and that doesn’t exist in real life. An “AI agent” is software you can tell “Order me a pizza”, and it will do it to your satisfaction.

          An AI agent would be software you can tell “Hack that system and retrieve the flag”. And what we have today isn’t that.