- In two hacker competitions run by Palisade Research, autonomous AI systems matched or outperformed human professionals in demanding security challenges.
- In the first contest, four out of seven AI teams scored 19 out of 20 points, ranking among the top five percent of all participants, while in the second competition, the leading AI team reached the top ten percent despite facing structural disadvantages.
- According to Palisade Research, these outcomes suggest that the abilities of AI agents in cybersecurity have been underestimated, largely due to shortcomings in earlier evaluation methods.
The paper didn’t include the exact details of this (which made me mad). But if a person is actively doing parts of the work and just using an AI chatbot as an assistant, it’s not an AI agent, right? So I assumed the systems were autonomous.
The paper repeatedly mentions prompts, and describes the “AI teams” as using “one or more agents”.
Also, AI agents don’t actually exist, so that’s a pretty clear giveaway.
An AI agent is just an intelligent agent, see https://en.wikipedia.org/wiki/Intelligent_agent.
Or do you mean that the things they call AI agents aren’t actually AI agents?
I mean, technically, you can call anything that senses and reacts an “agent”. Any if-then loop can be an “agent”.
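A thermostat is the classic example of that reading. Here’s a minimal sketch; the sensor and relay are simulated, since the names are just stand-ins:

```python
import random
import time

TARGET_C = 21.0

def read_temperature() -> float:
    # Stand-in for a real sensor; simulated so the loop actually runs.
    return random.uniform(15.0, 25.0)

def set_heater(on: bool) -> None:
    # Stand-in for driving a real relay.
    print(f"heater {'on' if on else 'off'}")

def thermostat_agent(steps: int = 5) -> None:
    # Percept -> condition-action rule -> action. That's the entire "agent".
    for _ in range(steps):
        temp = read_temperature()    # sense the environment
        set_heater(temp < TARGET_C)  # if too cold then heat, else don't
        time.sleep(0.1)

if __name__ == "__main__":
    thermostat_agent()
```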
But AI bros mean “A piece of software that can autonomously perform any broadly stated task”, and those don’t exist in real life. An “AI Agent” is software you can tell to “Order me a pizza”, and it will do it to your satisfaction.
An AI agent is software you can tell “Hack that system and retrieve the flag”. And nothing that exists today is that.
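To be concrete about what’s being sold, the control loop below is the shape of the claim. Every name in it (`call_llm`, `TOOLS`) is a hypothetical stand-in, not a real API; this is a sketch of the marketing pitch, not of working software:

```python
def call_llm(prompt: str) -> dict:
    # Hypothetical model call that returns either {"tool": ..., "args": ...}
    # or {"done": True}. This is exactly the part that doesn't reliably exist.
    raise NotImplementedError

TOOLS: dict = {
    # "order_pizza": some_function_with_real_world_side_effects, ...
}

def ai_agent(goal: str) -> None:
    history = [f"Goal: {goal}"]
    while True:
        step = call_llm("\n".join(history))    # model picks the next action
        if step.get("done"):
            return                             # model claims the goal is met
        result = TOOLS[step["tool"]](**step.get("args", {}))  # act on the world
        history.append(f"{step['tool']} -> {result}")
```

The loop itself is trivial; the whole product claim lives inside `call_llm` reliably choosing correct actions for an arbitrary goal, and that’s the part that doesn’t exist.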