What did I miss? Who is pivoting into hornybots?
They think if they say the criminally insane part out loud it will protect them.
Well, guess I know what I’m using ASI for.
Either (you genuinely believe) you are 18 (24, 36, does not matter) months away from curing cancer or you’re not.
What would we as outsiders observe if they told their investors that they were 18 months away two years ago and now the cash is running out in 3 months?
Now I think the current iteration of AI is trying to get to the moon by building a better ladder, but what do I know.
The thing about AI is that it is very likely to improve roughly exponentially¹. Yeah, it’s building ladders right now, but once it starts turning rungs into propellers, the rockets won’t be far behind.
Not saying it’s there yet, or even 18/24/36 months out, just saying that the transition from “not there yet” to “top of the class” is going to whiz by when the time comes.
¹ Logistic, actually, but the upper limit is high enough that for practical purposes “exponential” is close enough for the near future.
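To make that footnote concrete, here’s a minimal sketch (the growth rate and the ceiling are made-up numbers, purely for illustration): a logistic curve tracks the matching exponential almost exactly until it nears its cap.

```python
import math

# Made-up parameters: growth rate r = 0.5 per step, carrying capacity of one million.
def exponential(t, r=0.5):
    return math.exp(r * t)

def logistic(t, r=0.5, cap=1_000_000):
    # Starts at 1 like the exponential, but bends over as it approaches cap.
    return cap / (1 + (cap - 1) * math.exp(-r * t))

for t in range(0, 31, 5):
    print(f"t={t:2d}  exp={exponential(t):12.1f}  logistic={logistic(t):12.1f}")
```

Early on the two columns are essentially identical; only near the ceiling do they diverge, which is the sense in which “exponential” is a fine approximation for the near future.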
why is it very likely to do that? we have no evidence to believe this is true at all, and several decades of slow, plodding ai research suggest that real improvement comes incrementally, like in other research areas.
to me, your suggestion sounds like the result of the logical leaps made by yudkowsky and the people on his forums
Because AI can write programs? As it gets better at doing that, it can make AIs that are even better, etc etc. Positive feedback loops increase exponentially.
AI can’t write programs, at least not complex programs. The programs and functions it can write well are the ones that are very well represented in the training data – i.e. ultra-simple functions, or programs that have been written and rewritten millions of times. What it can’t do is anything truly innovative. In addition, it can’t follow directions, it has no understanding of what it’s doing, it doesn’t understand the problem, and it doesn’t understand its solution to the problem.
The only thing LLMs are able to do is create a believable simulation of what the solution to the problem might look like. Sometimes, if you’re lucky, the simulation is realistic enough that the output actually works as a function or program. But, the more complex the problem, or the more distant from the training data, the less it’s able to simulate something realistic.
So, rather than building a ladder where the rungs turn into propellers, it’s building one where the higher the ladder gets, the less the rungs actually look like rungs.
As I said elsewhere, the AI probably isn’t going to just be an LLM. It’s probably gonna be a complex model that uses modules like LLMs to fulfill a compound task. But the exact architecture doesn’t matter.
We know that it can output code, which means we have a quantifiable metric to make it better at coding, and thousands of people are certainly trying. AI video was hot garbage 18 months ago, now it’s basically perfect.
It’s not if we’re going to get a decent coding AI, it’s when.
It’s probably gonna be a complex model that uses modules like LLMs to fulfill a compound task.
That sounds very hand-wavey. But, even the presence of LLMs in the mix suggests it isn’t going to be very good at whatever it does, because LLMs are designed to fool humans into thinking something is realistic rather than actually doing something useful.
We know that it can output code, which means we have a quantifiable metric to make it better at coding
How so? Project managers have been working for decades to quantify code, and haven’t managed to make any progress at it.
It’s not if we’re going to get a decent coding AI, it’s when.
The year 30,000 AD doesn’t count.
LLMs are designed to fool humans into thinking something is realistic rather than actually doing something useful.
So closer to average human intelligence than it would appear. I don’t know why people keep insisting that confidently making things up and repeating things blindly is somehow distinct from the average human intelligence.
But more seriously, this whole mindset is based on a stagnation in development that I’m just not seeing. I think it was Stanford that recently released a paper on a new architecture they developed that has serious promise.
How so? Project managers have been working for decades to quantify code, and haven’t managed to make any progress at it.
I think you misunderstand me. The metric is the code. We can look at the code, see what kind of mistakes it’s making, and then alter the model to try to be better. That is an iterative process.
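One concrete way to read that, sketched below with a made-up generated snippet and made-up tests: a test suite’s pass rate turns generated code into a number you can iterate against.

```python
# Minimal sketch of "the metric is the code": score a candidate function by how
# many unit tests it passes. The generated snippet and the tests are invented;
# real evaluation harnesses are much bigger, but the loop is the same idea:
# generate, run, count failures, adjust, repeat.
generated_code = """
def add(a, b):
    return a + b
"""

tests = [((1, 2), 3), ((0, 0), 0), ((-1, 5), 4)]

namespace = {}
exec(generated_code, namespace)  # load the candidate implementation
candidate = namespace["add"]

passed = sum(1 for args, expected in tests if candidate(*args) == expected)
print(f"{passed}/{len(tests)} tests passed")  # the number you optimize against
```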
The year 30,000 AD doesn’t count.
Sure. Maybe it’s 30,000 AD. Maybe it’s next month. We don’t know when the breakthrough that kicks off massive improvement is going to hit, or even what it will be. Every new development could be the big one.
You’ve misunderstood many things in those two sentences.
Care to elaborate?
the problem is that AIs are trained on programs that humans have written. At best the llm architectures they create will be similar to the state of the art that humans have created at that point.
however, even more important than the architecture of an ai model is the training data that it is trained on. If we start including ai-generated programs in this data, we will quickly observe model collapse: performance of models tends to get worse as more ai-generated data is included in the training data.
rather than AIs generating ever smarter new AIs, the more likely result is that we can’t scrape new quality datasets as they’ve all been contaminated with llm-generated data that will only reduce model performance
They could stick to unpoisoned datasets for next token prediction by simply not including data collected after the public release of ChatGPT.
But the real progress they can make is that LLMs can be subjected to reinforcement learning, the same process that got superhuman results in Go, Starcraft, and other games. The difficulty is getting a training signal that can guide it past human-level performance.
And this is why they are pushing to include ChatGPT in everything. Every conversation is a datapoint that can be used to evaluate ChatGPT’s performance. This doesn’t get poisoned by the public adoption of AI because even if ChatGPT is speaking to an AI, the RL training algorithm evaluates ChatGPT’s behavior, treating the AI as just another possible thing-in-the-world it can interact with.
As AI chatbots proliferate, more and more opportunities arise for A/B testing - for example, if two different AI chatbots write two different comments on the same reddit post, with the goal of getting the most upvotes. While it’s not quite the same as the billions of self-play games in a vacuum that made AlphaGo and AlphaStar better than humans, there is definitely an opportunity for training data.
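As a rough sketch of what that reward signal could look like (the upvote rates below are invented, and this is nowhere near the scale or machinery of real training): each posted comment becomes a sample of an unknown reward, and the better-performing variant is measurable.

```python
import random

# Toy A/B tally: two candidate comments with unknown "true" upvote probabilities
# (made-up numbers). Counting votes per variant gives a crude reward estimate.
true_upvote_rate = {"comment_a": 0.12, "comment_b": 0.18}
upvotes = {"comment_a": 0, "comment_b": 0}
shown = {"comment_a": 0, "comment_b": 0}

random.seed(0)
for _ in range(10_000):
    variant = random.choice(list(true_upvote_rate))
    shown[variant] += 1
    if random.random() < true_upvote_rate[variant]:
        upvotes[variant] += 1

for variant in true_upvote_rate:
    print(variant, round(upvotes[variant] / shown[variant], 3))  # estimated reward
```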
And at some point they could find a way to play AI against each other to reach greater heights, some test that is easy to evaluate despite being based on complicated next-token-prediction. They’ve got over a trillion dollars of funding and plenty of researchers doing their best, and I don’t see a physical reason why it couldn’t happen.
But beyond any theoretical explanation, there is the simple big-picture argument: for the past 10 years I’ve heard people say that AI could never do the next thing, with increasing desperation as AI swallows up more and more of the internet. They have all had reasons about as credible-sounding as yours. Sure it’s possible that at some point the nay-sayers will be right and the technology will taper off, but we don’t have the luxury of assuming we live in the easiest of all possible worlds.
It may be true that 3 years from now all digital communication is swallowed up by AI that we can’t distinguish from humans, that try to feed us information optimized to convert us to fascism on behalf of the AI’s fascist owners. It may be true that there will be mass-produced drones that are as good at maneuvering around obstacles and firing weapons as humans, and these drones will be applied against anyone who resists the fascist order.
We may be only years away from resistance to fascism becoming impossible. We can bet that we have longer, but only if we get something that is worth the wait.
I’m not arguing that AI won’t get better, I’m arguing that the exponential improvements in AI that op was expecting are mostly wishful thinking.
they could stick to old data only, but then how do you keep growing the dataset by the amounts we’ve seen recently? that is where a lot of the (diminishing) improvements of the last few years have come from.
and it is not at all clear how to apply reinforcement learning for more generic tasks like chatbots, without a clear scoring system like both chess and StarCraft have.
I’m arguing that the exponential improvements in AI that op was expecting are mostly wishful thinking.
Not only do you have improvements in training methodology, but the models themselves get better, and the superstructure of multiple coordinated specialized models gets better. 3 years ago, AI-generated video was nightmare fuel; now it’s basically photorealistic.
AI creating AI is a recursive loop, and the tiniest acceleration amplifies exponentially in a recursive loop. AI programmers are going to become about as good as the average human programmer, it’s inevitable. It won’t be an LLM, it might be a structure of individually trained LLMs, it might be a superstructure of those structures, it might be something else entirely.
Whatever it is, it’s going to happen. And once AI programmers are at least average, they can devote millions of virtual hours to make one a bit better than average, rinse and repeat. Once we hit that point, it skyrockets.
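A back-of-the-envelope sketch of that loop (the 1%-per-generation figure is invented, just to show how small gains compound once each generation builds on the last):

```python
# Assume each generation of AI makes the next one 1% better at the same job
# (an invented number). The per-step gain is tiny, but it compounds.
skill = 1.0                 # baseline: "average human programmer"
gain_per_generation = 0.01  # made-up 1% improvement per iteration

for generation in range(1, 201):
    skill *= 1 + gain_per_generation
    if generation % 50 == 0:
        print(f"generation {generation}: {skill:.1f}x baseline")
```

Whether anything like that rate is achievable is exactly what’s in dispute here; the sketch only shows why a small, repeatable edge would snowball rather than add up linearly.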
I don’t know when it’ll happen, but I’m damn sure it will happen, and the conditions get more favorable every day.
Then it doesn’t make sense to include LLMs in “AI.” We aren’t even close to turning rungs into propellers or rockets, and LLMs will not get there.
There’s not a single world where LLMs cure cancer, even if we decided to give the entirety of our energy output and water to a massive server using every GPU ever made to crunch away for months.
which fucking sucks, because AI was actually getting good: it could detect tumours, it could figure things out fast, it could recognise images as a tool for the visually impaired…
But LLMs are none of those things. all they can do is look like text.
LLMs are an impressive technology, but so far, nearly useless and mostly a nuisance.
down in Ukraine we have a dozen or so image analysis projects that can’t catch a break because all investors can think about are either swarm drones (quite understandably) or LLM nothingburgers that burn through money and dissipate every nine months. Meanwhile those image analysis projects manage to progress on what is basically scraps and leftovers.
the problem is that technical people can understand the value of different AI tools. but try telling an executive with a business major how mind-blowing it is that a program trained on Go and StarCraft can solve protein folding (I studied biology in 2010 and they kept repeating how impossible solving proteins in silico was).
But a chat bot that tells the executive how smart and special it is?
That’s the winner.
yeah, that’s tough to beat
Multimodal LLMs are definitely a thing, though.
yeah, but it’s better to use the right tool for the job than throwing a suitcase full of tools at a problem
Not strictly LLMs, but neural nets are really good at protein folding, something that very much directly helps understanding cancer among other things. I know an answer doesn’t magically pop out, but it’s important to recognise the use cases where NNs actually work well.
I’m trying to guess what industries might do well if the AI bubble does burst. I imagine there will be huge AI datacenters filled with so-called “GPUs” that can no longer even do graphics. They don’t even do floating point calculations anymore, and I’ve heard their integer matrix calculations are lossy. So, basically useless for almost everything other than AI.
One of the few industries that I think might benefit is pharmaceuticals. I think maybe these GPUs can still do protein folding. If so, the pharma industry might suddenly have access to AI resources at pennies on the dollar.
Integer calculations are only “lossy” because they’re integers – there’s nothing extra there to lose. Those GPUs have plenty of uses.
I don’t know too much about it, but from the people that do, these things are ultra specialized and essentially worthless for anything other than AI type work:
anything post-Volta is literally worse than worthless for any workload that isn’t lossy low-precision matrix bullshit. H200’s can’t achieve the claimed 30TF at FP64, which is a less than 5% gain over the H100. FP32 gains are similarly abysmal. The B100 and B200? <30TF FP64.
Contrast with AMD Instinct MI200 @ 22TF FP64, and MI325X at 81.72TF for both FP32 and FP64. But 653.7TF for FP16 lossy matrix. More usable by far, but still BAD numbers. VERY bad.
AI isn’t even the first or the twentieth use case for those operations.
All the “FP” figures are about floating-point precision, which matters more for training and finely detailed models, especially FP64. Integer-based matrix math comes up plenty often in optimized cases, which are becoming more and more the norm, especially with China’s research on shrinking models while retaining accuracy metrics.
But giving all the resources to LLMs slows/prevents those useful applications of AI.
And it’s clear we’re nowhere near achieving true AI, because those chasing it have made no moves to define the rights of an artificial intelligence.
Which means that either they know they’ll never achieve one by following the current path, or that they’re evil sociopaths who are comfortable enslaving a sentient being for profit.
they’re evil sociopaths who are comfortable enslaving a sentient being for profit.
i mean, look what is happening in the united states. that would be completely unsurprising to happen here.
It’s DEFINITELY both.
They sure do cure horny though.
There are tons of AIs besides the chat bots. There are definitely cancer hunter seekers.
Good thing I said “LLM” not “AI”.
Um, human history has repeatedly demonstrated that when a new technology emerges, the two highest priorities are:
- How can we kill things with this?
- How can we bone with this?
False dichotomy.
People using AI to cure cancer are not the people implementing weird chatbots. Doing one has zero effect on the other.
False dichotomy. Living is Faux.
It’s making fun of things like: “ChatGPT boss predicts when AI could cure cancer”.
You’re getting downvoted because of how you put it. Most people do not understand the difference between AI used for research (like protein sequencing) and LLMs.
Also, the people making LLMs are not making protein sequencers.
I agree, for most people ‘AI’ is ChatGPT and their perception of the success of AI is based on social media vibes and karma farming hot takes, not a critical academic examination of the state of the field.
I’m not remotely worried about the opinions of random Internet people, many of whom are literally children just dogpiling on a negative comment.
Reasonable people understand my point and I don’t care enough about the opinions of idiots to couch my language for their benefit.
You’re my role model for the day
No, OP is about how OpenAI said they were releasing a chatbot with PhD level intelligence about half a year ago (or was it a year ago?) and now they are saying that they’ll make horny chats for verified adults (i.e. paying customers only).
What happened to the PhD level intelligence Sam?! Where is it?
Exactly. Gen AI is a very large field.
Ah I see the misunderstanding. Government pivoting is the problem.
NIH blood cancer research was defunded a few months ago, while around the same time the government announced they would be building $500 billion worth of datacenters for LLMs.
“If LLM becomes AGI we won’t need the image-recognition linear algebra knowledge anymore, obviously.”
Researchers are still good and appreciated no matter what annoying company is deploying their work.
Oh, that’s why they are restricting “organic” porn, to sell AI porn. Damn.
If you’ve ever wondered why porn sites use pictures of cars, buses, stop signs, traffic lights, bicycles and sidewalks in their captchas, it’s because they’re using the data to train car-driving AIs to recognize those patterns.
This is not what an imminent breakthrough in cancer research looks like.
Source?
Google recaptcha? They literally talk about this publicly. It’s in their mission statement or whatever. It’s used to train other kinds of models too.
They were. They haven’t been using recaptchas to collect training data for years now.
Y’know, it’s bullshit that a) you seem to expect this to be common knowledge, as if everyone is supposed to have an archive of internet minutiae saved in their heads or have read and remembered any such info at all…
And b) you chose to downvote and pretty much just said LMGTFY without even the sarcastically provided results instead of backing up your claim. It’s basic courtesy to provide a source for claims instead of downvoting like it’s some kind of affront to your ego that someone wants info on your claim.
It’s not even my claim you are talking about, jackass. Read the usernames. If you have fallen into the rabbit hole that is Lemmy, you should have been around enough to know about recaptcha. If not, it’s one DuckDuckGo search away. In fact you could just click the link on the recaptcha itself that explains how they use the data for training. Hardly arcane knowledge.
Your comment to me read like Sealioning.
Ah, that makes it so much better. My bad for you jumping into an argument randomly? You’re not improving my view of the shitty attitude here when you double down on “you should have known.”
No money in curing cancer with an LLM. Heaps of money taking advantage of increasingly alienated and repressed people.
You could sell the cure for a fortune. Imagine something that can reliably cure late stage cancers. You could charge a million for the treatment, easily.
Yes, selling the actual cure would be profitable… but an LLM would only ever provide the text for synthesizing it, not the extensive testing, licensing, manufacturing, etc. An existing pharmaceutical company would have to believe the LLM and then front the costs for development, testing, and manufacture, which make up a large proportion of the cost of bringing a treatment to market. Burning compute time on that is a waste of resources, especially when fleecing horny losers is available right now. It is just business.
and LLMs hallucinate a lot of shit they “know” nothing about. a big pharma company spending millions of dollars on an LLM hallucination would crack me the fuck up were it not such a serious disease.
Right, that is why I originally said there is no money in a cancer cure invented by LLM. It’s just not a serious possibility.
We are closer to making horny chatbots than a superintelligence figuring out a cure for cancer.
Actually, if the latter wins, would that super AI win a Nobel prize?
what if my kink is curing cancer?
It would probably go to whoever uses it to find the cure… And to none of the authors who wrote the data that it was trained on
To be fair, a better pattern finder could indeed lead to better ways of curing cancer.
I appreciate that this post is using dark mode
FYI, using OpenAI/ChatGPT is expensive. Programming it to program users into dependency on its “friendship” gets them to pay for more tokens, and then why not blackmail them or coerce/honeypot them into espionage for the empire. If you don’t understand yet that OpenAI is an arm of the Trump/US military, among its pie-in-the-sky promises is $35B for datacenters in Argentina.
Porn can pay your way through school, so to speak
every once in a while i think about selling feet pics (mine are recognizable) but i don’t think people want pictures of my gender’s feet
I would totally do that if I could. I looked into Only Fans and something like 1% of dudes make any money and the upkeep is a lot of work.
But how else would it find the hard lump on your testicles?