In 30 years the world will be an ecological wasteland from all the energy we spent pursuing dumb shit hype like “AI”.
It seems we are heading towards the fallout timeline.
That would be the best-case scenario
He tweeted, with a ghibli-slop avatar
Running LLMs in 30 years seems really optimistic
how so? they can’t retroactively make locally run LLMs shit, and I assume hardware isn’t going to get any worse
There are local LLMs, they’re just less powerful. Sometimes, they do useful things.
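If you want to try one, here is a minimal sketch using the Hugging Face transformers pipeline; the model name is just an example, swap in whatever small model actually fits your hardware:

```python
# Minimal sketch: run a small instruction-tuned model entirely on your own machine.
# The model name is only an example; any small text-generation model will do.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

result = generator(
    "Give me one reason to run a language model locally.",
    max_new_tokens=100,
)
print(result[0]["generated_text"])
```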
The human brain uses around 20W of power. Current models are obviously using orders of magnitude more than that to get substantially worse results. I don’t think power usage and results are going to converge enough before the money people decide AI isn’t going to be profitable.
The power consumption of the brain doesn’t really indicate anything about what we can expend on LLMs… Our brains are not just a biological implementation of what LLMs do.
It gives us an idea of what’s possible in a mechanical universe. It’s possible an artificial human-level consciousness and intelligence will use less power than that, or maybe somewhat more, but it’s a baseline that we know exists.
You’re making a lot of assumptions. One of them being that the brain is more efficient in terms of compute per watt compared to our current models. I’m not convinced that’s true, especially for specialized applications. Even if we could bring power usage below 20 watts, the reason we currently use more is because we can, not because each model is becoming more and more bloated.
Yeah, but an LLM has little to do with a biological brain.
I think Brain-Computer Interfaces (BCIs) will be the real deal.
I was thinking in a different direction, that LLMs probably won’t be the pinnacle of AI, considering they aren’t really intelligent.
ohhh
Assuming there would be enough food to maintain and fix that hardware, I’m not confident that we will have enough electricity to run LLMs at a massive scale
It literally runs on my phone, and is at least decent enough at pretending to care that you can vent to it.
Unlike you bigots, I’ve already masturbated to AI generated images
This guy’s name translates to something like “Matt Cock”
How so?
Matti is a Finnish name, and in Finnish, “Palli” means “cock”.
Source: I am Finnish
Nice! But he’s Icelandic, and both names there are variants of his real name. No cock connection 😂
Source: I am Icelandic.
Yea, you can translate palli as cock too but usually palli means ball (testicle).
Look, I’m not robophobic. Some of my best friends are cyborgs. I just don’t want them living in my neighborhood, you know?
Found the robosexual
Kiss robots all you like I’m cool with it. Just don’t do it around me.
Big difference between cyborgs and robots. Cyborgs are augmented humans.
Yeah, jeez, that sort of mechanophobic language should be illegal
Robosexuality is wrong!!!
I knew I should’ve shown him Electro-Gonorrhea: The Noisy Killer.
Let’s not pretend statistical models are approaching humanity. The companies who make these statistical models proved they couldn’t themselves, in the papers OpenAI published in 2020 and DeepMind published in 2023.
To reiterate: with INFINITE DATA AND COMPUTE TIME, the models cannot approach human error rates. It doesn’t think, it doesn’t emulate thinking; it statistically resembles thinking to some number below 95% and completely and totally lacks permanence in its statistical representation of thinking.
We used to think some people aren’t capable of human intellect. Had a whole science to prove it too.
If modern computers can reproduce sentience, then so can older computers. That’s just how general computing is. You really gonna claim magnetic tape can think? That punch-cards and piston transistors can produce the same phenomenon as tens of billions of living brain cells?
That in general seems more plausible than doing it specifically with an LLM.
Slightly yeah, but I’m still overall pretty skeptical. We still don’t really understand consciousness. It’d certainly be convenient if the calculating machines we understand and have everywhere could also “do” whatever it is that causes consciousness… but it doesn’t seem particularly likely.
Ten years ago I was certain that a natural language voice interface to a computer was going to stay science fiction permanently. I was wrong. In ten years time you may also be wrong.
Well, if you want one that’s 98% accurate then you were actually correct that it’s science fiction for the foreseeable future.
And yet I just foresaw a future in which it wasn’t. AI has already exceeded Trump levels of understanding, intelligence and truthfulness. Why wouldn’t it beat you or me later? Exponential growth in computing power and all that.
The returns from computing power diminish much faster than the fairly static (and in many sectors plateauing) rate of growth in computing power, but if you believe OpenAI and DeepMind, they’ve already proven in their 2020 and 2023 studies that INFINITE processing power cannot reach it.
They already knew it wouldn’t succeed, they always knew, and they told everyone, but we’re still surrounded by people like you being grifted by it all.
EDIT: I must be talking to a fucking bot because I already linked those scientific articles earlier, too.
Thanks for the abuse. I love it when I’m discussing something with someone and they start swearing at me and calling me names because I disagree. Really makes it fun. /s You can fuck right off yourself too, you arrogant tool.
I think most people understand that these LLMs cannot think or reason; they’re just really good tools that can analyze data, recognize patterns, and generate relevant responses based on parameters and context. The people who treat LLM chatbots like they’re people have much deeper issues than just ignorance.
Then you clearly haven’t been paying attention, because just as zealously as you defend its nonexistent use cases, there are people defending the idea that it operates similarly to how a human or animal thinks.
My point is that those people are a very small minority, and they suffer from issues that go beyond their ignorance of how these models work.
I think they’re more common than you realize. Ignorance of how these models work is the commonly held stance among the general public.
You’re definitely correct that most people are ignorant of how these models work. I think most people understand these models aren’t sentient, but even among those who don’t, most don’t become emotionally attached to these models. I’m just saying that the people who end up developing feelings for chatbots go beyond ignorance. They have issues that require years of therapy.
The difference is that the brain is recursive while these models are linear, but the fundamental structure is similar.
The difference is that a statistical model is not a replacement for an emulation. Their structure is wildly different.
removed by mod
How many electricity powered machines processing binary data via crystal prisms did we see evolve organically?
removed by mod
The people who treat LLM chatbots like they’re people have much deeper issues than just ignorance.
I don’t know if it’s an urban myth, but I’ve heard about 20% of LLM inference time and electricity is being spent on “hello” and “thank you” prompts. :)
It’s a very real thing. So much so that OpenAI actually came out and publicly complained about how it’s apparently costing the company millions.
It’s already happening to me, but it’s over things like privacy, not recording every bit of your life for social media, and kids blowing crazy amounts of money on F2P games.
What’s all this about having to accept a NEW TOS for Borderlands 2? I purchased the game five years ago, but if I want to play today I have to accept a greater loss of privacy!
When I was young you would find out about a video game from the movies! And they were complete! And you couldn’t take the servers offline, because they didn’t exist!
But for real, fuck Randy Pitchford
I fully support the robosexual lifestyle.
Grab them by the robussy
The type of guy to say “clanka” with a hard r
I’m not American, can you explain what the hard r means?
Saying the N word with an R at the end is considered extra offensive.
Thanks, is that like a southern accent thing, or just kinda because?
Black people saying it with an A, as in rap music, is generally considered a camaraderie thing, whereas white people saying it with an R is considered a racist thing. White people aren’t supposed to say it at all, but it’s MUCH less acceptable with the latter pronunciation.
It just kinda is I guess. I am not really the person to ask.
Nah, it’s all good, just trying to get my head around it
Black folks often use the N word casually to refer to each other as a form of taking back the word’s meaning. It used to be used exclusively in a racist fashion. The primary difference is that with the African American accent, the ending sound -ER is changed to more of an -UH sound. Sometimes, rarely and depending on the context, it is allowable for non-black people to say it with this accented pronunciation. But under no circumstances is it in good taste to use the original -ER ending to refer to a black person as a non-black person, that form is only used as a slur. When people refer to the “Hard R”, this is what they are talking about, the difference between the accented pronunciation as slang vs the original pronunciation intended as a slur.
Thanks for that explanation!
deleted by creator
24 YEARS AGO!
/me crumbles to dust.
I refuse to believe that was almost a quarter of a century ago.
Step 1: Give Robots Voting Rights
Step 2: ???
Step 3: Plot twist, all those Robots are actually under direct control of the Evil Corporation Inc. and they already won every future election.
Long Live the Cyberlife CEO!
Still preferable to current timeline