It’s been a while since AI was introduced into the daily lives of users all over the internet. When it first came out I was curious, yeah, like everyone, and tried some prompts to see “what this thing can do”. Then I never, ever used AI again, because I never saw it as necessary; we already had automated systems. So time kept moving along for me until this day, when I realized something: how dependent people are on this shit. I mean, REALLY dependent, and then they go “I only used it for school 😢”. Like, are you serious, dude? Do you leave your future to an algorithm?

Coming back to my question: years have passed, and I think we all have more developed opinions about AI by now. What do you think? Fuck it and use it anyway? If that’s the case, why blame companies for making its use more accessible, like Microsoft putting Copilot even in Notepad? “Microsoft just wants to harvest your data.” Isn’t that what LLMs are about? Why blame them if you’re going to use the same problem with a different flavor? I’m not defending Microsoft here, I’m only using it as an example; swap in the company of your preference.
I want actual AI, and not even necessarily for anything other than answering the question of “can we make a sentient being that isn’t human?”
What is being sold as AI isn’t anything cool, or special, or even super useful outside of extremely specific tasks that are certainly not things that can be sold to the general public.
I find it a little useful as a supplement to a search engine at work as a dev, but it can’t write code properly yet.
I can see it doing a lot of harm in the ways it has been implemented unethically, and in some cases we don’t yet have a legal ruling on whether it’s even legal, but I think any reasonable person knows that taking an original artist’s work and having a computer generate counterfeits is not really right.
I think there is going to be a massive culling of people who are charlatans anyway, and whose artistic output is meritless. See 98% of webcomics. Most pop music. Those are already producing output so flavorless and bland it might as well have come from an AI model. Those people are going to have to find real jobs that they are good at.
I think the worst of what AI is going to bring is not even in making art, music, video, shit like that… It’s going to be that dark-pattern stuff, where human behavioral patterns and psychology are meticulously analyzed and used against us. Industries that target human frailties are going to use these heavily.
Effective communication will become a quaint memory of the past that seniors rant about.
Generally I’m a fan of LLMs for work, but only if you’re already an expert, or at least well versed, in whatever you’re doing with the model, because it isn’t trustworthy.
If you’re using a model to code, you’d better already know how that language works and how to debug it, because the AI will just lie.
If you need it to write an SOP, you’d better already have an idea of what that operation looks like, because it will just lie.
It speeds up the work process by instantly doing the tedious parts of jobs, but it’s worthless if you can’t verify the accuracy. And I’m worried people don’t care about the accuracy.
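To make that concrete, here’s a toy sketch of what I mean by verifying. Everything in it is made up (the function, the bug); it’s just the shape of thing a model will hand you with total confidence:

```python
# Hypothetical AI-suggested helper: "average of the last n readings".
# It reads fine and passes the happy path.
def rolling_average(readings: list[float], n: int) -> float:
    return sum(readings[-n:]) / n

# The quick verification I'm talking about: poke the edge cases.
assert rolling_average([1.0, 2.0, 3.0, 4.0], 2) == 3.5  # happy path works

# Edge case: n = 0. In Python, readings[-0:] is the WHOLE list, not an
# empty one, and then we divide by zero. If you can't read the language,
# you ship this and find out later.
try:
    rolling_average([1.0, 2.0, 3.0, 4.0], 0)
except ZeroDivisionError:
    print("caught it: n=0 blows up instead of being handled")
```

Five seconds to check if you know the language; invisible if you don’t.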
Except for a very few niche use cases (subtitles for the hearing-impaired), almost every aspect of it (techbros, capitalism, art theft, energy consumption, erosion of what is true, etc.) is awful, and I’ll not touch it with a stick.
I’m tired of people’s shit getting stolen, and I’m tired of all the AI bullshit being thrown in my face.
It was fun for a time when API access was free and some game developers put LLMs into their games. I liked being able to communicate with my ship’s computer, but I quickly saw how flawed it was.
“Computer, can you tell me what system we’re in?”
“Sure, we’re in the Goober system.”
“But my map says we’re in Tweedledum.”
“Well it appears that your map is wrong.” Lol
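The funny thing is the fix isn’t even exotic: you feed the model the actual game state instead of letting it improvise. Here’s a rough sketch of the idea; query_llm is a made-up stand-in for whatever chat API the devs were calling, not a real library:

```python
# Sketch: ground the ship's computer in the real game state so it
# can't contradict the map. query_llm is a placeholder, not a real API.
def ask_ships_computer(question: str, game_state: dict, query_llm) -> str:
    facts = (
        f"You are the ship's computer. Current system: {game_state['system']}. "
        f"Fuel: {game_state['fuel']}%. Answer only from these facts; "
        f"if the answer isn't in them, say you don't know."
    )
    return query_llm(system_prompt=facts, user_prompt=question)

# With the state injected, "what system are we in?" has exactly one
# right answer. No more Goober vs. Tweedledum arguments.
state = {"system": "Tweedledum", "fuel": 62}
# answer = ask_ships_computer("What system are we in?", state, my_api_call)
```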
I’m much more concerned about the future when “AGI” is actually useful and implemented into society. “We” (the ownership class) cannot accept anything other than the standard form of ownership: those that created the AGI own the AGI and “rent” it to those that will “employ” it. Pair that with the more capable robotics currently being developed, and there will be very little need for people to work most jobs. Because of the concept of ownership we will not let go of, if you can’t afford to live, then you just die. There will be no “redistribution” to help those who cannot find work. We will start hearing more and more of “we don’t need so many people, X billion is too many. There isn’t enough work for them to support themselves.” Not a fun future…
LLMs have impressive mathematics, but can they cure cancer or create world peace? No. Can they confuse people by pretending to be human? Yes. Put all that compute to work solving real problems instead of writing emails, basic code, and customer-service replies, and I’ll care. I hear that AlphaFold is useful. I want to hear more about the useful machine learning.
Fun toy to play with, but every time I tried to use it for real work, it went so badly that I spent more time fixing its output than I would have doing it properly from scratch.
I have less to say about the tech side than I do about this whole forced mass adoption of LLMs and how I’ve seen people react to the things it does.
I agree that they’re unethically made by stealing data. That’s indisputable. What I fail to grasp is what the purpose of hating a technology is. Blame and responsibility are weird concepts. I’m not knowledgeable in philosophy or anything related to this. What I can tell, however, is that hating on the tech itself distracts people from blaming those actually responsible, the humans doing the enshittification. The billionaires, dictators…
(tangent) And I’d go as far as to say anyone who politically represents more people than they know personally is not the type of politician that should be allowed. Same if they have enough power to do violence to a lot of people. But this is just my inner anarchist speculating about how an ethical society with limited hierarchy might work.
“What I can tell, however, is that hating on the tech itself distracts people from blaming those actually responsible, the humans doing the enshittification. The billionaires, dictators…”
– SattaRIP
That’s something I’ve been trying to convince people of whenever I talk with them about LLMs and similar generative technology. I’ve met so many people who just throw a big blanket of hate right over the entire concept of the technology, and I find it so bizarre. Criticize the people and corporations using the tech irresponsibly! It’s like a mass redirection of what and who is really to blame. Which I think is partly because “AI” sort of anthropomorphizes itself to a large portion of society, and most people think the “personality within” the technology is responsible for the perceived misdeeds.
I figure when all is said and done and historians and researchers look back on this time, there will be a lot to learn about human behavior that we likely have little grasp of at the moment.
It’s just like any big technological breakthrough. Some people will lose their jobs, jobs that don’t currently exist will be created, and while it’ll create acute problems for some people, the average quality of life will go up. Some people will use it for good things, some people will use it for bad things.
I’m a tech guy, I like it a lot. Before COVID, I used to teach software dev, including neural networks, so seeing this stuff gradually reach the point it has now has been incredible.
That said, at the moment, it’s being put into all kinds of use-cases that don’t need it. I think that’s more harmful than not. There’s no need for Copilot in Notepad.
We have numerous AI tools where I work, but it hasn’t cost anyone their job - they just make life easier for the people who use them. I think too many companies see it as a way to reduce overheads instead of increasing output capability, and all this does is create a negative sentiment towards AI.
- I find it useful for work (I am a software developer/tester).
- I think it’s about as good as it’s ever going to get.
- I believe it is not ever going to be profitable and the benefits are not worth reopening nuclear and coal power plants.
- If US courts rule that training AI with copyrighted materials is fair use, then I will probably stop paying for content and start pirating it again.
In domain-specific applications (chemical / pharmaceutical / etc research) I can concede it has its uses, but in everyday life where it’s shoved into every nook and cranny: don’t need it, don’t want it, don’t respect the people who use it.
For things like bringing dead actors back into things: let the dead stay dead.
I can’t stop seeing the use of AI as dice that people throw, hoping they come up a seven.
LLMs have been here for a while and have helped a lot of people. The thing is, the “AI” now is corporations stealing content from people instead of making their own, or training an LLM on data that isn’t stolen from the general public.
LLMs are fucking amazing; they help with cancer research, IIRC, among other things, and I believe autocorrect is a form of language model. But now capitalism wants more and more, building it with stolen content, which is the wrong direction to be going.
AI all the things? Bad
AI for specific use cases? Good
I use AI probably a dozen times a week for work tasks, saving myself about 2-4 hours of work time on tasks that I know it can do easily in seconds. Simple e-mail draft? Done. Write a complex formula for Excel? Easy. Generate a summary of some longer text? Yup.
It’s easy to argue that we may become dependent upon it, but that’s already true for lots of things. Would you have any idea how to preserve food if you didn’t have a fridge? Would you have any idea how to even get food if you didn’t have a grocery store nearby? How would you organize a party with your friends without a phone? If a computer weren’t tracking your bank balance, how would you keep track of your money? Can you multiply 423 by 365 without using a calculator?
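(For what it’s worth, that last one is still doable on paper: 423 × 365 = 423 × 300 + 423 × 60 + 423 × 5 = 126,900 + 25,380 + 2,115 = 154,395. But the point stands: hardly anyone bothers anymore.)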
You’re actually making a good point that I don’t wholesale disagree with.
But the last paragraph really set me off I guess.
Personally I believe it’s important to have a somewhat granular understanding of the things we use every day, otherwise we risk becoming a slave to them.
None of us can go through life believing that it’s okay to have no skills and no ability to do anything because there’s an easier solution there for us.
Because something is going to happen at some point that will take that easy solution away and then you’re fucked. What happens when all you have is a paper map, but all you’ve done is rely on these cool glowing boxes to tell you which direction to walk? You’re out in the bush with a wet phone and you sit down to cry… Because you’ve made yourself a slave and you have no idea what to do now.
I’m 50 now, and I don’t want to talk like an old man, but I can see that young people have no ability to manage their lives or do anything. There’s always a free ad-supported app to do it, and then when the internet goes down they are doomed.
If you drive a car, you need to know how to change a tire and put gas in it. If you have a fridge to preserve food, yeah, you probably should understand how and why it preserves food and what to do if power goes down for a day. You should probably further understand how to preserve and ferment things because at many points in your life you’re going to get a lot of ingredients that are going to go to waste and you can eat them if you know what you’re doing.
Overall I cannot go for your advocacy of self-imposed helplessness. Every time you take an easy answer, you actually screw yourself. Most of the time it’s better to take the long road and do the hard work and figure out how to be a capable human being. Once you know how to do it without the easy solution, then you can use the easy solution. In a short metaphor, use the calculator once you know math.
Great answer, sir. Thank you