It’s been a while since AI became part of daily life for users all across the internet. When it first came out I was curious, yeah, like everyone, and tried some prompts to see “what this thing can do.” Then I never, ever used AI again, because I never saw it as something necessary; we already had automated systems. So time kept moving for me until this day, when I realized something: how dependent people are on this shit. I mean REALLY dependent, and then they go “I only used it for school 😢”. Like, are you serious, dude? Do you leave your future to an algorithm?

Coming back to my question: years have passed, and I think we all have a more developed opinion about AI by now. What do you think? Fuck it and use it anyway? If that’s the case, why blame companies for making its use more accessible, like Microsoft putting Copilot even in Notepad? “Microsoft just wants to collect your data.” Isn’t that what LLMs are about? Why blame them if you are going to use the same problem with a different flavor? Not defending Microsoft here; I’m only using them as an example. Swap in the company of your own preference.
Like every new technology that is hailed as changing everything, it is settling into a small handful of niches.
I use a service called Consensus, which unearths academic papers relevant to a specific clinical question; in the past this could be incredibly time-consuming.
I also sometimes use a service called Heidi that uses voice recognition to document patient encounters. It’s quite good for the specific type of visit that suits a rigid template, but for 90% of my consults I have no idea why the patient is coming in, and for those I find it not much better than writing notes myself.
Obviously for creative work it is near useless.
I’m generally a fan of LLMs for work, but only if you’re already an expert, or at least well versed, in whatever you’re doing with the model, because it isn’t trustworthy.

If you’re using a model to code, you’d better already know how that language works and how to debug it, because the AI will just lie (a sketch of what that looks like follows below).

If you need it to make an SOP, then you’d better already have an idea of what that operation looks like, because it will just lie.
It speeds up the work process by instantly doing the tedious parts of jobs, but it’s worthless if you can’t verify the accuracy. And I’m worried people don’t care about the accuracy.
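To make that concrete, here is a minimal, hypothetical sketch (not from the commenter above) of the kind of plausible-looking Python an LLM might hand you, next to a verified fix. The function names and data are invented for illustration:

```python
# Hypothetical example: plausible LLM output that only someone who knows
# the language would catch, alongside the corrected version.

def top_three_wrong(scores):
    # Looks right, but list.sort() sorts in place and returns None,
    # so slicing its result raises a TypeError at runtime.
    return scores.sort(reverse=True)[:3]

def top_three_fixed(scores):
    # sorted() returns a new list, so this works as intended.
    return sorted(scores, reverse=True)[:3]

if __name__ == "__main__":
    data = [12, 99, 7, 45, 68]
    print(top_three_fixed(data))  # [99, 68, 45]
    # top_three_wrong(data) would crash: 'NoneType' object is not subscriptable
```

The broken version reads fine at a glance; only knowing the language (or actually running tests) exposes it, which is the commenter’s point about verification.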
I’m tired of people’s shit getting stolen, and I’m tired of all the AI bullshit being thrown in my face.
As a dev, I find it a little useful as a supplement to a search engine at work, but it can’t write code properly yet.
I can see it doing a lot of harm in the ways it has been implemented unethically. In some cases we don’t yet have legal resolution on whether it’s “legal,” but I think any reasonable person knows that taking an original artist’s work and making a computer generate counterfeits is not really correct.
I think there is going to be a massive culling of people who are charlatans anyway, and whose artistic output is meritless. See 98% of webcomics. Most pop music. Those are already producing output so flavorless and bland it might as well have come from an AI model. Those people are going to have to find real jobs that they are good at.
I think the worst of what AI is going to bring is not even in making art, music, video, shit like that… It’s going to be that dark-pattern stuff where human behavioral patterns and psychology are meticulously analyzed and used against us. Industries that target human frailties are going to use these heavily.
Effective communication will become a quaint memory of the past that seniors rant about.
Fun toy to play with, but every time I tried to use it for real work, it ended up so bad that I spent more time fixing it than I would have doing it properly from scratch.
It was fun for a time when API access was free, so some game developers put LLMs into their games. I liked being able to communicate with my ship’s computer, but I quickly saw how flawed it was.
“Computer, can you tell me what system we’re in?”
“Sure, we’re in the Goober system.”
“But my map says we’re in Tweedledum.”
“Well it appears that your map is wrong.” Lol
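For what it’s worth, the usual fix for that failure is grounding: feeding the model the authoritative game state instead of letting it guess. A minimal sketch in Python, where every name (build_ship_computer_prompt, the game_state fields) is invented for illustration rather than taken from any real game:

```python
# Minimal sketch of prompt grounding: the ship's computer answers from
# real game state placed into the prompt, not from the model's
# imagination. All names here are hypothetical.

def build_ship_computer_prompt(question: str, game_state: dict) -> str:
    facts = "\n".join(f"- {key}: {value}" for key, value in game_state.items())
    return (
        "You are a ship's computer. Answer ONLY from the facts below; "
        "if a fact is missing, say you don't know.\n"
        f"Known facts:\n{facts}\n\n"
        f"Crew question: {question}"
    )

prompt = build_ship_computer_prompt(
    "What system are we in?",
    {"current_system": "Tweedledum", "hull_integrity": "97%"},
)
print(prompt)  # a model answering from this context can only say "Tweedledum"
```

Without that grounding, the model has nothing to anchor on and cheerfully invents a “Goober system.”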
I’m much more concerned about the future when “AGI” is actually useful and implemented into society. “We” (the ownership class) cannot accept anything other than the standard form of ownership. Those that created the AGI own the AGI and “rent” it to those that will “employ” the AGI. Pair that with the more capable robotics currently being developed and there will be very little need for people to work most jobs. Because of the concept of ownership we will not let go of, if you can’t afford to live then you just die. There will be no “redistribution” to help those who cannot find work. We will start hearing more and more of “we don’t need so many people, X billion is too many. There isn’t enough work for them to support themselves.” Not a fun future…
- I find it useful for work (I am a software developer/tester).
- I think it’s about as good as it’s ever going to get.
- I believe it is not ever going to be profitable and the benefits are not worth reopening nuclear and coal power plants.
- If US courts rule that training AI with copyrighted materials is fair use, then I will probably stop paying for content and start pirating it again.
I want actual AI, and not even necessarily for anything other than answering the question of “can we make a sentient being that isn’t human?”
What is being sold as AI isn’t anything cool, or special, or even super useful outside of extremely specific tasks that are certainly not things that can be sold to the general public.
Except for a very few niche use cases (subtitles for the hearing-impaired), almost every aspect of it (techbros, capitalism, art theft, energy consumption, erosion of what is true, etc.) is awful, and I’ll not touch it with a stick.
In domain-specific applications (chemical / pharmaceutical / etc research) I can concede it has its uses, but in everyday life where it’s shoved into every nook and cranny: don’t need it, don’t want it, don’t respect the people who use it.
For things like bringing dead actors back into things: let the dead stay dead.
I can’t stop seeing the use of AI as dice that people throw while hoping for a seven.
LLMs have impressive mathematics, but can they cure cancer or create world peace? No. Can they confuse people by pretending to be human? Yes. Put all that compute to work solving problems instead of writing emails or basic code or doing customer service and I’ll care. I hear that AlphaFold is useful. I want to hear more about the useful machine learning.
I have less to say about the tech side than I do about this whole forced mass adoption of LLMs and how I’ve seen people react to it doing things.
I agree that they’re unethically made by stealing data. That’s indisputable. What I fail to grasp is what the purpose of hating a technology is. Blame and responsibility are weird concepts. I’m not knowledgeable in philosophy or anything related to this. What I can tell, however, is that hating on the tech itself distracts people from blaming those actually responsible, the humans doing the enshittification. The billionaires, dictators…
(tangent) And I’d go as far as to say anyone who politically represents more people than they know personally is not the type of politician that should be allowed. Same if they have enough power to do violence to a lot of people. But this is just my inner anarchist speculating about how an ethical society with limited hierarchy might work.
“What I can tell, however, is that hating on the tech itself distracts people from blaming those actually responsible, the humans doing the enshittification. The billionaires, dictators…”
– SattaRIP
That’s something I’ve been trying to convince people of that I converse with about LLMs and similar generative technology. I’ve met so many people that just throw a big blanket of hate right over the entire concept of the technology and I just find it so bizarre. Criticize the people and corporations using the tech irresponsibly! It’s like a mass redirection of what and who is really to blame. Which I think is partially because “AI” is something that sort of anthropomorphizes itself to a large portion of society and most people think the “personality within” the technology is responsible for the perceived misdeeds.
I figure when all is said and done and historians and researchers look back on this time, there will be a lot to learn about human behavior that we likely have little grasp of at the moment.
It’s just like any big technological breakthrough. Some people will lose their jobs, jobs that don’t currently exist will be created, and while it’ll create acute problems for some people, the average quality of life will go up. Some people will use it for good things, some people will use it for bad things.
I’m a tech guy, I like it a lot. Before COVID, I used to teach software dev, including neural networks, so seeing this stuff gradually reach the point it has now has been incredible.
That said, at the moment, it’s being put into all kinds of use-cases that don’t need it. I think that’s more harmful than not. There’s no need for Copilot in Notepad.
We have numerous AI tools where I work, but it hasn’t cost anyone their job - they just make life easier for the people who use them. I think too many companies see it as a way to reduce overheads instead of increasing output capability, and all this does is create a negative sentiment towards AI.
LLMs have been here for a while and have helped a lot of people. The thing is, “AI” now means corporations stealing content from people instead of making their own, or training an LLM on data that is not stolen from the general public.
LLMs are fucking amazing; they help with cancer research, IIRC, among other things, and I believe autocorrect is a form of language model (if not a large one). But now capitalism wants more and more, making it with stolen content, which is the wrong direction for them to be going.
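That last point holds at least for next-word prediction: a keyboard suggestion engine really is a language model, just a tiny one. A minimal sketch (hypothetical, not any real keyboard’s implementation) of a bigram suggester:

```python
# Tiny language model: count word pairs, then suggest the most likely
# next word -- the same idea an LLM scales up by many orders of magnitude.
from collections import Counter, defaultdict

def train_bigrams(text: str):
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, word: str):
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train_bigrams("the cat sat on the mat and the cat ran")
print(suggest(model, "the"))  # 'cat' (it followed 'the' twice, 'mat' only once)
```

The difference with modern LLMs is scale and architecture, not kind: both predict the next token from what came before.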