It's been a while since AI became part of everyday life for users all over the internet. When it first came out I was curious, yeah, like everyone, and tried some prompts to see “what this thing can do.” Then I never, ever used AI again, because I never saw it as something necessary; we already had automated systems. So time kept moving along for me until this day, when I realized something: how dependent people are on this shit. I mean, REALLY dependent, and then they go “I only used it for school 😢” like, are you serious, dude? Do you leave your future to an algorithm?

Coming back to my question: years have passed, and I think we all have a more developed opinion about AI by now. What do you think? Fuck it and use it anyway? If that's the case, why blame companies for making it more accessible? Like Microsoft putting Copilot even in Notepad. “Microsoft just wants to collect your data.” Isn't that what LLMs are about? Why blame them if you're going to use the same problem with a different flavor anyway? I'm not defending Microsoft here, I'm only using it as an example; swap in the company of your own preference.
I have less to say about the tech side than about this whole forced mass adoption of LLMs and how I've seen people react to the things it does.
I agree that they're unethically made by stealing data. That's indisputable. What I fail to grasp is the purpose of hating a technology in itself. Blame and responsibility are weird concepts, and I'm not knowledgeable in philosophy or anything related to this. What I can tell, however, is that hating on the tech itself distracts people from blaming those actually responsible: the humans doing the enshittification. The billionaires, the dictators…
(tangent) and I'd go as far as to say that anyone who politically represents more people than they know personally is not the type of politician that should be allowed. Same if they have enough power to inflict violence on a lot of people. But this is just my inner anarchist speculating about how an ethical society with limited hierarchy might work.
That's something I've been trying to convince people of whenever I talk with them about LLMs and similar generative technology. I've met so many people who just throw a big blanket of hate right over the entire concept of the technology, and I find it so bizarre. Criticize the people and corporations using the tech irresponsibly! It's like a mass redirection of what and who is really to blame. I think that's partially because “AI” is something that sort of anthropomorphizes itself to a large portion of society, and most people think the “personality within” the technology is responsible for the perceived misdeeds.
I figure when all is said and done and historians and researchers look back on this time, there will be a lot to learn about human behavior that we likely have little grasp of at the moment.