• 0 Posts
  • 136 Comments
Joined 2 years ago
Cake day: June 28th, 2023

  • Maybe it’s also considered sabotage if people (like me) try prompting the AI with 5 to 10 different questions they are knowledgeable about, get wrong (but smart-sounding) answers every time (despite clearly worded prompts), and then refuse to keep trying. I guess one is expected to try and try again with different questions until a correct answer comes out and then use that one to “evangelize” about the virtues of AI.




  • I believe that promptfondlers and boosters are particularly good at “kissing up”, which may help their careers even during an AI winter. This is something we have to be prepared for, sadly. However, some of those people could still be in for a rude awakening if someone actually pays attention to the quality and usefulness of their work.


  • By the way, I know there is an argument that “low-skilled” jobs should not be eliminated because there are supposedly people who are unable to perform more demanding and varied tasks. But I believe this is partly a myth that was invented as a result of the industrial revolution, because back then, a very large number of people were needed to do such jobs. In addition, this doesn’t even address the fact that many of these jobs require some type of specific skill anyway (which isn’t getting rewarded appropriately, though).

    The best example to this day is immigrants who have to do “low-skilled” jobs even though they hold academic degrees from their home countries. In such cases, I believe that automation could even lead to the creation of more jobs that match their true skill levels.

    Another problem is that, especially in countries like the US, low-wage jobs are used as a substitute for a reasonable social safety net.

    AI (especially large language models) is, of course, a separate issue, because it is claimed that AI could replace highly skilled and creative workers - a claim that is used as a constant threat on the one hand and, on the other, is not even remotely true according to current experience.


  • In my experience, the large self-service kiosks at McDonald’s are pretty decent (unless they crash, which happens too often). Many people (including myself) use them voluntarily, because it is nice to have more control over your order and more visual information about it (including prices, product images, nutritional information, allergens etc.). You don’t even need to wait in line anymore if their staff brings your order directly to your table. You don’t need to use any tricks to speak to a human either, because you can always go to the counter and order there instead. However, this only works because the kiosks are customer-friendly enough that you don’t have to force most people to use them.

    I know that even those kiosks probably aren’t great in the sense that they may replace some jobs, at least over the short term. However, if customers truly like something, this might still lead to more demand and thus more jobs in other areas: people who carry your order to your table, people who prepare the food itself, people who code those apps (unless they are truly “vibe-coded”), people who maintain the kiosks, people who design their content etc.

    However, the current “breed” of AI bots is a far cry from even that, in my impression. They are really primarily used as a threat to “uppity” labor, and who cares about the customers?


  • To me, in terms of the chatbot’s role, this seems possibly even more damning than the suicides. Apparently, the chatbot didn’t just support this man’s delusions about his mother and his ex-girlfriend being after him, but even made up additional delusions on its own, further “incriminating” various people including his mother, whom he eventually killed. In addition, the chatbot apparently gave the man a “Delusional Risk Score” of “Near zero”.

    On the other hand, I’m sure people are going to come up with excuses even for this by blaming the user, his mental illness, his mother or even society at large.




  • HedyL to TechTakes · Excel COPILOT: make up spreadsheet numbers with AI! · 30 days ago

    The most useful thing would be if mid-level users had a system where they could just go “I want these cells to be filled with the second word of the info of the cell next to it”,

    In such a case, it would also be very useful if the AI asked for clarification first, such as: “By ‘the cell next to it’, you mean the cells in column No. xxx, is that correct?”

    Now I wonder whether AI chatbots typically do that. In my (limited) experience, they often don’t. They tend to hallucinate an answer rather than ask for clarification, and if the answer is wrong, I’m supposedly to blame because I prompted them wrong.
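
    For what it’s worth, the deterministic core of that request is tiny. Below is a minimal, purely illustrative Python sketch (the “info” field name and the sample rows are made up for the example, not taken from the Copilot feature being discussed); it fills in the second word of the neighboring cell and also surfaces the edge cases (single-word cells, stray whitespace) that a clarification-asking system would need to pin down.

    ```python
    # Illustrative sketch only: "fill these cells with the second word of the
    # info of the cell next to it", done deterministically on made-up data.

    rows = [
        {"info": "Acme Widgets Ltd."},
        {"info": "Foo"},            # fewer than two words
        {"info": "  Bar   Baz  "},  # stray whitespace
    ]

    def second_word(text: str) -> str:
        """Return the second whitespace-separated word, or "" if there is none."""
        words = text.split()
        return words[1] if len(words) > 1 else ""

    # Fill the "next" cell in each row from its neighbor.
    for row in rows:
        row["second_word"] = second_word(row["info"])

    print([row["second_word"] for row in rows])  # ['Widgets', '', 'Baz']
    ```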



  • This week I heard that, supposedly, all of those failed AI initiatives did in fact deliver the promised 40% productivity gains, but the companies (supposedly) didn’t reap any returns “because they failed to make the necessary organizational changes” (which happens all the time, supposedly).

    Is this the new “official” talking point?

    Also, according to the university professor (!) who gave the talk, the blockchain and web3 are soon going to solve the problems related to AI-generated deepfakes. They were dead serious, apparently. And someone paid them to give that talk.