• @andallthat@lemmy.world

      Machine learning models have existed for a long time. At their core they are predictors: you feed them data, you carefully tune the model’s parameters for a long time, and eventually you have a model that can make predictions in a specific domain. That way you can have one model trained specifically to identify patterns that look like cancer in medical imaging, and another (like in your example) to predict a protein’s structure.
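      A minimal sketch of that workflow, using scikit-learn’s built-in breast-cancer dataset as a stand-in for a real medical-imaging pipeline (the dataset and model choice here are purely illustrative):

      ```python
      # Classic ML: a predictor trained on data from one narrow domain.
      from sklearn.datasets import load_breast_cancer
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split

      # Tabular features derived from medical imaging, labeled benign/malignant.
      X, y = load_breast_cancer(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # "Carefully tune the parameters": fit() adjusts them from the data;
      # in practice you would also spend a long time tuning hyperparameters.
      model = RandomForestClassifier(n_estimators=200, random_state=0)
      model.fit(X_train, y_train)

      print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
      ```

      The resulting model is useless outside its domain: it predicts tumor labels and nothing else.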

      LLMs are ML models too, but they are trained on language. They learn to identify statistical patterns in human text and to generate long passages that follow those patterns. They also accept input in natural language.
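      The core objective can be shown with a toy model. This bigram lookup table is an illustration I’m adding, not how any real LLM is built; an actual LLM replaces the table with a neural network holding billions of parameters, but the task, predicting the next token from patterns in training text, is the same:

      ```python
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ate the rat".split()

      # Count which token follows which in the training text.
      counts = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          counts[prev][nxt] += 1

      def predict_next(token):
          # Return the continuation seen most often during training.
          return counts[token].most_common(1)[0][0]

      print(predict_next("the"))  # -> "cat"
      ```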

      The hype consists of slapping a new “AI” marketing label onto all of machine learning, lumping LLMs in with every other type of model, and creating the delusion that predicting a protein’s structure was done by people at Google casually throwing prompts at Gemini.

      And since these LLMs are exceptionally power-hungry and expensive (it turns out that predicting human language from a whole internet’s worth of training data requires incredibly complex models), the hype exists to attract the trillions of investment they need. GenAI is not the whole of machine learning, and saying “Copilot is not worth the cost of the energy that’s needed to power it” does not put obstacles in the way of ML used for cancer research.

      • @DragonSidedD@lemmy.ml

        AlphaFold is built on a transformer-style attention architecture (AlphaFold 2’s core module, the Evoformer). It’s essentially the same machinery as an LLM, just trained on genetic / protein sequences instead of Reddit posts.
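        For what it’s worth, the shared machinery is scaled dot-product attention: every position in a sequence (tokens for an LLM, residues or MSA entries for AlphaFold) attends to every other position. A rough NumPy sketch of that single operation, not of either model:

        ```python
        import numpy as np

        def attention(Q, K, V):
            # softmax(Q K^T / sqrt(d)) V
            d = Q.shape[-1]
            scores = Q @ K.T / np.sqrt(d)
            weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
            weights /= weights.sum(axis=-1, keepdims=True)
            return weights @ V

        rng = np.random.default_rng(0)
        seq = rng.normal(size=(5, 8))          # 5 tokens or residues, 8-dim embeddings
        print(attention(seq, seq, seq).shape)  # (5, 8)
        ```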