• @Allero@lemmy.today
    link
    fedilink
    English
    6
    edit-2
    1 day ago

    There is already GPT4All.

    Convenient graphical interface, any model you like (for Llama fans - of course it’s there), fully local, easy to opt in or out of data collection, and no fuss to install - it’s just a Linux/Windows/MacOS app.

    For Linux folks, it is also available as flatpak for your convenience.
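
    If you’d rather script it than use the GUI, the same project also ships Python bindings (the gpt4all package on PyPI). A minimal sketch, with the model filename only as an example of the kind the app offers for download:

    ```python
    from gpt4all import GPT4All

    # Example model file; GPT4All downloads it on first use and caches it locally
    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

    # chat_session keeps conversation context between generate() calls
    with model.chat_session():
        print(model.generate("Explain what a local LLM is in two sentences.", max_tokens=200))
    ```

    Once the model file is cached, generation itself runs entirely on the local machine.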

    • @Squizzy@lemmy.world
      link
      fedilink
      English
      17 hours ago

      So it doesn’t require internet access at all? I would only use these on a disconnected part of my network.

      • @Allero@lemmy.today
        link
        fedilink
        English
        16 hours ago

        Yes, it works perfectly well without Internet. Tried it both on a physically disconnected PC and on a laptop in airplane mode.

  • @Obelix@feddit.org
    link
    fedilink
    English
    2
    22 hours ago

    Google hosting their shit on Microsoft’s servers, telling you to sideload, and not using their own software distribution method for their own OS is kind of crazy if you think about it.

  • @KeenFlame@feddit.nu
    link
    fedilink
    English
    3
    1 day ago

    Wonder what this has over its competitors; I hesitate to think they released this just for fun, though.

      • AmbiguousProps
        link
        fedilink
        English
        23
        3 days ago

        That’s fair, but in that case I think I’d rather self-host an Ollama server and connect to it with an Android client. Much better performance.
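
        For anyone wondering what the client side of that looks like: Ollama serves a plain HTTP API on port 11434, so an Android app just needs to POST to the server’s LAN address. A minimal sketch in Python, assuming the server was started with OLLAMA_HOST=0.0.0.0 so it listens beyond localhost; the IP and model tag below are placeholders:

        ```python
        import requests

        # Placeholder LAN address of the self-hosted Ollama server
        OLLAMA_URL = "http://192.168.1.50:11434/api/chat"

        resp = requests.post(
            OLLAMA_URL,
            json={
                "model": "llama3.1:8b",  # any model already pulled on the server
                "messages": [{"role": "user", "content": "Hello from my phone"}],
                "stream": False,
            },
            timeout=120,
        )
        print(resp.json()["message"]["content"])
        ```

        An Android client does exactly the same request; nothing about the protocol is phone-specific.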

        • @OhVenus_Baby@lemmy.ml
          link
          fedilink
          English
          2
          2 days ago

          How does Ollama compare to GPT models? I used the paid tier for work and I’m curious how this stacks up.

          • AmbiguousProps
            link
            fedilink
            English
            1
            2 days ago

            It’s decent, with the DeepSeek model anyway. It’s not as fast and has a lower parameter count, though. You might just need to try it and see whether it fits your needs.

        • Greg Clarke
          link
          fedilink
          English
          4
          3 days ago

          Yes, that’s my setup. But this will be useful for cases where the internet connection is not reliable.

        • Greg Clarke
          link
          fedilink
          English
          3
          3 days ago

          Has this actually been done? If so, I assume it would only be able to use the CPU.

          • @Euphoma@lemmy.ml
            link
            fedilink
            English
            7
            2 days ago

            Yeah, I have it in Termux. Ollama is in the package repos for Termux. The speed it generates at does feel like CPU speed, but idk.
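
            One way to check is the timing data Ollama returns with each response; a rough sketch, with the model tag only as an example:

            ```python
            import requests

            resp = requests.post(
                "http://localhost:11434/api/generate",
                json={"model": "llama3.2:1b", "prompt": "One sentence about Termux.", "stream": False},
                timeout=600,
            )
            data = resp.json()

            # eval_count = tokens generated, eval_duration = generation time in nanoseconds
            print(f"{data['eval_count'] / (data['eval_duration'] / 1e9):.1f} tokens/sec")
            ```

            A low tokens/sec figure even on a small model is roughly what you’d expect from CPU-only inference on a phone.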

    • th3dogcow
      link
      fedilink
      English
      4
      3 days ago

      Didn’t know about this. Checking it out now, thanks!

    • Rhaedas
      link
      fedilink
      47
      3 days ago

      And unmonitored? Don’t trust anything from Google anymore.

      What makes this better than Ollama?

      • @Angelusz@lemmy.world
        link
        fedilink
        English
        25
        3 days ago

        Quote: “all running locally, without needing an internet connection once the model is loaded. Experiment with different models, chat, ask questions with images, explore prompts, and more!”

        So you can download it, set the device to airplane mode, and never go online again - they won’t be able to monitor anything, even if there’s code for that included.

        • melroy
          link
          fedilink
          12
          3 days ago

          never go online again - they won’t be able to monitor anything, even if there’s code for that included.

          Sounds counter-intuitive on a smartphone, where you most likely want to be online again at some point in time.

          • @Angelusz@lemmy.world
            link
            fedilink
            English
            2
            2 days ago

            So trust them. If you don’t and still want to use this, buy a separate device for it, or use a VM.

            Can’t? This is not for you.

            • melroy
              link
              fedilink
              2
              2 days ago

              I’m not gonna use my smartphone as a local LLM machine.

    • @ofcourse@lemmy.ml
      link
      fedilink
      English
      5
      3 days ago

      Censoring is model-dependent, so you can select one of the models without the guardrails.