If so, I’d like to ask you a few questions:

  • Do you use an AI code autocomplete, or do you type into a chat?
  • Do you consider the environmental damage that using AI can cause?
  • What type of AI do you use?
  • What do you usually ask AI to do?
  • BartyDeCanter@lemmy.sdf.org · 3 hours ago

    I don’t use AI when I’m learning a new system, framework or language because I won’t actually learn it.

    I don’t use AI when I need to make a small change on a system I know well, because I can make it just as fast and have better insight into how it all works.

    I don’t use AI when I’m developing a new system because I want to understand how it works and writing the code helps me refine my ideas.

    I don’t use AI when I’m working on something with security or copyright concerns.

    Basically, the only time I use AI is when I’m making a quick throwaway script in a language I’m not fluent in.

  • fruitycoder@sh.itjust.works · 3 hours ago

    I use Continue in VSCode hooked up to Ollama or Mistral. Sometimes I just ask a chat to “make a script/config that does <my MVP of the project, maybe even less>” (a rough sketch of that kind of local-model call is at the end of this comment).

    How much I use it depends on how little I’m invested. My rule is that I try to correct a bad output ONCE; I’m not going to argue it into fucking getting it right.

    I prefer net-new code and “add this feature” requests. Ironically, good refactoring goes a long way: the less it has to adjust, the better, and the less I have to review, the better.
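
    For reference, a rough sketch of what “hooked up to Ollama” boils down to under the hood: one request to a locally running Ollama server’s generate endpoint. The model name, prompt, and default port are assumptions for the example.

    ```python
    import json
    import urllib.request

    # Sketch: ask a locally running Ollama server to draft a throwaway script.
    # Assumes Ollama's default port (11434) and that a model has been pulled;
    # the model name and prompt below are placeholders.
    payload = {
        "model": "mistral",  # hypothetical: whichever local model you pulled
        "prompt": "Write a bash script that backs up ~/projects to /mnt/backup",
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```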

  • Scrath@lemmy.dbzer0.com · 4 hours ago

    I mostly dislike using AI to code. The one exception came up recently when I was fighting with a Python script and didn’t understand why it was behaving the way it did. I asked AI for possible causes and pretty quickly managed to fix it. Sometimes it’s just nice to have some possible causes for a bug listed so you can check them out.

  • jasory@programming.dev · 6 hours ago

    The only code-generation assistance I use comes in the form of compilers. For fun I tried to use the free version of ChatGPT to replicate an algorithm I recently designed, and after about half an hour I could only get it to produce the same trivial algorithms you find in blog posts, even when feeding it much more sophisticated approaches.

  • technocrit@lemmy.dbzer0.com · edited · 5 hours ago

    Nobody uses “AI” because it doesn’t exist.

    Nobody in this thread is talking about any program that’s remotely “intelligent”.

    As for technologies falsely hyped as “AI”, I use Google’s search summaries. They’re usually quicker than clicking through to the actual sources, but I still have that option as needed.

  • dumples@midwest.social · 7 hours ago

    I am a data scientist and we use Databricks, which has Copilot (I think) installed by default. That gives us an autocomplete, which I use the most because it can do some of the tedious steps of an analysis if I write good comments, which I do anyhow (a sketch of that comment-then-completion pattern is at the end of this comment). It’s around 50% accurate, and most accurate for simple, mindless things or getting the names of things correct.

    There is a code-generating block tool that I never use. There is also something that troubleshoots and diagnoses errors. It’s mostly useless, but it has been good at finding missing commas and other simple things. Its suggestions are sometimes terrible enough that I mostly ignore it.

    We have a Copilot bot as part of our GitHub (I don’t know, is this standard now?) that I actually enjoy and find useful. It writes great summaries of what code was committed, in a nice format, and they seem almost 100% accurate for me. Most importantly, it has a great spellchecker as part of its suggestions. I am a terrible speller and never double-check names, so it can fix them both in the notes and in my code (it fixes them everywhere in the code, which is nice). The rest of the suggestions are okay. Some are useful, but some are way off or overengineered for what I am doing. I like this because it just comes in at the end of my process and I can choose to accept or deny each suggestion.
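
    To illustrate the comment-driven autocomplete workflow described above: the analyst writes a descriptive comment and the tool proposes the boilerplate line that follows. The file name, dataframe, and column names here are invented for the sketch.

    ```python
    import pandas as pd

    # Illustration of comment-driven autocomplete: you write the comment,
    # the assistant proposes the code underneath. The input file and
    # column names are hypothetical.
    sales = pd.read_csv("sales.csv")

    # average revenue per region, sorted descending
    avg_revenue = (
        sales.groupby("region")["revenue"]
        .mean()
        .sort_values(ascending=False)
    )

    # keep only regions above the overall mean
    above_avg = avg_revenue[avg_revenue > avg_revenue.mean()]
    print(above_avg)
    ```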

  • DarkAri@lemmy.blahaj.zone · edited · 6 hours ago

    -Type in chat, mostly for reference, snippets, help debugging, and questions about libraries or something.

    -Not as much as I should

    -ChatGPT is my favorite; I have been a user since day one and have never tried any others.

    -As someone who isn’t a leet-tier programmer, mostly to help out and write code snippets, although I often have to modify them and do things myself as well because sometimes the AI will fail. Also, after a certain level of complexity it starts to struggle. It’s better for snippets and examples, but you often have to integrate them yourself if you are creating something unique that the AI can’t more or less just copy, paste, and translate.

  • mayorchid@lemmy.world · 6 hours ago

    I use whatever line completion is built into JetBrains out of the box. Other than that, no AI whatsoever.

    Only about 10% of my time at work is actually spent writing code. At least double that time is spent reading code, and the rest is documentation, coordination, and communication work that depends on precise understanding of the code I’m responsible for. If I let AI write code, maybe (doubtfully) that would save a little time out of the 10%, but it would cost me dearly in the other two categories. The code I write by hand is minimal, clear, and easy to understand, and I understand it better because I wrote it myself. I understand all the code around it, too.

    If you ask me, AI code generation is based entirely on non-programmers’ incorrect understanding of what programming is.

  • melfie@lemy.lol · 8 hours ago

    I use Copilot, mostly with Claude Sonnet 4.5. I don’t use the autocomplete because it’s useless and annoying. I mostly chat with it: I give it specific instructions for how to implement small changes, carefully review its code, and make it fix anything I don’t like. Then I have it write test scripts that call APIs with curl and otherwise exercise the system in a staging environment, outputting data so I can manually verify that all of its changes work as expected, in case I overlooked something in the automated tests (see the sketch at the end of this comment).

    As far as environmental impact goes, training is where most of it occurs; inference, RAG, querying vector databases, etc. are fairly minimal AFAIK.
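
    A minimal sketch of the kind of staging verification script described above, using Python’s requests in place of curl; the base URL, endpoints, and payload are placeholders, not anything from the original comment.

    ```python
    import requests

    # Sketch of a staging smoke-test script. The base URL, endpoints, and
    # payload are hypothetical; Python's requests stands in for the curl
    # calls mentioned above.
    BASE_URL = "https://staging.example.com"

    def check(method: str, path: str, expected_status: int, **kwargs) -> None:
        resp = requests.request(method, f"{BASE_URL}{path}", timeout=10, **kwargs)
        print(f"{method} {path} -> {resp.status_code}")
        print(resp.text[:500])  # dump part of the body for manual review
        assert resp.status_code == expected_status, f"unexpected status for {path}"

    check("POST", "/api/orders", 201, json={"item": "widget", "qty": 2})
    check("GET", "/api/orders?item=widget", 200)
    ```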

  • mvirts@lemmy.world · 8 hours ago

    Just IntelliSense and other language servers. I remember when Microsoft was boasting about how much of their code was generated by IntelliSense. Now whenever I hear them hype how much AI-written code they use, I am reminded of it. It’s not an LLM, but it is still a type of AI.

  • Artaca@lemdro.id · 11 hours ago

    Architect here, not a programmer. I’ve taken Python classes but was never good enough to use it regularly. Using Gemini, I’ve been able to work through creating half a dozen scripts for automating tedious tasks and optimizing models/drawings. I’m hoping to improve so I can eventually make use of it for even more useful things, but as a start it’s been awesome. It’s not perfect, and it makes a lot of mistakes, but I’ve been able to work with it to get things right.
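
    Purely as a hypothetical illustration of the kind of tedious-task script this workflow can produce (the folder and naming convention are made up, not taken from the comment):

    ```python
    from pathlib import Path

    # Hypothetical example of a small automation script an LLM can help a
    # non-programmer put together: renaming exported drawing files to a
    # consistent scheme. Folder and naming convention are invented.
    export_dir = Path("exports")

    for pdf in sorted(export_dir.glob("*.pdf")):
        # e.g. "Floor Plan - Level 2 (rev B).pdf" -> "floor-plan-level-2-rev-b.pdf"
        clean = (
            pdf.stem.lower()
            .replace("(", "").replace(")", "")
            .replace(" - ", " ")
        )
        new_name = "-".join(clean.split()) + pdf.suffix
        pdf.rename(export_dir / new_name)
        print(f"{pdf.name} -> {new_name}")
    ```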

  • Kissaki@programming.dev · edited · 8 hours ago

    Visual Studio provides some kind of AI even without Copilot.

    Inline (single-line) completions: not always, but I regularly find them quite useful.

    Repeated-edits continuation: I haven’t seen these in a while, but I have used them on maybe two or three occasions. I am very selective about them because they are not deterministic like refactorings and quick actions, whose correctness I can be confident in even when applying them across many files and lines. For example, “invert if” changes the indentation of many lines; if an LLM makes that change, you can’t be sure it didn’t alter any of those lines (see the sketch after these items).

    Multi-line completions/suggestions: I disabled these because they push away the code and context I want to see and cause noisy movement, for marginal if any usefulness in my limited experience.
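
    To make the invert-if point concrete, here is what that refactoring does, shown in Python rather than the Visual Studio context; the function is a made-up example. A deterministic refactoring guarantees both versions behave identically, whereas an LLM rewriting the same span offers no such guarantee.

    ```python
    # Before: the interesting work is nested under the condition.
    def describe(n: int) -> str:
        if n >= 0:
            label = "non-negative"
            detail = f"{n} squared is {n * n}"
            return f"{label}: {detail}"
        return "negative"

    # After "invert if": an early return flattens the body, so every line of
    # the original block changes indentation, even though behavior is the same.
    def describe_inverted(n: int) -> str:
        if n < 0:
            return "negative"
        label = "non-negative"
        detail = f"{n} squared is {n * n}"
        return f"{label}: {detail}"

    # The two versions are equivalent; a deterministic tool guarantees this.
    assert describe(5) == describe_inverted(5)
    assert describe(-3) == describe_inverted(-3)
    ```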

    At my company we’re still in a selective testing phase regarding customer agreements and source-code integration with AI providers, and my team is not part of that yet. So I don’t have practical experience with any analysis, generation, or chat functionality that has project context. I’m skeptical but somewhat interested.

    I did try it on a private project, a Nushell plugin in Rust, a language largely unfamiliar to me, and tried to have Copilot generate methods for me. It felt very messy and confusing. The generated code was often not correct or sound.

    I use Phind and, more recently, ChatGPT for research/search queries. I’m mindful of the type of queries I make and which provider or service I use. In general, I’m a friend of reference docs, which are the only definitive source after all. I’m also aware and mindful of the environmental impact of indirectly costly free AI search/chat. Often, AI can answer my questions faster than searching via a search engine and digging through upstream docs, especially when I’m familiar with the tech and can quickly be reminded of something, guide the AI when it responds with bullshit or suboptimal or questionable stuff, or just as quickly disregard it entirely when it doesn’t seem capable of answering what I’m looking for.