Disclaimer: these tweets aren’t real.

  • Vinny_93@lemmy.world · 2 days ago

    It’s just prompt engineering for coding. Let an AI dump a bunch of code for you, debug until it no longer errors, pull request and repeat next sprint.

    5% of the time, it works every time

    • CanadaPlus@lemmy.sdf.org · 1 day ago

      The critical detail being that you don’t actually know what’s inside (and it’s definitely bad). Just using LLM assistance for your boilerplate code doesn’t count.

    • troed@fedia.io · 2 days ago

      Brought to you by (us) security researchers who will happily come in and sort out your security issues later. For a very hefty hourly fee.

      • marcos@lemmy.world · 1 day ago

        > who will happily come in and sort out your security issues later

        I really doubt anybody will be happy about it, even considering the size of the fees. Also, you have a very high estimate of those people’s capacity to notice they need to call you; I really doubt it’s deserved.

      • Flatfire@lemmy.ca · 1 day ago

        Yeah, because at least a decent 3rd party might hand you documentation and have the sense to build something consistent and maintainable. AI has a limited context window and frequently suffers a kind of short-term memory loss that results in repeated work, or variations in work that confuse the end result.

      • ℍ𝕂-𝟞𝟝@sopuli.xyz · 1 day ago

        Yeah, unlike an outsourcing outfit, an AI company won’t take the fall when, given shit requirements and shit pay, they deliver shit work.