Lena@gregtech.eu to Programmer Humor@programming.dev · English · 7 days ago
"Source code file" (image, gregtech.eu) · 240 comments · 866↑ / 10↓
hperrin@lemmy.ca · English · 30↑ · 7 days ago
Hey Grok, take this one file out of the context of my 250,000 line project and give me that delicious AI slop!
Zetta@mander.xyz · 2↑ · edited 6 days ago
Perfect, Grok's context limit is 256,000 tokens, and as we all know, LLM recall only gets better as the context fills up, so you'll get perfect slop that works amazingly. /s More info on the quality drop as context grows here: https://github.com/NVIDIA/RULER
hperrin@lemmy.ca · English · 1↑ · 6 days ago
250,000 lines is way more than 250,000 tokens, so even that context is too small.
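A rough back-of-envelope check of that claim, assuming ~40 characters per source line and the common ~4 characters-per-token heuristic (both are assumptions, not tokenizer-exact numbers; real counts vary by model and language):

```python
# Back-of-envelope: estimated tokens in a 250,000-line codebase.
AVG_CHARS_PER_LINE = 40   # assumption: typical source line length
CHARS_PER_TOKEN = 4       # assumption: rough average for code

lines = 250_000
est_tokens = lines * AVG_CHARS_PER_LINE // CHARS_PER_TOKEN
context_window = 256_000  # Grok's limit, per the comment above

print(est_tokens)                      # ~2,500,000 tokens
print(est_tokens / context_window)     # ~10x over the window
```

So even under generous assumptions the project would be roughly an order of magnitude larger than the context window, supporting the comment.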
hperrin@lemmy.ca · English · 26↑ · 7 days ago
Just really fuck up this shit. I want it unrecognizable!