

Certainly possible
I’m also genuinely a little bit alarmed looking back now at my pre-LLM code and seeing how its quality compares to my with-LLM code.
IDK, I just popped open a project from 10 years ago and it’s perfectly clean, it’s actually better than some of my modern code because it’s not LLM-ified to save time.
I think it has a lot more to do with whether it was made in that “kind of crappy, IDK what I’m doing” phase of programming. Some of your old stuff is going to be in that category, sure. As long as you’re out of that, however long it took you to get there or however far away it was in time, your code should be good.
Yeah, that sounds about right lol. All my Python projects for years were basically writing C in Python. It actually took me until I got to look at the code ChatGPT likes to generate before I learned idiomatic Python. My first database project was based on the Unix philosophy, where everything was strings (no ID keys, no normalization), because Unix is good.
The client wasn’t happy when they looked at the DB code lmao. Whatever, it worked, they still paid us and I didn’t do it again.
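(For anyone curious what “writing C in Python” looked like, it was roughly this kind of thing. Contrived example from memory, not actual client code:

    # C-style Python: manual index loop and accumulation
    def total_length(words):
        total = 0
        i = 0
        while i < len(words):
            total = total + len(words[i])
            i = i + 1
        return total

    # The idiomatic version of the same thing
    def total_length(words):
        return sum(len(w) for w in words)

Multiply that by a whole codebase and you get the idea.)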
Am I the only one who likes looking at my old code? Generally I feel like it’s alright.
Usually the first project when I’m learning how to use some new language or environment is super-shitty. I can tell it’s very bad, and usually I don’t like interacting with it if I have to make changes, but it’s still not overly painful. It’s just bad code. And that one exception aside, I generally like looking at my code.
Yeah. I feel like in a few years, when literally nothing works or is maintainable, people are going to have a resurgent realization of the importance of reliability in software design: that just throwing bodies and lines of code at a problem builds up a shaky structure that stops being workable once it grows beyond a certain size.
We used to know that, and somehow we forgot.
Yeah. I have no idea what the answer is, just describing the nature of the issue. I come from the days when you would maybe import like one library to do something special like .png reading or something, and you basically did all the rest yourself. The way programming gets done today is wild to me.
I sort of have a suspicion that there is some mathematical proof that, as soon as it becomes quick and easy to import an arbitrary number of dependencies into your project along with their dependencies, the average project’s dependency tree starts to follow an exponential growth curve, increasing every year without limit.
I notice that this stuff didn’t happen with package managers + autoconf/automake. It was only once it became super-trivial to do from the programmer side, that the growth curve started. I’ve literally had trivial projects pull in thousands of dependencies recursively, because it’s easier to do that than to take literally one hour implementing a little modified-file watcher function or something.
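To give a sense of the scale I mean: a polling modified-file watcher in plain stdlib Python is maybe fifteen lines. Rough sketch off the top of my head, not production code:

    import os
    import time

    def watch(paths, on_change, interval=1.0):
        # Call on_change(path) whenever a watched file's mtime changes.
        last = {p: os.path.getmtime(p) for p in paths}
        while True:
            time.sleep(interval)
            for p in paths:
                try:
                    mtime = os.path.getmtime(p)
                except FileNotFoundError:
                    continue
                if mtime != last.get(p):
                    last[p] = mtime
                    on_change(p)

That’s essentially the whole feature, and pulling in a watcher package plus its transitive dependencies is still the path of least resistance.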
I thought I had it worked out, how to strike a balance so I can keep my focus intact and let it be helpful without wasting time constantly correcting its stuff or shying away from actually paying attention to the code. But I think my strategy of “let the LLM generate a bunch of vomit to get things started and then take on the correction and augmentation from a human standpoint” has let the overall designs at a high level get a lot sloppier than they used to be.
Yeah, you might be right, it might be time to just set the stuff aside except for very specialized uses.