• 0 Posts
  • 21 Comments
Joined 2 years ago
Cake day: June 9th, 2023


  • merc@sh.itjust.works to Memes@lemmy.ml · The tragedy of the commons
    30 up · 1 down · 6 days ago

    The Tragedy of the Commons was popularized by a man who was anti-immigrant and pro-eugenics, and it’s not good science. The good science on it was done by Elinor Ostrom who won a Nobel-ish prize for fieldwork showing that various societies around the world had solved the issues of the governance of commons.

    The thing is, Ostrom didn’t disprove it as a concept. She just proved that with the right norms and rules in place it doesn’t inevitably lead to collapse. IMO it’s not about capitalism or communism, it’s about population. A small number of people who all know each other can negotiate an arrangement that everyone can agree to. But, once you have thousands or millions of people, and each user of the commons knows almost none of the other users, it’s different. At that point you need a government to set rules, and law enforcement to enforce those rules. That, of course, fails when the commons is something like the world’s atmosphere and there’s no worldwide government that can set and enforce rules.



  • merc@sh.itjust.works to Memes@lemmy.ml · Weapons Of Mass Deception
    27 up · 9 down · 17 days ago

    Is reading comprehension really that bad? The argument is that Iran is on the verge of having nuclear weapons. The justification for attacking them is that they need to be stopped before they cross the line.

    I’m not saying I agree with this line of reasoning, but the clear idea is that Iran doesn’t currently have nuclear weapons.





  • I think “I don’t know” might sometimes be found in the training data. But, I’m sure they optimize the meta-prompts so that it never shows up in a response to people. While it might be the “honest” answer a lot of the time, the makers of these LLMs seem to believe that people would prefer confident bullshit that’s wrong over “I don’t know”.


  • No, I’m sure you’re wrong. There’s a certain cheerful confidence that you get from every LLM response. It’s this upbeat “can do attitude” brimming with confidence mixed with subservience that is definitely not the standard way people communicate on the Internet, let alone Stack Overflow. Sure, sometimes people answering questions are overconfident, but it’s often an arrogant kind of confidence, not a subservient kind of confidence you get from LLMs.

    I don’t think an LLM can sound like it lacks confidence for the right reasons, but it can definitely pull off a lack of confidence if it’s prompted correctly. To actually lack confidence it would have to have an understanding of the situation. But, to imitate a lack of confidence, all it would need to do is draw on all the training data it has where the response to a question is one where someone lacks confidence.

    Similarly, it’s not like it actually has confidence normally. It’s just been trained / meta-prompted to emit an answer in a style that mimics confidence.







  • No, I don’t think so. It’s true that many of the earliest programmers were female, but there were very few of them, and that was a long time ago.

    In a way, Ada Lovelace was the first programmer, but she never even touched a computer. The first programmers who did anything similar to today’s programming were from Grace Hopper’s era in the 1950s.

    In the late 1960s there were a lot of women working in computer programming relative to the size of the field, but the field was still tiny, only tens of thousands of people globally. By the 1970s it was already a majority-male profession, with women’s share down to only about 22.5%.

    That means that for 50 years, a time when the number of programmers increased by orders of magnitude, the programmers were mostly male.


  • Saying we can solve the fidelity problem is like Jules Verne in 1867 saying we could get to the moon with a cannon because of “what progress artillery science has made during the last few years”.

    Do rockets count as artillery science? The first rockets basically served the same purpose as artillery, and were operated by the same army groups. The innovation was to attach the propellant to the explosive charge and have it explode gradually rather than suddenly. Even the shape of a rocket is a refinement of the shape of an artillery shell.

    Verne wasn’t able to imagine artillery without the cannon barrel, but I’d argue he was right. It was basically “artillery science” that got humankind to the moon. The first “rocket artillery” were the V1 and V2 bombs. You could probably argue that the V1 wasn’t really artillery, and that’s fair, but also it wasn’t what the moon missions were based on. The moon missions were a refinement of the V2, which was a warhead delivered by launching something on a ballistic path.

    As for generative AI, it doesn’t have zero fidelity, it just has relatively low fidelity. What makes that worse is that it’s trained to sound extremely confident, so people trust it when they shouldn’t.

    Personally, I think it will take a very long time, if ever, before we get to the stage where “vibe coding” actually works well. OTOH, a more reasonable goal is a GenAI tool that you basically treat as an intern. You don’t trust it, you expect it to do bone-headed things frequently, but sometimes it can do grunt work for you. As long as you carefully check over its work, it might save you some time/effort. But, I’m not sure if that can be done at a price that makes sense. So far the GenAI companies are setting fire to money in the hope that there will eventually be a workable business model.


  • If you use it basically like you’d use an intern or junior dev, it could be useful.

    You wouldn’t allow them to check anything in themselves. You wouldn’t trust anything they did without carefully reading it over. You’d have to expect that they’d occasionally completely misunderstand the request. You’d treat them as someone completely lacking in common sense.

    If, with all those caveats, you can get this assistance for free or nearly free, it might be worth it. But, right now, all the AI companies are basically setting money on fire to try to drive demand. If people had to pay enough that the AI companies were able to break even, it might be so expensive it was no longer worth it.



  • Yeah, I love that one.

    “Try” is too hopeful. “fuck_around” makes it clear that you know what you’re doing is dangerous but you’re going to do it anyhow. I know that in some languages wrapping a lot of code in exception blocks is the norm, but I don’t like that. I think it should be something you only use rarely, and when you do it’s because you know you’re doing something that’s not safe in some way.

    “Catch” has never satisfied me. I mean, I know what it does, but it doesn’t seem to relate to “try”. Really, if “try” doesn’t succeed, the corresponding block should be “fail”. But, then you’d have the confusion of a block named “fail”, which isn’t ideal. But “find_out” pairs perfectly with “fuck_around” and makes it clear that if you got there it’s because something went wrong.
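    A rough sketch of what that pairing could look like in practice. Python won’t let you rename its keywords, but a context manager can fake it (the names `fuck_around` and `find_out` here are just the joke from above, not any real library):

    ```python
    from contextlib import contextmanager

    caught = []

    @contextmanager
    def fuck_around(find_out):
        # The "try": run the risky block; if it blows up, you find out.
        try:
            yield
        except Exception as exc:
            find_out(exc)

    # The dangerous block, wrapped exactly because we know it's not safe.
    with fuck_around(caught.append):
        1 / 0

    print(type(caught[0]).__name__)  # prints ZeroDivisionError
    ```

    The nice side effect of the naming is exactly the point made above: you’d only reach for a block called `fuck_around` when you know the code inside it is dangerous, not as a default wrapper around everything.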

    I also like “yeet”. Partly it’s fun for comedic value. But, it’s also good because “throw” feels like a casual game of catch in the park. “Yeet” feels more like it’s out of control: when you hit a “throw”, your code isn’t carefully handing off its state, it’s hitting the eject button and hoping for the best. You hope there’s an exception handler higher up the stack that will do the right thing, but it also might just bubble all the way up to the top and spit out a nasty exception for the user.
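    That “eject button” behaviour is easy to see in a toy example — a sketch in Python, with made-up function names, showing a raise bubbling past a frame that never handles it until something upstack catches it:

    ```python
    def yeet():
        # Eject: no local handling, just hope someone upstack cares.
        raise RuntimeError("state abandoned mid-flight")

    def middle():
        yeet()  # no handler here, so the exception keeps bubbling

    def top():
        # The handler higher up the stack that (hopefully) does the right thing.
        try:
            middle()
        except RuntimeError as exc:
            return f"caught at the top: {exc}"

    print(top())  # prints: caught at the top: state abandoned mid-flight
    ```

    If `top()` had no handler either, the exception would reach the interpreter and spit out a traceback at the user — the failure mode described above.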