

That’s why Elon wants to go to Mars.
I’m also on Mastodon as https://hachyderm.io/@BoydStephenSmithJr .
Honestly, I don’t like either programmability approach (vimscript/lua OR emacs-lisp), but I’ll probably just stick with neovim, because when I’m on a system without my configuration, I’m more productive there, and I don’t want to learn enough emacs-lisp “APIs” to reproduce my somewhat small vim configuration.
So, I think probably everyone in the thread is “correct”, but you are actually talking past one another.
I think the JS behavior is a bad design choice, but it is well documented and consistent across implementations.
I think it’s less about the type system, and more about the lack of a separate compilation step.
With a compilation step, you can have error messages that developers see, but users don’t. (Hopefully, these errors enable the developers to reduce the errors that users see, and just generally improve the UX, but that’s NOT guaranteed.)
Without a compilation step, you have to assign some semantics to whatever random source string your interpreter gets. And, while you can certainly make that an error, that would rarely be helpful for the user. JS instead made the choice to, as much as possible, avoid error semantics in favor of silent coercions, conversions, and conflations in order to make every attempt to not “error-out” on the user.
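To make that concrete, here are a few well-known cases where JavaScript hands back a value where other languages would raise an error (a TypeScript sketch; the `as any` casts are only there to get the raw JS behavior past the type checker, which would otherwise reject several of these lines for the developer):

```ts
// Each of these is assigned a value semantics at run time, never an error.
const a = 1 + "2";             // "12"       -- the number is coerced to a string
const b = ("5" as any) - 1;    // 4          -- the string is coerced to a number
const c = ("abc" as any) * 2;  // NaN        -- still a value, not an exception
const d = ({} as any).missing; // undefined  -- no "no such property" error
console.log(a, b, c, d);
```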
It would be a very painful decade indeed to now change the semantics for some JS source text.
PureScript is a great option. TypeScript is okay. You could also introduce a JS-to-JS “compilation” step that DID reject (or at least warn the developer about) source text that “should” be given an error semantic, but I don’t know an “off-the-shelf” approach for that other than JSLint.
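For illustration, here is what such a compile step buys you, sketched in TypeScript with a hypothetical exactly-two-argument `min2`: the bad call is flagged for the developer before any user ever runs it.

```ts
// Hypothetical exactly-two-argument minimum with a declared signature.
function min2(a: number, b: number): number {
  return a < b ? a : b;
}

// min2(); // compile-time: error TS2554: Expected 2 arguments, but got 0.
//         // In plain JS the same call would reach the user and quietly
//         // return undefined (both parameters are undefined).
```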
`(.)` is a valid expression in Haskell. Normally it is the prefix form of the infix operator `.` that does function composition:

```
(.) (2*) (1+) 3
= ((2*) . (1+)) 3
= 2 * (1 + 3)
= 8
```
But, the most common use of the word “boob” in my experience in Haskell is the “boobs operator”: `(.)(.)`. Its usage in Haskell is limited (tho valid), but its appearance in racy ASCII art predates even the first versions of Haskell.
Oddly enough, in Haskell (as defined by the report), `length` is monomorphic, so it just doesn’t work on tuples (type error).
Due to the way kinds (types of types) work in Haskell, `Foldable` instances can only operate over (i.e. `length` only counts) elements of the last/final type argument. So, for `(,)` it only counts the second part, which is always there exactly once. If you provided a `Foldable` for `(,)`, it would also have a length of 1.
This is my favorite language: GHC Haskell

```
GHCi> length (2, "foo")
1
```
Only if that browser somehow becomes overwhelmingly popular in a market segment BEFORE it gets JS support.
The run time still has to assign a semantics to it, even if that semantics is a fatal error. In a compiled language, you can prevent the run time from having to assign any semantics by eliminating the error condition at compile time.
Python also has no separate compilation step and yet it did not adopt this philosophy
Yes. It did. It didn’t assign exactly the same semantics, but it DOES assign a run time semantic to `min()`.
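For reference, both behaviors side by side (the Python behavior is quoted in a comment, since this sketch is TypeScript; the exact Python message may vary by version):

```ts
// JavaScript assigns a value semantics: no arguments yields Infinity.
console.log(Math.min()); // Infinity
// Python assigns an error semantics instead -- but it is still decided at
// run time, on the end user's machine:
//   >>> min()
//   TypeError: min expected at least 1 argument, got 0
```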
JS is the machine code of the web. Fewer and fewer people might write it directly, but it will live as long as the web platform does.
Not having a separate compilation step absolutely affects error handling. With a compilation step, you can have errors that will only be seen by, and must be addressed by, a developer prior to run time. Without one, the run time system must assign some semantics to the source code, no matter how erroneous it is.
No matter what advisory “signature” you imagine for a function, JS has to assign some run time semantics to that function being called incorrectly. Compiled languages do not have to provide a run time semantics for signatures that can be statically checked.
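A quick sketch of what those run-time semantics look like (the `add` function and the `as any` cast are just for illustration):

```ts
function add(a: number, b: number): number {
  return a + b;
}

// TypeScript rejects both bad calls at compile time (error TS2554),
// but plain JavaScript must give each one a run-time meaning:
const addJs = add as any;
console.log(addJs(1));       // NaN -- b is undefined, and 1 + undefined is NaN
console.log(addJs(1, 2, 3)); // 3   -- the extra argument is silently ignored
```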
All functions built with `function name(args) { body }` syntax have a `length` based on the form of `args`. Other ways to create functions might set `length` (I’m not sure). Most of the functions provided by the runtime environment do have a `length`, usually based on the number of “required” arguments.
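A few concrete data points (the function names are illustrative; the `length` values follow the ECMAScript spec):

```ts
function pair(a: unknown, b: unknown) { return [a, b]; }
function withDefault(a: unknown, b = 1, ...rest: unknown[]) { return a; }

console.log(pair.length);        // 2 -- counts the declared parameters
console.log(withDefault.length); // 1 -- stops at the first default or rest parameter
console.log(Math.min.length);    // 2 -- runtime-provided functions carry one too
console.log(((x: number) => x).length); // 1 -- arrow functions as well
```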
So, the language isn’t compiled (or wasn’t originally), so they couldn’t make `min()` be an error that only a developer saw; it had to be something that the runtime on the end-user system dealt with. So, it had to be assigned some value. Under those restrictions, it is the most mathematically sound value. It makes `minimum-exactly-2(x, min(<…>))` be exactly the same as `min(x, <…>)`, even when the “<…>” has no values.
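In code (with a hypothetical exactly-two-argument `min2` standing in for “minimum-exactly-2”):

```ts
// Hypothetical two-argument minimum.
const min2 = (x: number, y: number): number => (x < y ? x : y);

// Infinity is the identity element for min, so peeling off one argument
// never changes the answer -- even when nothing is left.
const rest: number[] = [];
console.log(min2(5, Math.min(...rest))); // 5 -- Math.min() of nothing is Infinity
console.log(Math.min(5, ...rest));       // 5 -- the same answer
```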
As a developer, I see a lot of value in static analysis, including refusing to generate output for sufficiently erroneous results of static analysis, so I don’t like using JS, and the language that I tinker with will definitely have a separate compilation step and reject the equivalent of `min()`. But, if I HAD to assign something like that a value, it probably would be a representation of infinity, if we had one (and we probably will, due to IEEE floats).
HTH
“Since the data is incomplete, we decided to make shit up”
Sounds like the statistics output would be heavily biased by whatever process you were using to turn names into genders. In short, a bad idea.
It’s like intraoffice e-mail.