Hacker News | kbolino's comments

Both Borgo and now Lisette seem to act as though (T, error) returns are equivalent to a Result<T, error> sum type, but this is not semantically valid in all cases. The io.Reader interface's Read method, for example, specifies not only that (n!=0, io.EOF) is a valid return pattern, but moreover that it is not even an error condition, just a terminal condition. If you treat the two return values as mutually exclusive, you either can't see that you're supposed to stop reading, or you can't see that some number of valid bytes were placed into the buffer. This is probably well known enough to be handled specifically, but other libraries have been known to make creative use of the non-exclusivity in multiple return values too.

You are right, and thank you for pointing this out. I've opened an issue:

https://github.com/ivov/lisette/issues/12

I have a few approaches in mind and will be addressing this soon.


I gave Lisette a run today. I really like it; it's a clear improvement on Go.

Here are a few things that I noticed.

- Third-party Go code support (like go-chi) is an absolute must-have. This is THE feature that could possibly sky-rocket Lisette adoption. So something like stubs, maybe along the lines of what ReScript has for its JS interop (https://rescript-lang.org/docs/manual/external). The CLI tool could probably infer and generate these stubs semi-easily, as the Go type system is fairly simple.

- The HM claim did confuse me. It does not infer the type when matching on an enum; I have to annotate the enum type manually to get the compiler to agree on what is being matched. Note, this is a HARD problem (OCaml probably does this best), and maybe outside the scope of Lisette, but maybe tweak the docs if that is the case (e.g., infers some things, but not all things).

- Can this be adopted gradually? Meaning part of a codebase is Go code and part is generated from Lisette, something like Haxe perhaps. This ties into the first issue (third-party interop).

But so far this is the BEST compile-to-Go language, and you are onto something. This might get big if the main issues are resolved.


Thanks for trying it out!

Variant qualification is a name resolution requirement: Lis follows Rust's scoping model, where variants are namespaced under the enum. The implementation correctly infers the type, as shown e.g. in the hint `help: Use Shape.Circle to match this variant`. My understanding is that HM has nothing to say about this; it operates after names are resolved.

Re: Go third-party packages + incremental adoption, I'll do my best! Thanks for the encouragement.


To be fair, I feel like the language is widely criticized for this particular choice and it's not a pattern you tend to see with newer APIs.

It's a really valid FFI concern though! And I feel like superset languages like this live or die on their ability to integrate smoothly side-by-side with the core language (F#, Scala, Kotlin, TypeScript, ReScript).


To be honest, you could easily mark this as an additional (ADT) type if that suits you better. It's a halting situation no matter how you twist it.

I think you're assuming that LCDs all have framebuffers, but this is not the case. A basic/cheap LCD does not store the state of its pixels anywhere. It electrically refreshes them as the signal comes in, much like a CRT. The pixels are blocking light instead of emitting it, but they will still fade out if left unrefreshed for long. So, the simple answer is, you can't get direct access to something when it doesn't even exist in the first place.

I touched on the issues with FTP itself in another comment, but who can forget the issues with HTTP+FTP, like: modes (644 or 755? wait, what is a umask?), .htaccess, MIME mappings, "why isn't index.html working?", etc. Every place had a different httpd config and a different person you had to email to (hopefully) get it fixed.

There are multiple reasons why FTP by itself became obsolete. Some of them I can think of off the top of my head:

1) Passive mode. What is it and why do I need it? Well, you see, back in the old days, .... It took way too long for this critical "option" to become well supported and used by default.

2) Text mode. No, I don't want you to corrupt some of my files based on half-baked heuristics about what is and isn't a text file, and it doesn't make any sense to rewrite line endings anymore anyway.

3) Transport security. FTPS should have become the standard decades ago, but it still isn't to this day. If you want to actually transfer files using an FTP-like interface today, you use SFTP, which is a totally different protocol built on SSH.


Why would you say FTP is obsolete? For what it's worth, I still use it (for bulk file transfer).

Chrome and Firefox dropped support for it five years or so ago; it has had a lot of security issues over the years, was annoying over NAT, and there are better options for secure bulk transfers (SFTP, rsync, etc.).

I see; I assumed by FTP you also meant SFTP.

Depending on your hardware (SBC), FTP can also be several times faster than SFTP for transferring files over a LAN. Though I'll admit to having used other protocols like torrents for large files that had bad transfers or other issues (low-quality connection issues causing dropped connections, etc).

TFTP is also a good choice for transferring files over trusted networks to/from underpowered devices.

There isn't much advantage that can be taken from OS users and permissions anyway, at least as far as Git is concerned. When using a shared-filesystem repository over SSH (or NFS, etc.), the actually usable access levels are: full, including the abilities to rewrite history, forge commits from other users, and corrupt/erase the repo; read-only; and none.

Git was built to be decentralized, with everyone having their own copy. In an organization, someone trusted will hold the keys to the canonical version. If you need to discuss and review patches, you use a communication medium (email, forums, IRC, a shared folder, ...).

Git was built to be decentralized but it ended up basically displacing all other version control systems, including centralized ones. There are still some holdouts on SVN and even CVS, and there are niche professional fields where other systems are preferred due to tighter integration with the rest of their tools and/or better binary file support, but, for most people, Git is now synonymous with version control.

The Steam hardware survey currently has FMA support at 97%, which is the same level as F16C, BMI1/2, and AVX2. Personally, I would consider all of these extensions to be baseline now; the amount of hardware not supporting them is too small to be worth worrying about anymore.


So much software was stuck on Java 8 and for so long that some of the better GC algorithms got backported to it.


They do have uniform policies; those policies come from the aforementioned CA/Browser Forum, which has been issuing its Baseline Requirements for over a decade.


The edge connectors also look quite off, like two-thirds of the traces are missing, but that does take a discerning eye to notice.


Python chose, quite some time ago, not to follow C's lead on division: PEP 238 – Changing the Division Operator (2001) [1]

The rationale is basically that newcomers to Python should see the results that they would expect from grade school mathematics, not the results that an experienced programmer would expect from knowing C. While the PEP above doesn't touch on division by zero, it does point toward the objective being a cohesive, layman-friendly numeric system.

[1]: https://peps.python.org/pep-0238/
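The split PEP 238 introduced is visible directly in modern Python: `/` always yields the grade-school quotient, while truncating behavior must be requested explicitly with `//` (which, unlike C's integer division, floors toward negative infinity):

```python
# PEP 238 semantics in Python 3:
assert 1 / 2 == 0.5      # true division: the "layman-friendly" result
assert 7 / 2 == 3.5      # even for two ints, "/" returns a float
assert 7 // 2 == 3       # floor division must be asked for explicitly
assert -7 // 2 == -4     # floors toward -inf, unlike C's truncation to -3
```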

