
If hung SSH connections are common, it's likely due to CGNAT, which uses aggressively low TCP timeouts. e.g. I've found all UK mobile carriers set their TCP timeout as low as 5 minutes. The "default" is supposed to be 2 hours: you could literally sleep your computer, send zero packets, and an SSH connection would still work an hour later. Generally speaking this is still true, unless CGNAT is in the way.

If you are interested there are a few ways you can fix this:

Easiest is to use a VPN: the VPN's exit node becomes the effective NAT, and VPN providers usually have normal TCP timeouts because they are less resource constrained. Another nice benefit of this method is you can move between physical networks and your connection doesn't die... If you use Tailscale then you already have this in a more direct way.

Another is to tune the tcp_keepalive kernel parameters. Lowering the keepalive timeout to less than the CGNAT timeout causes keepalive probes to prevent CGNAT from dropping the connection even while your SSH connection is technically idle. On Linux I pop these into /etc/sysctl.d/z.conf; I have no idea for Windows or Mac:

  # Keepalive frequently to survive CGNAT
  net.ipv4.tcp_keepalive_time   = 240 
  net.ipv4.tcp_keepalive_intvl  = 60
  net.ipv4.tcp_keepalive_probes = 120
This is really a misuse of these settings: they are supposed to be for checking that TCP connections are still alive and clearing them out of the local routing table. Instead the idea is to exploit the probes by sending them more frequently, forcing idle connections to stay alive in a CGNAT environment (don't worry, the probes are tiny and still very infrequent).

_time=240 will send a probe after 4 minutes of idle connection instead of the default 2 hours, undercutting the CGNAT timeout. _intvl=60 and _probes=120 mean it will send 120 probes 60 seconds apart (2 hours' worth) before considering the connection dead. This keeps the connection alive for at least 2 hours, and also gives us the best of both worlds: under a nice NAT it keeps the old behaviour, e.g. if I temporarily lose my network the SSH connection is still valid after 2 hours, but under CGNAT it will at least not drop the connection after 5 minutes, so long as I keep my computer on and don't lose the network.
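If you can't (or don't want to) change the system-wide sysctls, the same numbers can be applied per-socket. A minimal Python sketch using the Linux-only TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT socket options (the helper name and default values are mine, mirroring the sysctls above):

```python
import socket

def enable_cgnat_keepalive(sock, idle=240, interval=60, probes=120):
    # Per-socket equivalent of the net.ipv4.tcp_keepalive_* sysctls
    # (Linux-only options): probe after `idle` seconds of silence,
    # then every `interval` seconds, up to `probes` times.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_cgnat_keepalive(s)
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE))  # 240
```

The nice part is this overrides the system defaults for just the one connection, so other sockets keep the normal 2-hour behaviour.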

There are also some SSH client keepalive settings but I'm less familiar with them.
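For completeness, the client-side knobs are ServerAliveInterval and ServerAliveCountMax in ~/.ssh/config; the values below are just an example mirroring the sysctl timing above, not a recommendation:

    Host *
        ServerAliveInterval 60
        ServerAliveCountMax 120

These operate at the SSH protocol level rather than TCP, so the keepalives travel inside the encrypted channel.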


> you could literally sleep your computer,

Depends on whether your sockets survive that, though. Especially on Wi-Fi, many implementations will reset your interface when sleeping, and sockets usually don't survive that.

Even if they do, if the remote side has heartbeats/keepalive enabled (at the TCP or SSH level), your connection might be torn down from the server side.


Yes, by "generally" I really mean all the defaults are pretty permissive, but I understand some people tune both TCP and SSH on their servers to drop connections faster because they are worried about resource exhaustion.

But if you throw up a default Linux install for your SSH box and have a not-horrible wifi router with a not-horrible internet provider then IME you can sleep your machine and keep an SSH connection alive for quite some time... I appreciate that might be too many "not-horrible" requirements for the real world today though.


Not on a Mac

    Host *
        ServerAliveInterval 25

Yes, this makes your connection less likely to survive client suspends. (ClientAliveInterval, which makes the server ping the client, will make it fail almost certainly, since the server stays active while the client is sleeping.)

Check out Mosh. It supports these kinds of cuts and will reconnect seamlessly. It uses far less bandwidth too. I successfully tried it over a 2.7 kbps connection.

Note this is only an issue if not using IPv6.

CGNAT is for access to legacy IPv4 only.


Well, for different reasons, but you have similar issues with IPv6 as well. If your client uses temporary addresses (most likely, since they're enabled by default on most OSes), OpenSSH will pick one of them over the stable address, and when they're rotated the connection breaks.

For some reason, OpenSSH devs refuse to fix this issue, so I have to patch it myself:

    --- a/sshconnect.c
    +++ b/sshconnect.c
    @@ -26,6 +26,7 @@
     #include <net/if.h>
     #include <netinet/in.h>
     #include <arpa/inet.h>
    +#include <linux/ipv6.h>
     
     #include <ctype.h>
     #include <errno.h>
    @@ -370,6 +371,11 @@ ssh_create_socket(struct addrinfo *ai)
      if (options.ip_qos_interactive != INT_MAX)
        set_sock_tos(sock, options.ip_qos_interactive);
     
    + if (ai->ai_family == AF_INET6 && options.bind_address == NULL) {
    +  int val = IPV6_PREFER_SRC_PUBLIC;
    +  setsockopt(sock, IPPROTO_IPV6, IPV6_ADDR_PREFERENCES, &val, sizeof(val));
    + }
    +
      /* Bind the socket to an alternative local IP address */
      if (options.bind_address == NULL && options.bind_interface == NULL)
        return sock;

The temporary address doesn't stay active while there's a connection on it? I think that would be the actual "fix".

I think it does, but that's not the issue: if the interface goes down, all the temporary addresses are gone for good, not just "expired".

If you're on a stable address, and the interface goes down, will it let your connection/socket continue to exist?

Because if the connection/socket gets lost either way, I don't really care if the IP changes too.


I'm not sure what happens to the socket, maybe it's closed and reopened, but with this patch I have SSH sessions lasting for days with no issues. Without it, even roaming between two access points can break the session.

Interesting! Is there a discussion anywhere around their refusal to include your fix?

See this, for example: https://groups.google.com/g/opensshunixdev/c/FVv_bK16ADM/m/R...

It boils down to using a Linux-specific API, though it's really BSD that is lacking support for a standard (RFC 5014).


It would also seem to break address privacy (usually not much of a concern if you authenticate yourself via SSH anyway, but still, it leaks your Ethernet or Wi-Fi interface's MAC address in many older setups).

Well, yes, but SSH is hardly ever anonymous, and this could simply be a CLI option.

Not anonymous, but it's pretty unexpected for different servers with potentially different identities for each to learn your MAC address (if you're using the default EUI-64 method for SLAAC).

This is a good argument for not making it the default, but it would be nice to have it as a command line switch.

This is a very common misconception. The issue is not IPv4 or CGNAT, it's stateful middleboxes... of which IPv6 has plenty.

The largest IPv6 deployments in the world are mobile carriers, which are full of stateful firewalls, DPI, and mid-path translation. The difference is that when connections drop it gets blamed on the wireless rather than the network infrastructure.

Also, fun fact: net.ipv4.tcp_keepalive_* applies to IPv6 too. The "ipv4" is just a naming artifact.
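That's easy to check: an AF_INET6 TCP socket inherits its keepalive defaults from the ipv4-named sysctl. A small Linux-only Python sketch:

```python
import socket

# The "ipv4" in the sysctl name is historical: an IPv6 TCP socket
# takes its keepalive defaults from net.ipv4.tcp_keepalive_time.
with open("/proc/sys/net/ipv4/tcp_keepalive_time") as f:
    sysctl_default = int(f.read())

s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
print(s6.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE) == sysctl_default)  # True on Linux
```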


Mobile carriers usually have stateful firewalls for IPv6 as well (otherwise you can get a lot of random noise on the air interface, draining both your battery and data plan), so it's an issue just the same.

The constrained resource there is only firewall-side memory, though, as opposed to that plus (IP, port) tuples for CG-NAT.


> otherwise you can get a lot of random noise on the air interface, draining both your battery and data plan

I highly doubt you get "random" data over IPv6. There are more IPv6 addresses than there are atoms on the planet.


Yes, but they're not randomly distributed across the entire number space.

For example, receiving traffic from a given address is a pretty good indicator that there's somebody there possibly worth port scanning.

And where there has once been somebody, there or in the same neighborhood (subnet) might be somebody else, now or in the future.


Then it isn't random noise. It is determined by your own actions.

Or my predecessor/address space neighbor, or that of somebody using my wireless hotspot once, or that of me clicking a random link once and connecting to 671 affiliated advertisers' analytics servers...

I think a default policy of "no inbound connections" does make sense for most mobile users. It should obviously be configurable.


PuTTY has had an option to send keepalive packets to keep the connection up since, like, forever.

> It’s who’s looking at your profile; it’s the profiles that you’re looking at. That was the holy grail

Facebook actually implemented this as a user facing feature.

I think it was very early days, but I used it, it was fucking creepy, and everyone hated it. I think Facebook probably removed it because it drove people away. It made you feel like a creep for checking on your friend's page.


Yup, in a word, ownership.

But that's an unpopular approach these days where many companies are obsessed with minimising the bus factor to the point that their IP is as replaceable as their employees.


Companies hate ownership because it means they need to hire competent people who are more expensive

These days it's turned less into bus factor and more into got-fed-up-of-wage-compression-and-left factor.


That only works in world regions where being a developer isn't seen culturally as yet another office job, and there is actually a job market that allows jumping easily across companies.

Even when one is lucky to find such companies, it only lasts until the next round of layoffs or management re-structuring, been there a few times.

Also, budgets. I have been vocal about being on-call, no problem for me as long as there is extra compensation (in our org there is).

No takers yet.


> I cannot think of a technology more diametric to 'plug n play' than VR, which is very unfortunate.

Ironically that's exactly what the Quest solved with SLAM, it really is plug and play, otherwise I would not have bought one... and it sucks that Meta now owns it, but it really is still the best "just works" VR.

I also don't think VR has much potential to solve real world problems for enough people, but it doesn't have to because it's pretty good entertainment as a gaming device (albeit still fairly niche).


Not maybe: I owned a 2009 MBP. Everyone I knew with a MacBook from that period had the same issue; they were absurdly bright, and you could not keep one anywhere near a bedroom without putting very thick tape over the light.

It was a poorly thought out design of aesthetics over ergonomics.


Nope. Actually I remember I had that model first, and yes, I still don't care. It was simply the least annoying light compared to other bright colored LEDs in a room. It doesn't come close to the Liquid Glass chaos.

Loved the battery level indicators on old MacBooks too; they kind of brought it back with the LED on MagSafe, except this new LED is more annoying.


> I started making deliberate grammar and spelling mistakes in professional context.

I've also noticed an increase of this in myself and others. I used to edit a lot more before sending anything, but now it seems more authentic to just hit send, so it's more off the cuff, typos, broken sentences and all.

I'm sure an LLM could easily mimic this but it's not their default.


> the number of bus stops might matter at the margins, we’re not talking about a system where marginal improvements will matter

The central argument for reducing stops is increasing bus speed, not marginal gains; it's in the second paragraph.

[edit]

Top comment is a straw man, attempt to correct course downvoted... I'm not sure how much value HN has left for useful discourse, who the fuck are you people, if you even are people.


You're being downvoted because you misunderstood the post you're replying to. They aren't referring to profit margins, but marginal utility—i.e. incremental improvements to stop spacing (purportedly) would not be enough to fix a fundamentally broken system.


I have the same fears. Last year they publicly stated they are not interested in acquisition [0]:

> Pennarun confirmed the company had been approached by potential acquirers, but told BetaKit that the company intends to grow as a private company and work towards an initial public offering (IPO).

> “Tailscale intends to remain independent and we are on a likely IPO track, although any IPO is several years out,” Pennarun said. “Meanwhile, we have an extremely efficient business model, rapid revenue acceleration, and a long runway that allows us to become profitable when needed, which means we can weather all kinds of economic storms.”

Nothing is set in stone; after all it's VC backed. I have a strong aversion to becoming dependent upon proprietary services, however I have chosen to integrate TS into my infrastructure, because the value and simplicity it provides is worth it. I considered the various copycat services and pure FOSS clones, but TS are the ones who started this space and the ones continuously innovating in it. I'm on board with their ethos and mission, and have made use of apenwarr's previous work. In other words, they are the experts, they appear to be pretty dedicated to this space, so I'm putting my trust in them... I hope I'm right!

[0] https://betakit.com/corporate-vpn-startup-tailscale-secures-...


Would be curious if a partial decompilation and short static analysis would yield any reliable info about what they might be collecting.


Just note, I doubt Tailscale was the first popular VPN manager; I remember many hobby users are ZeroTier converts, and there are also much older products like Hamachi.

Tailscale have built a great product around WireGuard (which is quite young), and they have great marketing and docs. But they are hardly the first VPN service; they might not even be the most popular one.


Yes, I ambiguously said "started this space"... and to be honest even the most generous interpretation of that is probably incorrect; maybe ZeroTier started "this space", in that it had NAT-busting mesh networking first.

As far as I understand, Tailscale brought NAT-busting mesh networking to WireGuard, plus identity-first access control, and reduced configuration complexity. I think they were the first to think about it from an end-to-end user perspective, and each feature they add definitely has this spin on it. It makes it feel effortless and transparent (in both the networking sense and the cryptography sense)... So I suppose that's what I mean by "started": TS was when it first really clicked for a larger group of people; it felt right.


Might be time to learn me some Wireguard.


How about inverting the issue: highlight posts with an opt-in label, e.g.

  Show HN [NOAI]:
Since it's too controversial to ban LLM posts, and it would be too easy for submitters to omit an [LLM] label... having an opt-in [NOAI] label allows people to highlight their posts, and LLM posts would be easy to flag, disincentivising polluting the label.

This wouldn't necessarily need to be a technical change, just an informal agreement that posts containing LLM or vibe-coded content are not allowed to lie by using the tag, or will be flagged... Then again, it could also be used to elevate their rank above other Show HN content to give us humanoids some edge if deemed necessary, or a segregated [NOAI] page.

[edit]

The label might need more thought, although "NOAI" is short and intelligible, it might be seen as a bit ironic to have to add a tag containing "AI" into your title. [HUMAN]?


I'm 90% sure this will end in endless squabbles over whether the label is correct or incorrect, rather than actual conversations about the project the person is showing. It already happens without the labels, and it feels like it'd happen even more frequently if this label were enforced.


Is the problem that the app was written with AI assistance, or that it's low-effort/bad? I don't care if you used Claude to fix a bug or something if you have a cool app, but I do care if you vibe coded something I could've vibe coded in an hour. That's boring.

Feels like effort needs to be the barrier (which unfortunately needs human review), not "AI or not". In lieu of that, 100 karma or a minimum account age to post a Show HN might be a dumb way to do it (giving you enough time to have read other people's, so you understand the vibe).


A core part of the HN ethos is avoiding siloing dynamics, which is exactly what [NOAI] would be.


This study is measuring the wrong thing. Any diet that restricts calories will cause weight loss; that's just physics, not biology. So long as the person strictly sticks to the diet, it will work.

Strategies like intermittent fasting, or diets that moderate what you eat rather than the quantity, are focused on the latter aspect: "strictly sticking to that diet". Being strict is not sustainable; willpower is limited and inconsistent, so wasting it on strategies that are hard to stick to is both futile and counterproductive. Changing what and when you eat accounts for biology instead of just physics, because those variables have a huge impact on satiety.

The study has a minimum interval of 4 weeks, which does not take much willpower. Not to mention the psychological impact of being part of a study.

