there is a middle ground: taxes / fines, whatever you call them. It would be free if you fill out the paperwork, and it starts out cheap while gradually increasing yearly. It can differ depending on density or how heavy the traffic in an area is. However, you should improve public transport at the same time too.
there are parties that don't want that cooldown: libraries, or software writers. The XZ Utils backdoor was found by Microsoft engineer and PostgreSQL developer Andres Freund due to high CPU usage (or latency? CMIIW) during SSH tests; those are the people who will keep the same workflow.
Well, both of them are easily retrieved via web search; it's not a problem if you forget one or two. I'd probably need a refresher if I wanted to implement bubble sort again.
In my 7th year of professionally programming Node, not even once have I memorized the Express or HTML boilerplate, nor the router definitions or middleware. Yet I can code normally provided there's internet access. It's simply not worth remembering; logic and architecture are worth more IMO
The problem is that lazy people use the supposed Einstein quote as a convenient excuse not to learn and internalize knowledge about their own profession. You can bet that Einstein memorized the mathematics relevant to his work thoroughly and completely.
Oh, didn't you know? Legend has it he was also bad at maths in elementary school, so that kid of your friend's who is failing maths this year may be a genius too :)
> I don't think there is that much value in memorizing rarely used, easily looked up information.
easily looked up - we don't have that any more, since Google decided to enshittify search. What you now have is not looked-up information, as that would look the same each time you "looked it up" - instead of a quick 3-4 word search pattern, you now write an elaborate, verbose "query" and get a chewed-up re-interpretation by the shitty LLMs. And then, since sometimes it's not quite what you asked for, you have to ask again or redirect it, and just like that you've wasted 5 mins of your time arguing with a goddamn neural network!
This is something I've been curious about for a long time now. I would happily pay for their top tier, but testing their free tier did not seem to produce much different results compared to using Google. Or was I not using it right? Just typing in the same stuff and literally getting the same list of results. If you want to share more, I'd be happy to know.
"testing their free tier did not seem to produce much different results compared to using google"
So Kagi gave you ads, sponsored links, AI generated answers etc. as top results? =)
It shouldn't - or you've registered for a scam site. That's the difference. With Kagi it feels like I'm using Google from 10 years ago. On Kagi I can go "searchword -notthisone +thishastobethere inurl:forum" and it actually works.
You can also manually downrank sites in the settings so that they never appear in your results (as I've done for pinterest etc that are 99% crap, but have excellent SEO). Or boost sites with reliably good results.
> So Kagi gave you ads, sponsored links, AI generated answers etc. as top results? =)
No, I did not register for kaggi.search if that's what you are implying.
This was about ~12 months ago, so no, the AI-generated summaries were not a thing you would expect to see at that time, outside of occasional A/B experiments. Now maybe my expectations are different from yours. I'd expect not to have to do much tweaking or downranking at all. My pre-2019 Google experience was roughly "I type in 3-4 words" and one of the first three links is exactly what I searched for. Kagi did not deliver that, much as I would love it to. But with Kagi relying mainly on Google's search index, it seems to me there is only so much they can do anyway, apart from users ranking and downranking stuff... which I am not keen to do...
Actually, Kagi uses Bing and Yandex as its backend, not Google.
It also tends to surface niche sites more than Google for me.
It also takes maybe 30 seconds to downrank a site, and you'll never see it again, unlike Google, which will keep giving you shitty "review" sites, Pinterest etc. as results.
I know about Yandex... but I am fairly sure I read that the bulk of its search is based on Google's index. I know downranking is not much work, but I just dislike the idea of having to work for it.
Agreed, it interests me how much some people emphasise knowing facts - like dates in history or dictionary definitions of words.
Facts alone are like pebbles on a beach, far better (IMO) to have a few stones mortared with understanding to make a building of knowledge. A fanciful metaphor but you know ...
Knowing facts matters quite a lot imo, even if it doesn't 'seem' like it.
To use another metaphor, you can't REALLY see the forest for the trees if you don't consider the trees themselves.
One of the reasons I like history so much is because, with enough facts accumulated, you can see how one piece of information flows into another - e.g. dates matter, because knowing the precise order in which important events occur helps you determine how those events may or may not have affected each other in the course of their unfolding.
Sure memorizing dates is boring on its own, but putting them in contexts is exciting - you still need to comb the beaches to find the right stones!
I accept the ordering of dates is important, yes. History can be in the details, but as you say you need to comb the beach for the right stones.
I guess an interesting counterpoint to what I said is something like https://en.wikipedia.org/wiki/Phantom_time_conspiracy_theory (and similar) where a grandiose framework tries to fit inconvenient facts into a shape that is entirely invented.
This is an entirely false dichotomy though, is it not? One can both know facts and understand logic behind them, it's not like you're creating an RPG character and need to make a choice with limited character points.
(Can't say time is the limiting factor either -- we're both in HN comments, valuing our own time at zero.)
I'm not an expert, but what I believe is that the brain has limited capacity, and old memories keep getting deleted when unused for a long time. It's impossible to remember everything unless you have a photographic memory. That makes remembering facts like syntax challenging and most of the time useless, and keeping the logic is better in the long run.
Take the HTML boilerplate, for example, where you don't remember the syntax. What you remember is the components and why they are needed, then you add them one by one as your memory comes back: doctype, html tag, head, body, etc. It works because HTML is simple and common.
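A tiny sketch of that reconstruction in Node - the tag list and comments here are just the usual minimum skeleton, not a definitive template:

```javascript
// Rebuilding the boilerplate from the components and the reason each
// exists, rather than from rote memory. Illustrative only.
const parts = [
  "<!DOCTYPE html>",            // tells the browser this is HTML5
  '<html lang="en">',           // root element
  "<head>",
  '  <meta charset="utf-8">',   // character encoding
  "  <title>Page</title>",      // head wants a title
  "</head>",
  "<body>",                     // visible content goes here
  "</body>",
  "</html>",
];
const boilerplate = parts.join("\n");
console.log(boilerplate);
```

Each line carries its own "why", which is exactly the part that sticks.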
Then Express is harder, because you need to recall both JavaScript syntax and Express syntax, and most of the time you don't touch Express outside req and res. You recall that Express needs a body parser, registered routers, and finally listen, whether you use an http server first or Express directly. Then you compose them one by one, looking at the docs or the web for the forgotten pieces; you don't lose the understanding / logic of Express, you just forget the syntax.
As for streams, which I keep forgetting, I just need to remember that a stream needs a source and event handlers such as on data, error, and finish / end. Pipe if needed. However, I never remember whether to use writable, readable, streamable, etc., because I seldom work with them directly and can look up references anytime.
And it ignores the fact that, if you refuse to remember any facts because they can be looked up, you'll be unable to form any new ideas because you'll know nothing, and you won't know what is out there to be looked up.
Yes I was not clear, it seems. Facts are necessary but not sufficient.
There is limited time, of course - no one can learn everything, but you can pay attention to the important facts, and the connections between them.
In some ideal world you would learn every fact there is, and the connections would fall out on their own, but in the real world we have to construct theories and frameworks to organise facts.
I remember the "HTML boilerplate", because I don't see it as boilerplate. I don't memorize it, I reconstruct it from the base concepts.
> Yet I can code normally provided there's internet accessible.
I'm the opposite. Yes, I need my computer to test things fully, but I'm able to code on paper. I want my computer to be a complete, self-sufficient node, so I mostly install documentation locally, and my computer is mostly not connected to the internet unless I actively enable it to do a specific thing.
yep, having some slack is the only way for someone / something to be able to respond to uncertainty. technically, having firefighters on standby and policemen on patrol is a form of slack, and we (should) have no problem with that.
It required a lot of manual work, and for large apps like Minecraft it took teams of people to figure out what the symbol names should be, slowly contributing a little bit every day.
this is actually a good example of how a more detailed issue has a higher chance of being addressed. I don't know what information your previous report was lacking, but the video certainly gives enough information that the maintainer can pinpoint the cause and act on it. Being able to pinpoint the cause from the report is a godsend for maintainers; it drastically reduces the time spent investigating, so they can act immediately.
Some of the information in this case may be:
* how "slow" exactly the process is relative to normal behavior. If the previous report just said "slow", it's easy to dismiss
* the dispenser's behavior, such as whether the water flow is consistently low volume or clogged intermittently, or whether the dispenser is struggling to fetch from the water source, etc.
I'd say it was both. I gave a pretty detailed explanation before, far more detailed than my post here, including a timeline of when it filled in one shot, then two shots, and then three or four (can't remember). I doubt they actually checked before the video. But I was very motivated to fix the issue so I gave them proof lol
More importantly it shows how the reporter actually used the system to trigger the undesired behavior. Just because something is obvious to you doesn't mean it will be obvious to whoever is looking at the bug report.
well, having no E2E encryption is safer than having half-baked E2E encryption that has a backdoor and can be decrypted by the provider.
and as for TikTok's stance, I think they just don't want to get involved with the Chinese government over encryption (and give users a false sense of privacy)
It depends on what you're handling. Frontend (not CSS), Swagger, and mundane CRUD are where it shines. Something more complex that needs slightly harder calculation usually makes the agents struggle.
It's especially good for navigating code you're unfamiliar with. If you know the code well, you'll find it's usually faster to debug and code by yourself.
Well, someone who says logging is easy has never known the difficulty of deciding "what" to log. And audit logging is a different beast altogether from normal logging
Audit logging is different because it's actually more straightforward than "normal logging": you just make a log entry for each state change, basically. Especially if you're storing the log entries as "objects" instead of plain text.
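A rough sketch of that idea - one structured entry per state change. The field names here are assumptions for illustration, not any particular system's schema:

```javascript
// Every mutation goes through one function that appends a structured
// entry, instead of scattering free-text log lines around the code.
const auditLog = [];

function recordChange(entity, field, before, after, actor) {
  auditLog.push({
    timestamp: new Date().toISOString(), // when it happened
    actor,    // who made the change
    entity,   // what was changed
    field,    // which attribute
    before,   // old value
    after,    // new value
  });
}

// Usage: a hypothetical order changing status, then carrier.
recordChange("order#123", "status", "pending", "shipped", "alice");
recordChange("order#123", "carrier", null, "DHL", "alice");

console.log(auditLog.length); // 2 entries, one per state change
```

Because the entries are objects, "what to log" is answered once, in the schema, rather than per call site.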
Besides, do you think that a LLM would be better at deciding what to log than a human that has even just a little experience with the actual system in question?