> By this logic, the Nazis were the good guys in WWII, and Israel would be the good guys if they'd just turn off all their pesky air defenses.
Can you elaborate on this? I thought that the Nazis were pretty obviously the "bad guys" due to committing genocide and mass casualties (combatant and civilian) while trying to expand their borders.
> It doesn't make any sense to try to judge morality based on casualty ratios.
Really, even the ratio of civilian casualties, or ratio of civilian casualties to combatant casualties? Those seem pretty relevant to morality in my book, but I might be misunderstanding.
I think we're mostly in agreement? I agree civilian casualty ratios can be meaningful signals about morality, provided that we account for context (e.g. whether civilians are trapped in a warzone or able to evacuate) and are careful to draw apples-to-apples comparisons.
But the parent wasn't really comparing these ratios; it was closer to a "total deaths on either side" sort of comparison. Usually the implied message is that in a conflict between two sides, the side that killed more must be less moral. That dubious logic would suggest e.g.
- The Nazis were morally superior to Western Allies, since the Western Allies killed more Germans than the reverse.
- The Coalition was extremely evil in the Gulf War, since Iraq suffered several orders of magnitude more casualties.
- Israel is bad partly because it goes to extreme lengths to protect its people (Iron Dome, bomb shelters everywhere, etc.). Letting more of its people get killed would "even out the scales" and suddenly make Israel's military operations more moral.
>>Usually the implied message is that in a conflict between two sides, the side that killed more must be less moral.
And you decided that this is an argument I'm making and decided to argue against that, instead of what I'm actually saying - which sure, would lead to the nonsensical logical conclusions that you wrote.
What makes Israel a state worthy of condemnation is the fact that they target civilians on purpose. That they shoot at medics, deny food supplies, shoot rockets at refugee camps, hospitals, schools, they shoot at little kids playing around, they torture their prisoners, they use AI to guess which person needs to be eliminated and they blow them up with their families to maximise casualties - and all of the above happens without any oversight or consequence for any people involved. The 20k children dead is a consequence of all of these decisions, the number itself isn't what makes Israel bad - it's how they got to it, through a culmination of decades of decisions on how they see Palestinians - as subhuman scum needs to die. There is no effort to protect civilian life, and IDF saying otherwise is just lying.
But I feel like you're keen to say that Israel is "defending" itself and Gaza is a narrow urban zone, so of course it can't be done any other way.
Let me maybe ask you this, just to satisfy my own curiosity more than anything - if Israel decided to kill everyone in Gaza, based on the assumption that since Hamas doesn't wear uniforms anyone can be a militant so this is justified, would you just go "yeah that's fair"? Or would you just make some argument about how no army in the world would do better.
> And you decided that this is an argument I'm making and decided to argue against that
Then what was the point of your numeric comparison? If you agree it's a very poor signal about morality, why bring it up?
> What makes Israel a state worthy of condemnation [...]
It seems like you're just listing every random accusation you've heard that paints Israel in a bad light. Should we try this game with another country, like say Palestine?
> the assumption that since Hamas doesn't wear uniforms anyone can be a militant so this is justified
>>It seems like you're just listing every random accusation you've heard that paints Israel in a bad light
I really don't understand your train of thought. Are you saying these things didn't happen? Or they did happen but Palestine also is doing despicable things so they don't matter? Or they do matter but they aren't worth being upset about? Or it's worth being upset about them, but they shouldn't be discussed?
>>No I certainly don't think that.
Well what did you bring it up as the first point then? I said - hey I'm bothered by the fact that Israel killed 20k children in this conflict, and then you said hey I wish someone was talking more about the fact that hamas doesn't wear uniforms when fighting. Like, what is the conclusion here? That Israel is killing civilians because anyone can be a militant(since hamas militants don't wear uniforms), or.......what is the alternative?
>> If you agree it's a very poor signal about morality, why bring it up?
I don't agree with that - I just said it's a consequence of every other choice that Israel made up to this point.
I just don't see the point of engaging with a big laundry list of random accusations against Israel. Some are likely true. Urban wars aren't rainbows and butterflies, and no military is perfect. Ukraine has had a bunch of incidents with soldiers abusing and even executing POWs, should we sanction them too? US recently obliterated a girls' school, should we sanction ourselves for our mistake?
> what is the conclusion here?
Maybe something like "Israel's neighbors should probably stop attacking it", "Hamas should put on uniforms", or "countries that supposedly care about Gazans' well-being should accept war refugees"?
If your takeaway is that it's all Israel's fault, but you can't name any other military that does a better job of dealing with terrorists who embed themselves among civilians, that seems like the wrong takeaway.
The trust component is so critical here. When I get halfway through reading a design doc and hit a part that's obviously slop, it really hurts my confidence in the project and in any faith in the developer having done their due diligence.
Certain communications, especially technical writing, are "expensive" both in terms of the effort of the author(s), and in terms of the person-hours of people reading them to gain understanding. Like mission-critical code, they should be written and reviewed with care, and at the very least heavily edited from an automated LLM output to be unrecognizable as such.
I personally don't use LLMs at all in my designs and I remain skeptical of the value proposition for those who do.
> This was not opportunistic. It was precision. The malicious dependency was staged 18 hours in advance.
Another obvious ChatGPT-ism. The fact that people are using AI to write these security posts doesn't surprise me, but the fact they use it to write a verbose article with spicy little snippets that LLMs seem to prefer does make it really hard to appreciate anything other than the simple facts in the article.
I think this is a political and economic problem rather than a technological one.
I cannot think of a more important skill than surgery to continue training humans to do and to be wary of AI robotics replacing. Sure, some surgeries could likely be automated, but the entire point of specialist surgeons is to make choices and act in a timely manner in ambiguous situations with extremely high stakes.
What happens when the robot messes up? What happens when the internet is down, or the hospital is operating under abnormal circumstances? How do you teach, train, and collaborate with human medical workers and caregivers in a world where surgeons have been replaced by robots?
Most of the excess costs for healthcare and surgery aren't the humans doing the work. I think there's a lot of other areas we can optimize first, chief among those in healthcare being the cost structure around private businesses and insurers bloating the bill with administrative costs. There's a reason every other developed nation has a single-payer healthcare system and better outcomes, and I don't think an AI breakthrough is the only plausible solution to improving costs in the US. In fact, under the current system, an AI breakthrough in medicine would likely hurt the workforce more than it would improve costs.
This is an angle for people who default to AI-edited written speech that I've tried to be more empathetic to. I think it depends on your audience, but in professional writing that isn't published publicly (i.e. communication with your colleagues, design docs, etc.), or even the "rough draft" form of something that will be published, I think starting with your own words comes across as way more authentic.
I've seen enough GPT-generated slop that I find its style of writing very off-putting, and find it hurts the perceived competence or effort of the author when applied in the wrong context. I'm not sure if direct translation tools serve a better purpose here, but along with the other commenters, I personally find imperfect speech that was actually written "by hand" by the author easier and more straightforward to communicate with despite the imperfections. Also, non-ESL speakers make plenty of mistakes with grammar, spelling, etc. that humans are used to associating with "style" as authentic speech.
It can also become a crutch for language learners of any age / regardless of their primary language, that inhibits learning or finding one's own "style" of speech
I've been meaning to get off Gmail, and Proton Mail does seem like my favorite of the alternatives from a quick glance, but I'm also concerned about privacy focused services like Proton getting blocked or compromised in the US... This was a pretty good read
Also,
> I do my best to boycott bad things. And I fail pretty often. I still use Amazon on occasion and I can’t get off Spotify. I use Uber and DoorDash a lot more than I’d like. And I have too many Apple products/services.
OK, I can intuit why most of those are bad, but can somebody give me a good-faith interpretation on what's bad about Apple?
I'd assume it's the working conditions and material extraction processes in China, parts of Africa, and elsewhere, but isn't that true of every piece of consumer technology? The only better companies for consumer hardware that come to mind are Framework and Google for recycling parts and raw materials, but the whole point of the article is about de-googling and Framework's products are relatively niche and at a much lower price and performance / market category.
Apple is very anti-consumer, locking devices down, using planned obsolescence, fighting hard against movements for more open and fairer market practices and standards (e.g. switching to the standard USB-C port, allowing third-party app stores, exploiting developers releasing software on their platforms).
Or as I tell my Apple-philic tech friends: your devices are a single year of flat revenue away from user-hostile decisions.
At the end of the day, Apple exists to make money and keep shareholders happy.
If the business stops growing organically, do you really think they're going to benevolently use the massive control they have over their own platform?
"Apple" isn't privacy-focused. Their marketing strategy is currently privacy-focused and their economics currently permit them to be.
I think many used to feel that Google was the standout ethical player in big tech, much like we currently view Anthropic in the AI space. I also hope Anthropic does a better job, but seeing how quickly Google folded on their ethics after having strong commitments to using AI for weapons and surveillance [1], I do not have a lot of hope, particularly with the current geopolitical situation the US is in. Corporations tend to support authoritarian regimes during weak economies, because authoritarianism can be really great for profits in the short term [2].
Edit: the true "test" will really be can Anthropic maintain their AI lead _while_ holding to ethical restrictions on its usage. If Google and OpenAI can surpass them or stay closely behind without the same ethical restrictions, the outcome for humanity will still be very bad. Employees at these places can also vote with their feet and it does seem like a lot of folks want to work at Anthropic over the alternatives.
> I randomly tried Android again for a few months last spring. Using a functioning keyboard was revelatory. But I came crawling back to iOS because I'm weak and the orange iPhone was pretty and the Pixel 10 was boring and I caved to the blue bubble pressure.
I know this is somewhat a joke site, but I think admitting this really proves Apple's dominance and doesn't really help in making your case. So long as the walled garden / "platform" approach still works, enshittification will continue
I feel like this anecdote represents the differing incentives / philosophies of each group rather well.
I've noticed ChatGPT is rather high in its praise regardless of how valuable the input is, Gemini is less placating but still largely influenced by the perspective of the prompter, and Claude feels the most "honest" but humans are rather easy poor at judging this sort of thing.
Does anyone know if "sycophancy" has documented benchmarks the models are compared against? Maybe it's subjective and hard to measure, but given the issues with GPT 4o, this seems like a good thing to measure model to model to compare individual companies' changes as well as compare across companies.
Naive question, but isn't this relatively safe information to expose for this level of attack? I guess the idea is to find systems vulnerable to 0-day exploits and similar based on this info? Still, that seems like a lot of effort just to get this data.
it's not "just to get that data", it's to confirm level of access, check for potential other exploiters or security software, identify the machine you have access to, identify what the machine has network connectivity to, etc. The attacker then maintains the c2 channel and can then perform their actual objective with the help of the data they have obtained.
Can you elaborate on this? I thought that the Nazis were pretty obviously the "bad guys" due to committing genocide and mass casualties (combatant and civilian) while trying to expand their borders.
> It doesn't make any sense to try to judge morality based on casualty ratios.
Really, even the ratio of civilian casualties, or ratio of civilian casualties to combatant casualties? Those seem pretty relevant to morality in my book, but I might be misunderstanding.
reply