zyxzevn's comments

While skeptical, he did not apply much skepticism to mainstream theories.

I think it needs another item in the list. For any theory/hypothesis: how well does it stand against the null hypothesis? For example: how much physical evidence is there really for string theory?

And I would upgrade this one: if there's a chain of physical evidence (was: argument), every link in the chain must work (including the premise), not just most of them.

And breaking these items does not mean that something is false. It means that the arguments and evidence are incomplete. Don't jump to conclusions when you think that the arguments or evidence are invalid (that is how some people come to think that the moon landing was a hoax).


> While skeptical, he did not apply much skepticism to mainstream theories.

That's tautological. The definition of a "mainstream theory" is one that is widely believed. And while, sure, sometimes scientific paradigms are wrong (cf. Kuhn), that's rare. Demanding someone be "skeptical" of theories that end up wrong is isomorphic to demanding that they be a preternatural genius in all things, able to see through mistakes that all the world's experts cannot. That doesn't work.

(It's 100% not enough just to apply a null hypothesis argument, btw!)

Really that's all of a piece with his argument. It's not a recipe for detecting truth (he didn't have one, and neither do you[1]). It's a recipe for detecting when arguments are unsupported by scientific consensus. That's not the same thing, but it's closer than other stuff like "trust".

(And it's 100% better than applying a null-hypothesis argument, to be clear.)

[1] Well, we do, but it's called "the scientific method" and it's really, really hard. Not something to deploy in an internet argument.


> And I would upgrade this one: if there's a chain of physical evidence (was: argument), every link in the chain must work (including the premise), not just most of them.

We still use Newtonian physics plenty, despite bits of it not working due to relativity.


>We still use Newtonian physics plenty, despite bits of it not working due to relativity.

Absolutely. And since no one else has trotted this bit out yet, I guess it will be me:

All of our science is based on imperfect models of how the universe works. Every single one is wrong.

However, the models we use today are less wrong than those we used in the past. We know this because we (as you pointed out about Newtonian physics) can more accurately describe the universe than we were able to do previously.

That doesn't mean we've found "the truth." Nor does it mean that we have all the answers.

This is an important concept for baloney detection as those who are peddling baloney will (often, but not always) purport to know "the truth."

Anyone who makes such claims is either knowingly attempting to mislead or lying to themselves and others.

Which, IMHO, is a pretty big red flag in baloney detection.


> For any theory/hypothesis: how well does it stand against the null hypothesis? For example: how much physical evidence is there really for string theory?

That's an unfortunate choice of example: the problem with string theory is that there is no null hypothesis. We know that our other theories are not self-consistent when unified, but we don't have a theory that is self-consistent that could serve as the null hypothesis.


> And I would upgrade this one: if there's a chain of physical evidence (was: argument), every link in the chain must work (including the premise), not just most of them.

From The Demon-Haunted World:

"In the middle 1970s an astronomer I admire put together a modest manifesto called “Objections to Astrology” and asked me to endorse it. I struggled with his wording, and in the end found myself unable to sign—not because I thought astrology has any validity whatever, but because I felt (and still feel) that the tone of the statement was authoritarian. It criticized astrology for having origins shrouded in superstition. But this is true as well for religion, chemistry, medicine, and astronomy, to mention only four. The issue is not what faltering and rudimentary knowledge astrology came from, but what is its present validity.

...

The statement stressed that we can think of no mechanism by which astrology could work. This is certainly a relevant point but by itself it’s unconvincing. No mechanism was known for continental drift (now subsumed in plate tectonics) when it was proposed by Alfred Wegener in the first quarter of the twentieth century to explain a range of puzzling data in geology and paleontology. (Ore-bearing veins of rocks and fossils seemed to run continuously from Eastern South America to West Africa; were the two continents once touching and the Atlantic Ocean new to our planet?) The notion was roundly dismissed by all the great geophysicists, who were certain that continents were fixed, not floating on anything, and therefore unable to “drift.” Instead, the key twentieth-century idea in geophysics turns out to be plate tectonics; we now understand that continental plates do indeed float and “drift” (or better, are carried by a kind of conveyor belt driven by the great heat engine of the Earth’s interior), and all those great geophysicists were simply wrong. Objections to pseudoscience on the grounds of unavailable mechanism can be mistaken—although if the contentions violate well-established laws of physics, such objections of course carry great weight."


"The oil must flow"


what about a smart lamp?


The older construction is also very easy to distinguish from the Inca construction, and the Inca themselves know this history in their community. Brien Foerster has a lot of material about Inca culture: https://www.youtube.com/@brienfoerster/search?query=inca

The older construction is made of very big stones of hard granite that fit perfectly together. Assuming they had some kind of concrete, it is easy to see how they were able to make them fit so perfectly. If you have a source of materials, concrete is not difficult to make. See https://www.geopolymer.org/

People were not stupid, and technologies were invented and forgotten. And just like Roman technologies were lost in the Middle Ages, this building technology was lost to the Incas.

The Incas built their houses and temples on top of the existing ones. They used smaller stones that did not fit well together. Still a great culture, but with different technologies.

South America has a lot of cultures that disappeared. They had no written history and a lot of stuff was destroyed by later cultures (including the Spanish). So it is impossible for historians to get it right.

For example, there were also people with elongated skulls and red hair in Peru. It could be a result of inbreeding, as they also had some other physiological differences. Maybe they were exterminated by another tribe. https://www.youtube.com/watch?v=5dfpLN3FbQs

History is often full of conflicts, but it is presented as if it is all known. There are often conflicts with engineers who point out the different technologies used for buildings and such. These technologies do not fit in the simplified timeline of mainstream history.

This difference in technology is obvious in the extremely accurate Egyptian granite vases https://www.youtube.com/watch?v=7BlmFKSGBzI and granite boxes.


I used "open recursion" in many large (ObjectPascal / C++) projects. With simple interfaces, a large project becomes a collection of smaller components. I noticed many programmers do not understand it. Pure OOP languages (like Smalltalk, Ruby or Scala) are the best languages for understanding how it could work. They usually have closures where other languages would have "patterns".
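
Here is a minimal sketch of the mechanism in Python (a hypothetical example, not from any of those projects): the base class calls its own steps through "self", so a subclass can replace one step without touching the method that drives it.

    class Report:
        def render(self):
            # Open recursion: these calls dispatch through self, so a
            # subclass can override any step without editing render() itself.
            return self.header() + self.body()

        def header(self):
            return "== report ==\n"

        def body(self):
            return "(empty)\n"

    class SalesReport(Report):
        def body(self):  # only this step is overridden
            return "sales: 42\n"

    print(SalesReport().render())  # base render(), subclass body()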

The problem is that the components are often connected to different interfaces/graphs. Components can never be fully separated due to debug, visualization and storage requirements.

In non-OOP systems the interfaces are closed or absent, so you get huge debug, visualization and storage functions that do everything, in addition to the other functionality. And these functions need to be updated for each different type of data; the complexity just moves to a different place. But most importantly, any new type requires changes to many functions. This affects the whole team and well-tested code. If your product is used by different companies with different requirements (different data types), these functions become overly complex.
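
A toy illustration of that failure mode (all names hypothetical): every closed function below must grow a branch for each new data type, and each addition touches code that was already tested.

    # Closed functions over a fixed set of types: adding a "video" type
    # means editing every one of these functions (and re-testing them all).
    def debug_dump(item):
        if item["kind"] == "text":
            return f"text({item['chars']} chars)"
        elif item["kind"] == "image":
            return f"image({item['w']}x{item['h']})"
        # elif item["kind"] == "video": ...   <-- new type, new branch

    def storage_size(item):
        if item["kind"] == "text":
            return item["chars"]
        elif item["kind"] == "image":
            return item["w"] * item["h"] * 3
        # elif item["kind"] == "video": ...   <-- and again here

    print(debug_dump({"kind": "image", "w": 4, "h": 3}))  # image(4x3)

With an open interface, a new type would instead carry its own debug and storage methods, and none of the existing functions would need to change.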


3Blue1Brown has a very good explanation of how light works as a wave, and the barber pole effect shows how matter (sugar) rotates light: https://www.youtube.com/watch?v=QCX62YJCmGk
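
The rotation itself is simple to model: in an optically active medium, the plane of polarization rotates in proportion to the distance traveled, which is what produces the rotating barber-pole stripes. A quick numerical sketch (the rotation rate is a made-up number):

    import numpy as np

    rotation_per_cm = np.deg2rad(20)   # made-up rotation rate for the sugar solution
    E0 = np.array([1.0, 0.0])          # horizontally polarized input (Jones-style vector)

    for z_cm in [0, 1, 2, 3]:
        a = rotation_per_cm * z_cm
        R = np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]])  # rotate the polarization plane
        E = R @ E0
        angle = np.degrees(np.arctan2(E[1], E[0]))
        print(f"z = {z_cm} cm: polarization angle = {angle:.0f} deg")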

There is also evidence that "photons" are just thresholds in the material that is used to detect light. The atoms vibrate with the EM wave, and at a certain threshold they switch to a higher vibration state that can release an electron. If the starting state is random, the release of an electron will often coincide with the light that is transmitted from just one atom.

This threshold means that one "photon" can cause zero or multiple detections. This was tested by Eric Reiter in many experiments, and he saw that this variation indeed happens, especially when the experiment is tuned to reveal it, for example by using high-frequency light. It also happens in experiments done by others, but they disregarded the zero or multiple detections as noise. I think the double-detection effect was discovered when he worked in the laboratory with ultraviolet light.
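
Here is a toy Monte Carlo of that threshold idea as described (this is not standard quantum optics, and all the numbers are made up): each detector atom starts at a random pre-loaded energy, absorbs an uneven share of one "photon's worth" of wave energy, and clicks when it crosses the threshold, so a single pulse can yield zero, one or several clicks.

    import random
    from collections import Counter

    THRESHOLD = 1.0   # energy needed to eject an electron (arbitrary units)
    N_ATOMS = 20      # detector atoms sharing the incoming wave energy
    PULSE = 1.0       # one "photon's worth" of wave energy per pulse

    def clicks_per_pulse():
        shares = [random.random() for _ in range(N_ATOMS)]
        total = sum(shares)
        clicks = 0
        for s in shares:
            start = random.uniform(0.0, 0.95)        # random starting state
            if start + PULSE * s / total >= THRESHOLD:
                clicks += 1                          # this atom releases an electron
        return clicks

    # histogram of clicks per pulse: a spread over 0, 1, 2, ... detections
    print(Counter(clicks_per_pulse() for _ in range(10000)))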

Here is a paper about Eric Reiter's work: https://progress-in-physics.com/2014/PP-37-06.PDF And here is his book: https://drive.google.com/file/d/1BlY5IeTNdu1X6pRA5dnJvRq3ip6...


There are so many artifacts that could cause those observations that I have serious doubts that this is what is happening in those experiments.


Check out Transputers, which were programmed via Occam. They do most of the stuff that the article desires, though the hardware is restricted to a grid layout.

Another option is Erlang. At the top level it is organized with micro-services instead of functions.

Neither of them is a systems language. The old hardware had weird data and memory formats. With C, a lot of assembler could be avoided when programming this hardware. It came as a default with Unix and some other operating systems. Fortran and Pascal were kind of similar.

The most common default languages on most systems were interpreted, so you got LISP and BASIC. There was no fast hardware for that. To get stuff fast, one needed to program in assembler, unless a C compiler was available.


during the night


Looking forward to a new breakthrough. Will they find another Nobel-prize-winning medicine? Like the very cheap ivermectin that saved so many people from blindness (and various other diseases).


The problem with social media (and all media) is opinion-based censorship, which causes group-think, and the chaos of uncategorized replies.

Different opinions do matter. But due to the algorithms, the most emotional responses are promoted. There is no way to promote facts or what people think are facts.

So most discussion will be extremely emotional and not based on facts and their value. This is even true in scientific discussions.

Combined with group-think, these emotions can grow and lead to catastrophic outcomes.


> There is no way to promote facts or what people think are facts.

There is no way with existing platforms and algorithms. We need systems that actually promote the truth. Imagine if the claims (posts) you see came with a score* that correlates with whether the claim is true or false. Such a platform could help the world, assuming the scores are good.

How to calculate these scores is naturally the crux of the problem. There are infinitely many ways to do it; I call these algorithms truth heuristics. These heuristics would consider various inputs, like user-created scores and credentials, to give you a better estimate of truth than going with your gut (see the sketch after the scale below).

Users clearly need algorithmic selection and personalized scores. A one-size-fits-all solution sounds like a Ministry of Truth to me.

* I suggest ℝ on [-1,1].

-1 : Certainly false

-0.5 : Probably false

0 : Uncertain

0.5 : Probably true

1 : Certainly true
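
A minimal sketch of one such heuristic (the default weight, the credential bonus and the clipping are all arbitrary choices): aggregate user-submitted scores, weighted by how much the reader trusts each scorer, so two readers can see different scores for the same claim.

    def truth_score(ratings, trust, credentialed=frozenset()):
        """ratings: {user: score in [-1, 1]}; trust: {user: weight >= 0}."""
        num = den = 0.0
        for user, score in ratings.items():
            w = trust.get(user, 0.1)         # strangers get a small default weight
            if user in credentialed:
                w *= 2.0                     # arbitrary credential bonus
            num += w * max(-1.0, min(1.0, score))
            den += w
        return num / den if den else 0.0     # no data means 0.0, "uncertain"

    ratings = {"alice": 0.9, "bob": -0.5, "carol": 1.0}
    print(truth_score(ratings, trust={"alice": 1.0}))  # leans true
    print(truth_score(ratings, trust={"bob": 1.0}))    # leans false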


> The problem with social media (and all media) is opinion-based censorship, which causes group-think, and the chaos of uncategorized replies.

All people are biased. It's also impossible to avoid the bias needed to filter the firehose of data.

What you're describing is often a form of moderation.

> Different opinions do matter. But due to the algorithms, the most emotional responses are promoted. There is no way to promote facts or what people think are facts.

This is tuneable. We have tuned the algos for engagement, and folks engage more with stuff they emotionally react to.

People could learn to be less emotionally unstable.

> So most discussion will be extremely emotional and not based on facts and their value. This is even true in scientific discussions.

I think you're overfitting. Moderation drives a lot of how folks behave in a community.

> Combined with group-think, these emotions can grow and lead to catastrophic outcomes.

Group-think is also how we determined mammals are mammals and the earth isn't the center of the universe. Sometimes a consensus is required.


I am thinking of a moderation system that focuses on categorization instead of censorship.

There will be a bias in moderation, but that will have less of an effect when there is no deletion. If possible, the user could choose their preferred style (or bias) of moderation. If you want full freedom, you can let users select "super-users" to moderate/categorize for them.

Emotional responses and troll jokes could be separate categories, as long as they do not call for violence or break other laws.
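
A rough sketch of what that could look like (all names hypothetical): moderators attach category labels instead of deleting, and each reader chooses which moderators to follow and which categories to hide.

    # Posts are never deleted; moderators only attach category labels.
    labels = {}  # post_id -> {moderator: set of categories}

    def add_label(post_id, moderator, category):
        labels.setdefault(post_id, {}).setdefault(moderator, set()).add(category)

    def visible(post_id, trusted_moderators, hidden_categories):
        # Each reader picks their own "super-users" and their own filters.
        for mod, cats in labels.get(post_id, {}).items():
            if mod in trusted_moderators and cats & hidden_categories:
                return False
        return True

    add_label(42, "mod_a", "troll-joke")
    print(visible(42, {"mod_a"}, {"troll-joke"}))  # False: hidden for this reader
    print(visible(42, {"mod_b"}, {"troll-joke"}))  # True: this reader ignores mod_a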

Consensus is still group-think. I think it is destructive without any clear view of where it stands among other options or ideas. Like: "why exactly is the earth not the center?" A lot of consensus is also artificial, due to biased reporting, biased censorship and biased sponsorship. During discussions, people within a consensus tend to use logical fallacies, like portraying the opposition as idiots, or avoiding any valid points that the opposition brings into the discussion.

I think that people have become less intelligent due to one-sided reporting of information. With extra information, people will become smarter and more understanding of how other (smart) people think.


> categorization

This exists on Bluesky under the name "labeling": https://news.ycombinator.com/item?id=39684027


> People could learn to be less emotionally unstable.

How does it make sense to make billions of people responsible for abating the consequences of choices made by a few social media companies?

