Hacker News

The trouble (probably not for basic math or even low-level analysis) is that what we refer to as "logic" or a logical reasoning error isn't uniform across different axiomatic systems, and the specific interpretation is a fairly human, social activity that in many cases can't readily be "checked" for correctness against some baseline notion of "logic". The symbol φ, for instance, has a variety of significations in different contexts (a generic well-formed formula in logic, Euler's totient function in number theory, the golden ratio, an angle in physics), and our interpreter-bot (GPT) might not be able to both logically integrate it and loosely interpret it at the same time. Humans have the capacity to apprehend both consistent and complete logical systems: they interpret at the level of the text (at the level of the weave of signification), and any generally intelligent AI would have to mimic that behavior of constant, on-the-fly dynamic changes to its network at the appearance of every new signifier, the same way a human does.
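As a toy sketch of the point about φ: the same signifier resolves to entirely different significations depending on context, so there is no context-free "correct" reading to check against. The mapping and function names below are hypothetical, purely for illustration:

```python
# Toy illustration (hypothetical mapping): one signifier, many significations,
# resolvable only relative to a context -- there is no context-free answer.
SIGNIFICATIONS = {
    "phi": {
        "logic": "an arbitrary well-formed formula",
        "number theory": "Euler's totient function",
        "geometry": "the golden ratio, (1 + sqrt(5)) / 2",
        "physics": "an azimuthal angle (or a scalar field)",
    },
}

def interpret(symbol: str, context: str) -> str:
    """Resolve a symbol against a context; fail loudly when none fits."""
    readings = SIGNIFICATIONS.get(symbol, {})
    return readings.get(context, f"unresolved: '{symbol}' has no fixed meaning here")

print(interpret("phi", "logic"))    # an arbitrary well-formed formula
print(interpret("phi", "physics"))  # an azimuthal angle (or a scalar field)
print(interpret("phi", "ethics"))   # unresolved: 'phi' has no fixed meaning here
```

Of course, the comment's point is that a static lookup table like this is exactly what doesn't suffice: the human reader builds the context on the fly from the text itself.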

