I have translated the rather long Swift code, encumbered by manual laziness handling, into Haskell, where lists are inherently lazy:
type BitSequence = [Bool]
allSatisfy :: (BitSequence -> Bool) -> Bool
allSatisfy p = not (anySatisfy (not . p))
anySatisfy :: (BitSequence -> Bool) -> Bool
anySatisfy p = p (find p)
find :: (BitSequence -> Bool) -> BitSequence
find p =
  if anySatisfy (\s -> p (False : s))
    then False : find (\s -> p (False : s))
    else True : find (\s -> p (True : s))
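A quick sanity check in ghci (the example predicate is mine, not from the original):

    > let p s = s !! 1 && not (s !! 3)
    > take 6 (find p)
    [False,True,False,False,False,False]
    > anySatisfy p
    True
    > allSatisfy p
    False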
The code still works the same, but now it is clearer what actually happens. The lexical call chain goes like this:
anySatisfy p
===>
find p
===>
anySatisfy (\s -> p (False : s))
===>
find (\s -> p (False : s))
===>
anySatisfy (\s -> (\s -> p (False : s)) (False : s))
===>
find (\s -> (\s -> p (False : s)) (False : s))
=== (by alpha conversion)
find (\s -> (\s1 -> p (False : s1)) (False : s))
=== (by beta reduction)
find (\s -> p (False : False : s))
where I used the long arrow to mean a deeper lexical call chain, and triple-equal to mean a single step of beta reduction and/or alpha conversion.
In other words, as long as the given predicate p consumes only a finite prefix of the infinite bit sequence to produce a result, this will successfully return a result. That is quite intuitive, because the function is essentially doing an exhaustive search. It has nothing to do with topology or continuous functions or compactness, IMO. It also has nothing to do with the Curry–Howard correspondence.
Here's an example where the function fails to terminate:
anySatisfy and
Here's another example:
let parity (False : s) = parity s; parity (True : s) = not (parity s) in anySatisfy parity

(parity recurses through the entire infinite sequence without ever returning, so it is not a total predicate and the search diverges.)
I have to make a correction to my earlier comment (can't edit any more). I previously stated that this has nothing to do with topology or continuous functions or compactness, but in fact it does. The space of bit sequences is actually the Cantor space. I'll try to explain more precisely the part that's only vaguely told as a story in the original article.
So topologically, a continuous function (from one topological space to another) is defined as one where the preimage of every open set is open. (For real numbers, this definition is equivalent to the traditional epsilon-delta definition.)
Now for bit sequences we can define a topology via neighborhoods: a neighborhood of a bit sequence is simply the set of sequences that share a finite prefix with it. Then an open set is one where, for every point (bit sequence) in it, there's a finite prefix such that all sequences with this finite prefix are in the open set. Compare this with the traditional open-set definition for real numbers: a set is open if for every real number in it, there's an epsilon-neighborhood such that all real numbers within this neighborhood are in the open set. In fact it's the same definition once we choose a definition for neighborhoods.
Then for booleans we just use the discrete topology so any subset is open.
So what I was saying in the parent comment (a function only examining a finite prefix to produce a boolean) basically implies that it's continuous: for each of the two possible output booleans, the preimage must be a union of neighborhoods determined by finite prefixes. If a function only examines a finite prefix of a bit sequence, its preimage cannot be more than such a union of neighborhoods.
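To make this concrete, here is a small example of my own, reusing the BitSequence type from the Haskell translation above:

    -- This predicate examines only a length-3 prefix of its argument.
    p :: BitSequence -> Bool
    p s = s !! 2

    -- Its preimage of True is the union of the four neighborhoods given
    -- by the length-3 prefixes whose last bit is True:
    truePrefixes :: [[Bool]]
    truePrefixes = [[b0, b1, True] | b0 <- [False, True], b1 <- [False, True]]

Every sequence satisfying p extends one of these four finite prefixes, so the preimage is a union of neighborhoods, i.e. open.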
(On the other hand, it is possible that a function chooses to examine more than just a finite prefix and is still continuous. In this case we don't know that the function will throw away the information gleaned from examining the infinite sequence; it just won't terminate.)
> Here's an example where the function fails to terminate:
I figured this seemed too good to be true. In fact, isn't there a simple reduction to the halting problem? Make a "sequence"
i -> M halts in i steps
for an arbitrary Turing Machine `M`. You can implement the "bit lookups" by simulating `M`. The author says their Swift code can run `anySatisfy` on this "sequence" in finite time, right?
EDIT: quoted at the top, but afaict not mentioned later, the article says,
>> It is well known that it is impossible to define equality between arbitrary functions. However, there is a large class of functions for which we can determine equality
It would have been nice if the author had been explicit about the class of functions for which their program terminates.
It works for any decidable/recursive predicate on bit sequences, i.e. one that is guaranteed to halt for any input.
`and` on an infinite bit sequence is not guaranteed to halt because it is only co-recursively enumerable: it will reject bit sequences that contain a `False`, but it will not halt on bit sequences that contain only `True`, since verifying that a bit sequence contains only `True` requires checking every element of the sequence.
In ghci,
> and $ repeat True
<doesn't terminate>
> and $ (replicate 10000 True) ++ (False: repeat True)
False
I noticed my post missed the point a bit... I was thinking about functions that check properties of bit sequences, not functions that check properties of predicates on bit sequences.
I get how it works now, thanks. Wish I could edit/delete my old post :-/.
Just to check my understanding: a terminating program that decides bit sequences will have
- a maximum number of bit checks it will make regardless of input, and
- a maximum bit index it will access regardless of input.
It's worth noting that the references in the blog post—including "Seemingly Impossible Functional Programs" which the title refers to—are all written in Haskell. Haskell is the natural setting to experiment with this kind of code.
That's slightly wrong. When you substitute the definition of anySatisfy, you need to note that it uses the argument p twice, so you need to substitute both uses, not just the second one.
> It’s going to seem incredible, almost magical, but be assured there are no tricks involved.
The trick involved is: BitSequence.find is a naïve brute-force search, but the predicates only check n bits (for some n), and laziness ensures that at most n bits are generated.
We run into the expected problems if n is large, or if there is no such n. For example, a predicate that inspects every bit (such as the `anySatisfy and` example above) will never terminate. (Disclaimer: I don't write Swift.)
While it's nice to try smuggling in some mathematics at the end, I think a better conclusion for the article to draw is: if you ask a computer to do a brute-force search on some fixed number of bits, it will do it, even where it has to compute this number of bits itself along the way.
If a predicate only looks at a finite number, n, of bits of a bit sequence, then you can brute-force search the space of the 2^n possible prefixes of length n. And a predicate can't possibly look at infinitely many bits of a bit sequence, as the function that resolves that predicate would never terminate.
See also “Seemingly impossible functional programs” by Martín Escardó, from 2007 ([1], also at [2]).
It covers similar ground and is referenced at the bottom of the article (along with three other references also by Martín Escardó), but I thought it worth mentioning explicitly because I remember reading it many years ago and finding it really fun.
> The BitSequence type holds infinitely many values. In fact, it holds an unconscionable number of values. It has more values than String does.
Assuming UInt is infinite precision and Strings can be infinitely long, there is an easy injection from BitSequence to String, so the cardinality of String is at least that of BitSequence.
If we bend the definition a bit and allow UInt to be infinite precision (which it is not), there's no reason not to allow String to have infinite length as well.
Also, the article mentioned multiple times how awesomely large the cardinality of BitSequence is. But in fact, the cardinality is the same as that of the real numbers: both have cardinality $2^{\aleph_0}$. So its size can be grasped fairly intuitively.
> Assuming UInt is infinite precision and Strings can be infinitely long, there is an easy injection from BitSequence to String, so the cardinality of String is at least that of BitSequence.
You’re right that a particular series of bits can be mapped to a string, but here the article is talking about the function that produces the bit sequence, not the bit sequence itself. There are an infinite number of functions that can create a particular bit sequence, and each bit sequence can be mapped to a single string, so there must be more functions than strings.
> There are an infinite number of functions that can create a particular bit sequence, and each bit sequence can be mapped to a single string, so there must be more functions than strings.
I don't think this logic works. For example, there are an infinite number of "rational numbers" of the form np/p where n, p ∈ ℤ, and each of these can be mapped to a single integer (namely, n). But there are not more of these "rational numbers" than integers.
When you have an injection from A to B, you only know that the cardinality of B is at least that of A. Maybe their cardinalities are the same, and maybe the cardinality of B is strictly bigger.
For your example, we can construct an injection from rational numbers to integers, as well as an injection from integers to rational numbers. Therefore, by the Schröder–Bernstein theorem they are equinumerous.
Considering cardinalities, functions and strings are equinumerous. Using JavaScript notation, ‘fn => fn.toString()’ is an injective function from functions to strings and ‘str => () => str’ is an injective function from strings to functions.
A more handwavy, but conceptually sound argument is that these functions that generate bit sequences can themselves be represented as Turing machines. And if they can be represented as Turing machines, we can write them down as a string (or a bit sequence).
In general, a bunch of magic happens once you recognize (heh) that the sets of Turing machines, strings (binary or otherwise), and integers are all countable and that you can biject between them fairly cleanly.
I think that probably in Swift each individual string is required to be of finite length, as in most programming languages. The set of finite (but arbitrary) length strings drawn from a finite set of characters is countable.
On the other hand this type BitSequence (pretending UInt contains all non-negative integers) is the set of all countably infinite sequences of bits. That set, as you explained, has the cardinality of the continuum.
> String is a possibly infinitely long ordered list of fixed-size characters, in which case it is countably finite.
BitSequence, as defined in the blogpost, is an "infinitely long ordered list of fixed-size characters" (where the size is 1 bit in this case) - the set of all BitSequence values is infinite - and uncountable.
By the same logic, the set of infinite strings is also uncountable (as the GP says, it's trivial to do an injection from BitSequence to infinite strings, so it can't have lower cardinality).
Yes, you're right. I have amended my comment; I had meant to say that Strings must have finite length. The "meat" of my comment was an argument that Strings could not be infinitely long, which was exactly the opposite of what I actually ended up saying ;)
Why? If your alphabet is [0-9] then it seems like your set is just the integers, and those are countably infinite. This case is even more restricted, where you only have two elements in the alphabet. So where does the continuum come in?
There exists no integer whose representation is an infinite sequence of digits. In other words, "1..." is not an integer, neither is "...1". Even though there is an infinite number of integers, every integer has a finite representation. On the other hand, the set of infinite bitstrings has a trivial bijection to the real numbers in the interval [0, 1) and thus to the real numbers as a whole. This is a key point in Cantor's diagonal argument.
Just to help me out (I don't doubt you, I'm just struggling to work it out), can you tell me what that trivial bijection from the set of infinite bitstrings to [0, 1) is?
I must correct myself a bit: what is trivial here is a surjection from bitstrings to reals, but this is plenty enough to prove that there are strictly more bitstrings than integers. Also, the mapping is to [0, 1] inclusive, as is shown below.
For any bitstring a = a₁a₂a₃a₄…, aᵢ ∈ {0,1}, simply prefix "0." and interpret it as the binary representation of the corresponding real number; in other words, construct the real number 2⁻¹a₁ + 2⁻²a₂ + 2⁻³a₃ + 2⁻⁴a₄ + …
Now, it should be evident that this mapping covers all the reals, but some reals have multiple representations. For instance, both 0.1 and 0.0111… represent the same number. So the function is a surjection. As another example, the bitstring of all ones, 1…, maps to the binary number 0.1… which is famously equal to 1. [1]
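A small Haskell illustration of this mapping (the function name and the truncation parameter n are mine): approximate the real number using only the first n bits of the expansion.

    -- Sum 2^-i over the set bits among the first n bits of the sequence.
    approx :: Int -> [Bool] -> Double
    approx n bits = sum [ 2 ^^ negate i | (i, True) <- zip [1 ..] (take n bits) ]

For instance, `approx 30 (repeat True)` is within 2^-30 of 1, matching the remark that the all-ones bitstring maps to 0.111… = 1, and `approx 30 (cycle [False, True])` approaches 1/3 = 0.0101…₂.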
We're not talking about the size of single values, we're talking about the cardinality of the set of all distinct values of a given type. I.e. "How many different Strings are there?"
Yes, I'm aware of that. The set of all strings over a finite alphabet should be countably infinite, since you can list them all. For example, for the alphabet {0, 1} all your strings are just the binary representations of natural numbers, which is countable since ℕ is.
If you only allow arbitrarily long, but finite, strings then the set is countable. If you allow infinitely long strings then the cardinality of that set is uncountable. For a trivial example consider the string representation of every element in ℝ in the alphabet {0-9,.}
The point of the original comment was that if you consider BitSequences to be of infinite length, then for consistency you also have to allow infinite-length strings. Otherwise you're ignoring the limitations of machine representation for one type but not the other, which is comparing apples to oranges.
Here is a more imperative version (using exceptions); the algorithm is described below, with a code sketch after it.
The assumption is (as in the article) that each predicate terminates for every bit sequence and that the predicates are actual functions, i.e. they have no state. Also, predicates don’t catch exceptions.
We can implement find as follows: Start with n = 0 and generate the bit sequences with all prefixes of length n. Have them throw an exception if somebody tries to access an index past that prefix. This way, you get finitely many partial bit sequences. Apply the predicate to all these sequences. There are three cases:
- The predicate returns true for one of the partial sequences. Because it doesn’t catch exceptions, it only looked at values within the prefix. Hence, the result of the predicate doesn’t change if we change the rest of the sequence—for example, we can set the rest of the sequence to 0 and we have found a sequence that satisfies the predicate.
- The predicate returns false for all our partial sequences. Similarly to the case above, the result does not depend on bits beyond the prefix, so no matter what we do, we will not find a sequence satisfying the predicate, and we can return any sequence (by the definition of find, it returns an arbitrary sequence if the predicate is unsatisfiable).
- For some of our partial sequences, we get an exception. Then we increase n and try again.
This algorithm has to terminate: if it doesn’t, we get the third case each time. But taking the bit sequence that we get by always extending our prefix on a path that keeps throwing exceptions, we get a bit sequence on which our predicate does not terminate, contrary to our assumption that it always does.
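The code itself didn't survive into this thread, so here is a minimal Haskell sketch of the three cases just described (findImp and partial are my names; Control.Exception's try/evaluate stand in for the imperative exception handling):

    import Control.Exception (ErrorCall, evaluate, try)
    import Control.Monad (replicateM)

    type BitSequence = [Bool]

    -- A partial bit sequence: a finite prefix that throws if the
    -- predicate reads past it.
    partial :: [Bool] -> BitSequence
    partial prefix = prefix ++ error "read past prefix"

    findImp :: (BitSequence -> Bool) -> IO BitSequence
    findImp p = go 0
      where
        go n = do
          let qs = replicateM n [False, True]  -- all 2^n prefixes of length n
          results <- mapM (\q -> do r <- try (evaluate (p (partial q)))
                                    pure (q, r))
                          qs
          case [q | (q, Right True) <- results] of
            (q : _) -> pure (q ++ repeat False)  -- case 1: witness found; pad arbitrarily
            []
              | all ok (map snd results) -> pure (repeat False)  -- case 2: unsatisfiable
              | otherwise -> go (n + 1)  -- case 3: some prefix was too short; deepen
        ok :: Either ErrorCall Bool -> Bool
        ok (Right False) = True
        ok _             = False

If the predicate is total, case 3 can only occur finitely often, by the argument above.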
This algorithm is only total if the predicates supplied are total. Since `x == one AND x == zero` is not total, the algorithm will diverge. It will not return an incorrect answer, though.
He lost me when he started talking about the Halting Problem.
How is checking all Swift.String or Swift.Int values equivalent to that? There are a lot of Swift.Int values to check (2^64, on a 64-bit platform), and even more Swift.String values (every combination of characters up to the maximum length), but simply being infeasibly slow to run on your Pentium "in a reasonable amount of time" doesn't mean it's an "impossible function to implement, and also equivalent to the halting problem". We could easily write a function to enumerate all Swift.Int values (though it might take 100 years to run).
He throws out the parenthetical remark "it’s best to think of Int as modeling the infinite set of all integers" without explanation. When you're discussing whether something is theoretically impossible or not, treating a finite set as equivalent to an infinite set brushes over a pretty major distinction.
But there are an infinite number of strings (and indeed every Turing machine may be represented as a String in Swift), so it is indeed related to the halting problem.
Swift.String is composed of a finite set of symbols, with a finite maximum length. How can you create an infinite number of these?
The article ignores the distinction in many places, but "String" in a Swift function means "the Swift.String type", not "an abstract string type like those used to represent Turing machine state".
You're not wrong, but the author's code in no way relies on the fact that you could theoretically iterate over all the valid Int values in Swift, so it would have made no difference if the Int type were somehow truly infinite.
And the author clearly acknowledges this:
>it’s best to think of Int as modeling the infinite set of all integers
Hmm, some questions from someone who isn't familiar with Swift or much functional programming:
- What does this have to do with Swift? Other commenters seem to agree that Haskell would be more appropriate.
- What does this have to do with functional programming? As far as I can tell, an algorithm about bitstrings has been wrapped up and reframed in terms of types. Could you not do this in C without too much headache?
- The recursion aspect confuses me a bit; is it possible to "find" a predicate like "all indices of the bitstring are 1"? Or is that not an allowed type of predicate?
- Again with the recursion, how does the "find" procedure know to terminate if no viable bit string exists?
EDIT: I should have read kccqzy's post more carefully. I believe it answers the third and fourth questions.
For the first two: not much, and somewhat. The commenters are correct that this logic is much more easily expressed in Haskell, since it has better support for this kind of programming. Swift is able to do this because it has lazy evaluation as an opt-in feature as well as the parts of the type system that matter; I don’t think you would be able to port this to C easily.
For your third question, that predicate doesn't work.
For your fourth question, I didn't quite explain that but it's due to laziness. All of this recursion happens inside a lazy function argument. So if p chooses to return True without inspecting more values from the bit sequence, the recursion is then stopped.
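A quick illustration in ghci (examples are mine): a predicate that can decide without forcing any bits stops the recursion immediately.

    > anySatisfy (const True)
    True
    > anySatisfy (\s -> head s || not (head s))
    True

In the first case, p returns True without inspecting the sequence at all, so find is never evaluated; in the second, only the first bit is ever forced.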
I don't get it. What prevents us from doing the exact same approach for the natural numbers? I.e. represent the naturals lazily via the successor function and use the same exhaustive search as the OP for BitSequence. Of course if the predicate doesn't stop evaluating successor(x) for any finite x then we're out of luck and the computation doesn't halt, but this caveat also holds for OP's BitSequence, as others have pointed out in this thread...
So this article shows how to check equality on `(nat -> bool) -> bool`. Your question, I think, is to figure out how to check `nat -> bool` for equality.
The issue is that there are more properties we can check of a `nat` than of a `nat -> bool`. Given a `nat -> bool` argument, a total predicate `(nat -> bool) -> bool` can only ever check some `i` indices, so its behavior is basically determined by the first `i` indices. And that's finite. This algorithm implicitly finds the indices considered by the supplied predicate and then brute-forces all those entries.
With `nat` there's no such finite cutoff point: it never suffices to test a predicate `nat -> bool` on finitely many entries to fully determine it. So we cannot do the same brute-force search.
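A concrete illustration of the missing cutoff (the predicates f and g are my examples):

    -- No finite set of test points can certify f == g: these two agree
    -- on every input you are ever likely to try, yet differ at 10^100.
    f, g :: Integer -> Bool
    f n = even n
    g n = even n && n /= 10 ^ 100

For predicates on bit sequences the situation is different: a total predicate can only force finitely many bits before returning, and that is exactly the cutoff the search exploits.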
But a natural number can be viewed as a function from the natural numbers to {0, 1} via its binary expansion.
There's a trivial isomorphism between the natural numbers and binary sequences with finitely many ones, so a predicate of type `nat -> bool` can be viewed as a predicate of type `(nat -> bool) -> bool`.
This article does a great job of pointing out just how needlessly difficult computer science makes itself. Example:
!(a || b) == (!a && !b)
`a` and `b`. They're meant to represent, well, anything really, but to the average person (or weirdos like me) it would be infinitely easier to understand if the author wrote it like this:
!(Bob || Sally) == (!Bob && !Sally)
I don't know why, but whenever I encounter single-characters-as-math-variables in computer science, I have to spend far more time unpacking such statements into things my brain can understand. It is so much easier if such statements are written as nouns that describe actual things.
I remember having a hard time working with C for loops until a friend of mine wrote one like this:
for (int number = 0; number < 10; number = number + 1)
Suddenly it all made sense! I am not alone in this! I have taught programming to kids and others who have struggled to learn programming and this seems to be how normal people think.
Note: We, as a geeky community are not normal. I just have a strange exception in that at least one little part of me thinks like a normal person and that is with "what we name things" :)
> It is so much easier if such statements are written as nouns that describe actual things.
But there is nothing in 'Sally' or 'Bob' that is more descriptive than 'a' or 'b' in this case; in fact, using 'Sally' or 'Bob' implies that whatever you're talking about specifically involves people, however that is of course not the case.
Using non-descriptive single letter names makes total sense in a context like this, since the only semantic associated with those particular elements is equivalence.
> But there is nothing in 'Sally' or 'Bob' that is more descriptive than 'a' or 'b' in this case; in fact, using 'Sally' or 'Bob' implies that whatever you're talking about specifically involves people, however that is of course not the case.
I will trot out "it is standard pedagogical practice..."
If someone doesn't have an abstract mental framework (that mathematics and logic require for understanding) to hang stuff like this on, then it is very helpful to restate things in terms that they are familiar with.
Once they understand the specific, then it's time to make it abstract: take what they now understand and generalize it - say, from people to predicates in general.
> Once they understand the specific, then it's time to make it abstract: take what they now understand and generalize it - say, from people to predicates in general.
What does "!Bob" mean, though? Are we negating the person, or their name? Neither one makes sense to me.
Maybe I'm too far gone down the computer science rabbit hole, but this is far more confusing to me than "!a".
> Are we negating the person, or their name? Neither one makes sense to me.
This is a fair statement. I think a treatment of predicates would be necessary before getting to using names as predicates. "When we say 'Bob', we mean 'Bob has enough money for a bus fare'."
Then, if we want to say "Bob doesn't have enough money for a bus fare" we can express it with !Bob (which means, in this case 'not Bob').
I agree that using names can be confusing. I think meaning needs to be addressed first.
I don't think you really need to take a detour into discussing meaning if you use meaningful names for the construct, though. For example, since here we're talking about a boolean expression, name the variables like you would booleans - such as "isBob".
It makes it WAY easier to understand exactly why !(a || b) == !a && !b.
Bob and Sally are things that people already understand and relate to. a and b are not, so there is an extra layer of indirection there when attempting to reason about them.
> Bob and Sally are things that people already understand and relate to.
People understand they're people's names. But that's not a relevant detail for the equation, so why add that to the problem? It complicates, rather than simplifies, my understanding of it.
> !(Milk || Sugar) == (!Milk && !Sugar), duh.
I don't get what the 'duh' is. You've added nothing to the equation! They're just different names!
`with_milk` and `with_sugar` are booleans that people have (lots of) experience with. this seems to help us think about this stuff – we've seen how milk/sugar "work" many times, and we've already internalized some model of that situation – we already "get it" on an intuitive level. hence, reusing that model for general Boolean variables reduces the cognitive effort required[0].
for a similar example, see [1]. summary: a study presented people with a logical problem. they failed miserably if the question was about abstract math-y stuff, but got it right most of the time if the variables were like `X is underage` and `X can buy beer` (explained in the "Policing social rules" section)
---
[0] as a bonus, we can quickly sanity-check if our reasoning is correct – if there's `no (milk or sugar)` in my coffee, there's obviously `(no milk) and (no sugar)` in it!
No. a and b are things I understand better - they're just "two facts otherwise unspecified". Sally and Bob bring with them all this baggage about humans and names and stuff, so it's kind of stupid to use them as descriptors. Two things: firstly, everybody and their grandma knows about de Morgan's laws, so this is a bad example; and secondly, humans best learn abstract ideas through concrete examples, yes, but if you really need to explain something using examples like "Milk" and "Sugar", then that is best done in comments, not in the code, whose very purpose is to unambiguously specify the abstract idea.
This brings up an interesting idea, though - are there any programming languages that "learn" what the user wants by example? What would that even look like?
I'm with them, and if I had to boil it down to one sentence: Zero mental overhead for representing the idea.
I think visually for anything logic-based - shapes, lines, Venn diagrams, etc. - so using such abstract no-meaning variables gives me nothing to put into that mental image. So unless the logic concept is familiar enough, I have to invent something for the variable before I can place it in the construct, in order to learn the construct.
Once the logic construct is familiar to me, I have no issues with the abstract variables at that point.
I think you're seeing the influence of math on programming conventions. If I think of something like (!a && !b), I can immediately start doing boolean algebra in my head. I can manipulate it like an algebraic expression and get the job done 10x faster than if I had to work through the "real" meaning.
Anyone's productivity in programming can be enhanced by taking a mathematical outlook, although I suppose the beauty of programming is that you don't have to.
To throw in a Dijkstra, "The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise." What do you do when certain semantic levels require some mathematical background - do you choose a less precise abstraction or do you just require that background? I'm not sure what the answer should be for this kind of blog post, but at least intro-to-programming materials should be as accessible as possible.
Because mathematics is about the abstract patterns and structures rather than the meaning of individual variables (as it is in programming).
More bluntly, it doesn’t matter what a and b are here, only the overall meaning of De Morgan's laws. Arguably, giving them names that are any more than minimal signifiers actually detracts from what's important, i.e. the relationship expressed by the law, not the variables Bob and Sally. Personally, having the variables called Bob and Sally actually makes it harder, because the whole expression immediately becomes far more verbose.
> It’s so large that it can hold an infinite number of disjoint copies of the natural numbers inside it!
Nit: you can embed infinitely many disjoint copies of the natural numbers inside the natural numbers as well, with room to spare. Consider, for each prime, the positive integer powers of that prime.
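A sketch of that embedding in Haskell (the names are mine); by unique factorization, powers of distinct primes never collide:

    primes :: [Integer]
    primes = sieve [2 ..]
      where sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    -- embed n k: the image of k (for k >= 1) in the n-th copy of the
    -- naturals. We start at k = 1 because p^0 = 1 is shared by all copies.
    embed :: Int -> Integer -> Integer
    embed n k = (primes !! n) ^ k

And plenty of naturals, like 6 = 2 * 3, land in no copy at all, hence the room to spare.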
> the negation of a disjunction is the conjunction of the negations
As a non-English speaker I hated this definition until someone told me to remember
> 'break the line, change the sign'.
This is of course referring to the representation in Boolean algebra, where NOT(A AND B) is written with a single long horizontal line on top of (A AND B). Breaking that line turns it into NOT(A) OR NOT(B), i.e., Ā | B̄, in accordance with De Morgan's law.
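In the overline notation, the mnemonic covers both laws; in LaTeX:

    \overline{A \land B} = \overline{A} \lor \overline{B}
    \overline{A \lor B}  = \overline{A} \land \overline{B}

Breaking the line in the middle flips the connective underneath it.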
In university I had a course (curse?) called "logic in formal systems" (or something like that); I doubt many people truly understood the logic principles demonstrated there using only the course's materials. Coincidentally, that course insisted on using ∧, ∨, ¬, etc., while another course that also dabbled in a lot of logic used more natural symbols and was far better in terms of presentation and explanations. (To this day I have to look up ∧, ∨, con- and disjunction to be sure which is which.)
He means that it's impossible to write such a function from the natural numbers. It's possible for all finite types such as UInt in Swift.
In general, you can do this exhaustive search for any "compact" type, and there are a lot of compact types. In particular, the (total continuous) functions from a discrete type (i.e. a type with decidable equality) into a compact type form a compact type. And as the article shows, the (total continuous) functions from a compact type into a discrete type form a discrete type. Together with the observations that the type of natural numbers is discrete and that every finite type is both compact and discrete, this already gives you infinitely many compact types to play with.
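Here is a sketch of how such searchers compose, in the style of Escardó's searchable sets (the Search type and the names are mine, not from the article):

    -- A searcher: given a total predicate, return a value satisfying it
    -- whenever any value does (otherwise an arbitrary value).
    newtype Search a = Search { search :: (a -> Bool) -> a }

    searchBool :: Search Bool
    searchBool = Search $ \p -> p True  -- if p True fails, False is the only hope

    -- Products of searchable types are searchable:
    searchPair :: Search a -> Search b -> Search (a, b)
    searchPair sa sb = Search $ \p ->
      let a = search sa (\a' -> p (a', search sb (\b' -> p (a', b'))))
          b = search sb (\b' -> p (a, b'))
      in (a, b)

The article's find is exactly such a searcher for BitSequence, a countable product of searchable Bools.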
One caveat with this whole work (which goes back to Martin Escardo by the way) is that this doesn't work with general recursion. E.g. in a language with general recursion you can write a program
kleene : (nat -> bool) -> bool
which computes the paths in the Kleene tree, where, roughly, "the test terminates on all total computable paths, but diverges on all uncomputable paths". However, if you have a (total) language with, e.g., only structural recursion, everything works out and you can apply this epsilon operator to arbitrary programs.
Thanks for your reply! I think I understood some of your comment, but got very lost at the end. I don't know anything about: Kleene trees [1], "total" languages [2], general recursion vs structural recursion [3], or epsilon operators [4]. (But I've looked those up and provided some links.)
I think I understood some of the first paragraph. I'll have to do some mathematics and computer science courses on Khan Academy.
Even if it is balderdash, isn't there value in analyzing(/refuting) it? I submitted the post (it's not mine) because I was interested in hearing people with more maths than I have expand/comment on it -- perhaps even people who had read the original papers.
If there's something particular in the post that you find problematic, I personally am all ears.
The author is correct that checking whether two functions with infinite domains are equal is often impossible. He also states some mathematical theorems correctly. But people in this thread have provided counterexamples to his claim, which means he needs to fix the mistakes he made.
If you are interested in what serious mathematicians have to say on this topic see https://homotopytypetheory.org/.
It is possible in certain cases, that's the point of the article. One of the contributors to HoTT is the author of the paper introducing these algorithms :)