They make sense where you want essentially constant-time additions and deletes regardless of list size. They also play well with parallel systems, since lock-free implementations exist.
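To make the constant-time claim concrete: a splice only touches the nodes adjacent to it, so the cost is independent of how long the list is. A minimal sketch in C, with made-up node and function names (nothing here is taken from a particular library):

```c
#include <stddef.h>

/* Illustrative singly linked node; the names are invented for this sketch. */
struct node {
    struct node *next;
    int value;
};

/* Insert 'n' right after 'pos': two pointer writes, however long the list is.
 * Compare with an array insert, which has to shift everything after the slot. */
static void insert_after(struct node *pos, struct node *n)
{
    n->next = pos->next;
    pos->next = n;
}

/* Remove the node following 'pos', again in constant time. */
static struct node *remove_after(struct node *pos)
{
    struct node *victim = pos->next;
    if (victim)
        pos->next = victim->next;
    return victim;
}
```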
I'm happy to buy the argument that they aren't a great default for cases where you know you will be dealing with a small unordered collection. Storing HTTP headers? Don't use a linked list. Large enough to benefit from binary search? Don't use a linked list.
In general I'd agree with you that a simple table is a better first approach. Brute force and realloc can be surprisingly efficient. Others have made the argument that a hash beats a table even for very small numbers, and thus is a better default: http://news.ycombinator.com/item?id=1860917
The Naggum article (parent of the linked comment) is a fun rant in favor of linked lists over hash tables, although I think Naggum would probably also agree with the argument that you shouldn't use even a linked list when a simple array will do.
What's "large"? Even at thousands or tens of thousands, vector-style lists can be a win simply because they're faster to iterate through, they optimize locality, and they don't require pointer chasing.
At very large sizes, deletions from the interior can be painful, but at very large sizes you need to be customizing for performance (if that matters) anyway. Instead of deleting, for instance, you can invalidate, and then amortize the memcpy across many deletions: a straightforward time/space tradeoff.
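A rough sketch of that invalidate-then-compact idea, assuming a hypothetical flat array of records with a tombstone flag (the type and function names are illustrative, not from any particular codebase):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical record type with a tombstone flag. */
struct item {
    bool dead;
    int  value;
};

struct item_vec {
    struct item *data;
    size_t len;        /* slots in use, live or dead */
    size_t dead_count;
};

/* "Deleting" just marks the slot: O(1), no shifting. */
static void item_vec_remove(struct item_vec *v, size_t i)
{
    if (!v->data[i].dead) {
        v->data[i].dead = true;
        v->dead_count++;
    }
}

/* One pass squeezes out all tombstones at once, so the copying cost is
 * amortized over many deletions (e.g. compact once half the slots are dead). */
static void item_vec_compact(struct item_vec *v)
{
    size_t out = 0;
    for (size_t i = 0; i < v->len; i++) {
        if (!v->data[i].dead)
            v->data[out++] = v->data[i];
    }
    v->len = out;
    v->dead_count = 0;
}
```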
The big problem I have with "performant" linked lists is that malloc is death to performance. The biggest profiler payoffs I've ever gotten (order-of-magnitude improvements) have come from getting rid of calls to malloc. Yes, you can custom-alloc a linked list, but you're then getting to a place where lists and vectors are starting to converge on each other.
I think C programmers use linked lists instead of "vectors" because realloc resizing is scary, and because every CS student learns linked lists in their first month of class.
The C-style linked lists that TFA discusses don't involve any dynamic allocation. They're intrusive structures that link locations together, not containers that are responsible for memory ownership and management.
Intrusive data structures are actually very malloc friendly, because you can embed the links for multiple lists in structures that are allocated sequentially, and insert/remove at will without any allocation or deallocation whatsoever. This is of course why C programmers use linked lists heavily.
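For illustration, here is roughly what such an intrusive list looks like in C, loosely in the style of the kernel's list.h; all the names below are invented for this sketch rather than taken from TFA:

```c
#include <stddef.h>

/* Embedded link; the structure it lives in owns the memory. */
struct list_link {
    struct list_link *prev, *next;
};

/* Circular list with a sentinel head. */
static void list_init(struct list_link *head)
{
    head->prev = head->next = head;
}

/* Insert 'n' after 'where': pure pointer surgery, no allocation. */
static void list_insert(struct list_link *where, struct list_link *n)
{
    n->prev = where;
    n->next = where->next;
    where->next->prev = n;
    where->next = n;
}

/* Unlink 'n' from whatever list it is on, again without touching malloc. */
static void list_remove(struct list_link *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
}

/* Recover the containing object from an embedded link. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* One allocation per connection, yet it can sit on two lists at once. */
struct connection {
    int fd;
    struct list_link by_fd;     /* e.g. global list of open connections */
    struct list_link by_timer;  /* e.g. a timeout bucket */
};
```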
If those CS students pay attention in their algorithms class, they'll learn that the amortized cost of insertions and removals at the end of a geometrically resized array is still constant anyway. As you say, the overhead of a malloc tremendously dominates a few instructions' worth of logic for capacity checks.
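A hedged sketch of that amortized-constant append, assuming a hypothetical growable int vector resized by doubling:

```c
#include <stdlib.h>

/* Hypothetical growable int array. */
struct int_vec {
    int    *data;
    size_t  len;
    size_t  cap;
};

/* Append with doubling growth: most calls are a bounds check and a store,
 * and the occasional realloc is amortized over all the cheap appends, so
 * the average cost per push stays constant. */
static int int_vec_push(struct int_vec *v, int x)
{
    if (v->len == v->cap) {
        size_t new_cap = v->cap ? v->cap * 2 : 8;
        int *p = realloc(v->data, new_cap * sizeof *p);
        if (!p)
            return -1;          /* leave the vector untouched on failure */
        v->data = p;
        v->cap = new_cap;
    }
    v->data[v->len++] = x;
    return 0;
}
```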