
Looks very nice, although it always takes me a bit to figure out what they're talking about with these sorts of things because I have to remind myself that most ML/DL stuff is supervised. What I research is unsupervised.

They kind of have this weird dissing of unsupervised scenarios, though. It's not that supervised or unsupervised is better or worse; they're just suited to different problems. They can talk up their product without needing to criticize a whole problem domain.

It's like if you were making motors for boats, and then started talking about "these crazy people who think it's better to fly." ???


I see how that came across as obnoxious, so thanks for the perspective.

I do think there's a pretty common failure mode here, though. Teams who don't have much experience with ML often take "We don't have much data" as a fixed parameter of their problem, and don't see that it's something they can decide to change. That can lead to a lot of time spent experimenting with different unsupervised approaches that are a poor fit for what they're trying to do.


Can you say a bit more about what you mean by "it's very dangerous with the amounts of noise"?


Network accuracy crashes


A link to the actual paper:

http://www.nature.com/nature/journal/v541/n7638/full/nature2...

This method has actually been discussed for some time now, and I'm surprised the abstract wasn't framed that way. I think it was referred to as the "oracle method" in a prior publication, although it's been a while. I tried looking for it, but all the keywords I'd think to search first bring up too many unrelated hits.

The previous paper I'm thinking of couched it in terms of identifying experts, I think, under the premise that experts should both understand others' predictions and have a more accurate prediction themselves, and therefore should be weighted more heavily. But the upshot was the same: surprisingly correct answers were more likely to be accurate.

I might be remembering the details incorrectly, but my first thought when I saw this was "oh, I wonder if this is a new paper by that same group."
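If I'm reading the abstract right, the core idea is to ask people both for their own answer and for a prediction of how others will answer, then favor the answer that turns out to be more popular than the crowd predicted. A toy sketch of that idea in Python (the function name and data layout are my own, not anything from the paper):

    from collections import defaultdict

    def surprisingly_popular(responses):
        """Pick the answer whose actual popularity most exceeds the
        popularity the respondents predicted for it.

        responses: list of (answer, predictions), where predictions maps
        each option to the fraction of other people the respondent
        expects to choose it.
        """
        n = len(responses)
        actual = defaultdict(float)
        predicted = defaultdict(float)
        for answer, predictions in responses:
            actual[answer] += 1.0 / n
            for option, fraction in predictions.items():
                predicted[option] += fraction / n
        # "Surprise" = actual popularity minus predicted popularity.
        return max(actual, key=lambda a: actual[a] - predicted[a])

    # "Is Philadelphia the capital of Pennsylvania?" Most people say yes,
    # but the "no" voters correctly predict that most others will say yes,
    # so "no" is more popular than predicted and wins.
    votes = [("yes", {"yes": 0.8, "no": 0.2})] * 6 + \
            [("no",  {"yes": 0.9, "no": 0.1})] * 4
    print(surprisingly_popular(votes))  # -> no

The nice property is that the minority answer can win even when most people are wrong, as long as the people who are right can anticipate the majority's mistake.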


The problem is that there's nothing unambiguously non-typographic about the design. If they had distorted the symbols so much that they were outside the realm of normal Unicode characters, sure. But anyone with that font could reconstruct the logo.

Stuff like this works when the glyphs are recognizable enough, but only when the design also makes it clear that they're not to be interpreted literally. It fails here because the latter isn't clear.

So in this case it is typography. That's the problem.


Actually, that review just underscores a problem I have with modern design, which is a focus on aesthetic elegance and cleverness over real usability concerns.

I love design, so I don't mean to bash the field per se, but every field has its problems. With design, it's a lack of attention to usability as a scientific, empirical, psychological, cognitive, physiological kind of question. There's too much focus on cleverness and on theorized or actual aesthetics, and too little on real usability.

Design sits at this funny intersection of art and science, and it seems like too often the former is emphasized over the latter.

This design is a perfect example, and the review reinforces it. Where's the discussion of usability and confusion? Did they ever put the redesign in front of naive users and pose real-world tasks involving the logo? Did the review discuss any of that?

The logo redesign is great until you acknowledge that it co-opts symbols that have real meaning and can be ambiguous in that regard. Mozilla was well-intentioned but failed.

To me, this whole discussion is a perfect example of the foibles of the design world.


> Actually, that review just underscores a problem I have with modern design, which is a focus on aesthetic elegance and cleverness over real usability concerns.

Speaking as a full-time designer who graduated from a fine arts college (read: the least user-focused environment possible), this is completely false as a description of what I and all my peers do.

Designers don't set out to be "clever" – we're not programmers, after all. We set out to solve a visual problem of meaningfully conveying ideas and messages to a target audience, wide or narrow. The reasoning is pretty simple.

Readability (different from legibility) is an incredibly important part of designing with type, and your lede basically invalidates everything else you have to say, especially when it's followed by "I love design."


Damn, thank you for that link. It's satisfying to see something that's been on my mind so much lately laid out on the page.

The article and the actual paper being discussed were both fascinating and satisfying to me. Predictably, maybe, the paper offers a lot more than the summary article.

To your point, maybe, after skimming the paper, this comment in the Technology Review summary struck me: "Curiously, the same model accounts for both phenomenon. It seems that the pattern behind the way we discover novelties—new songs, books, etc.—is the same as the pattern behind the way innovations emerge from the adjacent possible. That raises some interesting questions, not least of which is why this should be."

It seems to me that at some level it should be impossible to really know whether something is a novelty or an innovation, in that what defines an innovation is, roughly, novelty exhausted over all possible observers. You always have a frame of reference (in this scheme, an urn), and what you observe is novelty. You might infer innovation if an observation (in this scheme, a new color) is new across all the observers (all the urns). If the observers are sufficient in number and sufficiently diverse, it becomes harder to determine whether the novelty is an innovation or not. Their example (as far as I can tell) assumes a single urn; the question of distinguishing a novelty from an innovation is like having numerous urns and estimating whether a new color is new only to that urn or new to all the urns, including urns that haven't been examined.
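To make that concrete, here's a toy simulation of my own (not the model from the paper): several observers, each with their own "urn" of colors they've already seen, drawing from a shared color space. From inside any single urn, a novelty and an innovation look identical; you can only tell them apart by checking every other urn.

    import random

    def simulate(n_observers=5, n_colors=50, n_steps=200, seed=0):
        """Count draws that are new to one observer (novelties) versus
        draws that are new to every observer (innovations)."""
        rng = random.Random(seed)
        seen_by = [set() for _ in range(n_observers)]
        seen_anywhere = set()
        novelties = innovations = 0
        for _ in range(n_steps):
            observer = rng.randrange(n_observers)
            color = rng.randrange(n_colors)
            if color not in seen_by[observer]:
                novelties += 1
                # The observer can't distinguish these two cases without
                # access to every other urn.
                if color not in seen_anywhere:
                    innovations += 1
            seen_by[observer].add(color)
            seen_anywhere.add(color)
        return novelties, innovations

    print(simulate())  # novelties >= innovations, always

Adding more observers in this toy setup widens the gap between the two counts, which is roughly the point: with only a local frame of reference, the distinction gets harder, not easier.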

The paper itself leads with some discussion of how this problem relates to statistical inference, which I find fascinating. They don't really get into it very much, but it leads to some interesting questions, like how to make inferences when the event/sample space/domain itself is random or unknown. Also relates to information-theory questions involving code alphabets that are unknown or indeterminate (for example, http://www-ee.eng.hawaii.edu/~prasadsn/patterns.pdf).
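For a flavor of what inference over an unknown sample space looks like, the classic Good-Turing estimator is probably the simplest example I know of (a standard textbook device, not something from this paper): it estimates the probability that the next observation is a species you've never seen, using only the counts of species you have.

    from collections import Counter

    def good_turing_missing_mass(sample):
        """Estimate the probability that the next draw is a species not
        seen so far: (number of species seen exactly once) / (sample size)."""
        counts = Counter(sample)
        singletons = sum(1 for c in counts.values() if c == 1)
        return singletons / len(sample)

    print(good_turing_missing_mass("abracadabra"))  # 'c' and 'd' are singletons -> 2/11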


I agree with others that everything has always been post-fact in a certain sense.

The real problem now is that opinions and attitudes are being driven so much by ideological group membership. That is, people are explicitly or implicitly making decisions on the basis of "is this the sort of thing that X sort of person would or should do?" rather than "is this the sort of thing that's in my best long-term interests?" They're taking for granted what's consistent with their ideological identifications, which often aren't flexible or nuanced enough to deal with the real world.

To be clear, I see this as happening on both sides of the political spectrum, although I admit in the US I see it as becoming more extreme with social conservatives.


I definitely see this happening on the left as well (with identity politics). Makes sense that the response on the right follows suit.


Yeah, being in my 40s and looking at that, my immediate thought was "oh, it seems I'm trying to figure out plan Z, even when I had pretty good plans A and B."

My advice from the other side is that there are too many unknowns to plan out your career. Fields shift in terms of content and culture, and you don't know what you like until you're really in the middle of it.

By a lot of metrics, I've achieved plan A, but found out, after a lot of work, that I am not a good fit for it and that plan B has a lot of similar problems. I never had a plan Z.

