The actual reason here, implied but not stated outright in that one, is that on a public platform like Discord, having only numbers to discriminate between users makes it trivial to impersonate someone else. Obviously you can still do some of this with unique usernames (slight misspellings, hard-to-see characters like periods, etc.), but those strategies are harder to execute at scale and easier to block in one go, versus being able to pick from arbitrarily many post-username numbers.
Not saying that wasn't ONE of the reasons, but the main reason was really that a large chunk of users had no idea they even had a discriminator, since it was added on top of your chosen username. "Add me on Discord, my username is slashink" didn't work the way people expected and caused more confusion than it solved. This wasn't universally true either: if you came from a platform like Blizzard's Battle.net, which has had discriminators since Battle.net 2.0 came out in 2009, it was a natural part of your identity. At the end of the day, more users expected usernames to be unique the way they are today than expected discriminators.
Addressing that tension was the core reason we made this change. We are almost 3 years past this decision ( https://discord.com/blog/usernames ) and I personally think it was a positive one.
That's not the perfect defense you think it is. Plenty of robots.txt files[1] technically allow scraping their main content pages as long as your user agent isn't explicitly disallowed, but in practice the sites are behind Cloudflare, so they still throw up a Cloudflare bot check if you actually attempt to crawl.
And forget about crawling. If you have a less reputable IP (basically every IP in a third-world country is less reputable, for instance), you can be CAPTCHA'd to no end by Cloudflare even as a human user on the default setting, so plenty of site owners with more reputable home/office IPs don't even know what they subject a subset of their users to.
> If you have a less reputable IP (basically every IP in a third-world country is less reputable, for instance), you can be CAPTCHA'd to no end by Cloudflare even as a human user on the default setting, so plenty of site owners with more reputable home/office IPs don't even know what they subject a subset of their users to.
Or if you have a less common browser like Firefox with some moderate privacy settings/extensions.
That is funny, because on this page there is a warning block with the following text:
Refer to Will Browser Rendering bypass Cloudflare's Bot Protection? for instructions on creating a WAF skip rule.
And "Will Browser Rendering bypass Cloudflare's Bot Protection?" is a hash link to the FAQ page, which surprisingly doesn't have anything available for this link entry.
Is it because it was removed (/hidden), or because it won't be available until everyone forgets the "we are not evil, we are here to protect the internet" line?
Most websites, particularly those behind Cloudflare, are very restrictive even to crawlers that obey robots.txt. Proof: a ton of my time over the last year, and my crawlers very carefully obey robots.txt.
It's hard to see how this isn't extorting folks by offering a working solution that, oh, cloudflare doesn't block. As long as you pay Cloudflare.
Perhaps I'm overly cynical, but I'd be quite surprised if cloudflare subjected their own headless browsing to the same rules the rest of the internet gets.
> Most websites, particularly those behind Cloudflare, are very restrictive even to crawlers that obey robots.txt. Proof: a ton of my time over the last year, and my crawlers very carefully obey robots.txt.
The docs are pretty equivocal though:
>If you use Cloudflare products that control or restrict bot traffic such as Bot Management, Web Application Firewall (WAF), or Turnstile, the same rules will apply to the Browser Rendering crawler.
It's not just robots.txt. Most (all?) restrictions that apply to outside bots apply to cloudflare's bot as well, at least that's what they're claiming. If they're being this explicit about it, I'm willing to give them the benefit of the doubt until there's evidence to the contrary, rather than being a cynic and assuming the worst.
This makes no sense as an explanation. They changed the architecture not to infringe on the patents. So the patents are not stopping them from opening up now.
Apple and Google and any app store provider have the ideal goal of zero friction for real, valuable apps and infinite friction for bad, scam apps. They can never hit that ideal, but when you're getting rage from both ends it's likely that you are in a place on the continuum that is far below ideal—you make it a huge pain for real, valuable apps and too easy for bad, scam apps. This appears to be where the Apple store is, at least, and it's an unfortunate place to be. They may be doing their best, but it sounds like their best has some pretty significant room to improve.
Epic didn't publicly criticize Apple or testify against them in court to get into this situation, they willfully and deliberately broke the legal developer agreement that they signed to get press coverage (they could have filed suit on the anti-steering rules regardless).
Not only did they do this, they then filed suit to say that Apple shouldn't have been allowed to suspend their account—and lost (though arguably won the broader war since anti-steering is currently dead).
There are a ton of things Apple is doing wrong around developer stuff and anti-steering rules and all of it, but I dunno, I feel pretty good about them saying to a specific developer, “actually, you've shown yourself to be willing to ignore the legal agreements you sign, so we're not going to be doing business with you any longer”. Epic's stunt should cost them, if they then want to talk about how they've martyred themselves for developers everywhere. Good work, but a martyr who comes back to life isn't really a martyr, right?
While I don’t claim to know the finer points of the law, I believe the judge was pretty crystal clear that Apple was 100% within their powers to kill the developer account that Epic used to do this.
You can reflect TS types out of it. There are third-party libraries to generate JSON Schemas from Zod objects, which is helpful if you have non-TS clients you want to support.
Ajv has supported that for at least a couple of years afaik, and consumes JSON Schema natively which is good for consuming other APIs, not just feeding external clients—its base data format is interoperable, basically.
That’s mostly why I’m curious about the lack of mention :)
That seems to show that you still have to bring your own types for JSON Schema, as evidenced by their example both explicitly defining the interface and then passing it as an argument.
I wasn’t aware, however, of JSON Type Definitions, which hadn’t been invented the last time I released software with Ajv; but it does appear to be able to reflect types from those as well as validate against them, so thank you for showing me that.
Ah, I think I misunderstood you. Yes, this does mean that you need something else to define the typescript to json schema conversion—either by using another tool or by starting from json schema and getting to the typescript types you want.
Feels like it’s worth that trade off to have a consistent experience consuming other APIs as well, but I could be wrong; I think so far I’ve only used it when I need to consume APIs rather than produce them.
I'm not 100% sure; they most likely scraped the author emails of all NPM packages that (transitively) depend on ajv. Here's the GitHub issue from back then: https://github.com/ajv-validator/ajv/issues/1202
Just to make it explicitly clear, I only received one email. Reading my earlier comment back, it made it seem like there may have been more. It could definitely have been worse!
Thesis is a cryptocurrency venture studio whose mission is to empower the individual—we seek, fund, and build products using cryptocurrency and decentralized technology that further this mission. Current and past Thesis projects include Fold (2014), tBTC (2020), Taho (2021), Etcher (2023), Embody (2023), and Thesis Defense (2024). Investors in the company include Andreessen Horowitz, Polychain Capital, and Draper Associates, among others. We are a remote-first company, led by founders who have been operating in the cryptocurrency and web3 space for a decade (actually for a decade ;)).
Our current focus is on building Acre, a Bitcoin-in Bitcoin-out BTC staking platform, and Mezo, an Economic Layer for Bitcoin. Across the board, we are focused on building a new home for Bitcoin holders to cultivate Bitcoin and grow wealth together. Our projects are built with an emphasis on creating something useful with a clear value proposition rather than a perfect technical machine that provides unclear value.
We’re a fun, down-to-earth, fast-paced and highly collaborative team looking to expand our engineering (Go, Solidity, and TypeScript) and product capabilities (amongst other disciplines) and this is where you come in. Join a team that strives for excellence and help us build technology that enables the integrity and empowerment of the individual.