I love how a business can focus on doing one thing well, recruit some awesome talent, and end up doing some great R&D and pushing the envelope in their specific field. It's great to see companies like this succeeding.
So a DDoS could be...
1) L3 packets you would just drop, but which still saturate your uplink (e.g. DNS amplification; see the sketch after this list)
2) a request for some static asset (img, html, octet...)
3) a request for a dynamic page (hitting your app farm)
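To put a rough number on #1: the nasty part of a reflection attack is the amplification factor. Here's a back-of-the-envelope sketch in Python (the packet sizes are illustrative assumptions, not measurements):

    # Back-of-the-envelope DNS amplification math.
    # Sizes are rough illustrative assumptions for a small spoofed query
    # and a large EDNS0/ANY response, not measured values.
    QUERY_BYTES = 64        # spoofed UDP query sent to an open resolver
    RESPONSE_BYTES = 3000   # large response reflected at the victim

    amplification = RESPONSE_BYTES / QUERY_BYTES
    print(f"amplification factor: ~{amplification:.0f}x")

    # At that ratio, 1 Gb/s of spoofing capacity lands ~47 Gb/s on your
    # uplink. Dropping the packets at your edge doesn't help; they have
    # already crossed (and filled) the link upstream of you.
    attacker_gbps = 1.0
    print(f"traffic at victim: ~{attacker_gbps * amplification:.0f} Gb/s")

The point is that #1 can't be solved by filtering on your own box; the bits have to be absorbed somewhere with more capacity, which is exactly what a distributed network buys you.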
CloudFlare provides something like an automatically populated CDN that includes defense against #1 and #2, and they run their network on distributed data centers and servers with 10GE NICs and SSDs. They said 23 data centers (locations), but didn't mention how many of these servers they run.
Apparently they are able to run HTTP sessions over IP ANYCAST without any issue (I had read claims you could only do UDP), so that's pretty cool. BTW - I wish it were easier to set up ANYCAST on your own... it seems like a major investment at the moment.
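For what it's worth, you can convince yourself that plain TCP works over anycast with a quick experiment: complete TCP handshakes against a well-known anycast IP and note that they succeed, with an RTT that reflects the nearest node. A minimal sketch, assuming 8.8.8.8 (Google's anycast DNS resolver, which accepts TCP on port 53) as the test target:

    # Quick sanity check that TCP works against an anycast address.
    # 8.8.8.8 is anycast and speaks DNS over TCP on port 53; the
    # handshake completes with whichever PoP your routes lead to.
    import socket
    import time

    ANYCAST_IP = "8.8.8.8"
    PORT = 53

    for attempt in range(3):
        start = time.monotonic()
        with socket.create_connection((ANYCAST_IP, PORT), timeout=5) as s:
            rtt_ms = (time.monotonic() - start) * 1000
            print(f"handshake {attempt + 1}: ~{rtt_ms:.1f} ms")

    # A connection only breaks if BGP reroutes you to a *different* PoP
    # mid-flow, which is rare over the lifetime of a typical HTTP request.

The "UDP only" claim presumably comes from that mid-flow reroute risk, but for short-lived HTTP connections the routes are stable enough in practice.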
It's interesting they don't need more CPU power -- I would have expected more CPU to be deployed, since having minimal headroom could itself provide an attack vector.
Another thing I'm curious about is how they distribute load between those cute servers they built. Do they segregate specific customers/domains onto specific IPs and then route those IPs to specific boxes in each data center? Or does everything come in "equal" and then get divided up round-robin/least-load by some massive load balancers? Basically, I wonder how much of the load balancing they try to do "client-side" or "client-based" versus how much they do strictly on the back-end, and what devices they are using for it.
As far as CPU goes, there aren't a whole lot of cycles dedicated to decoding and replying to network packets, and serving static content is trivial. The interrupts from high volumes of network traffic (usually small packets) are arguably the biggest impact on the CPU, which is why you get network cards that offload L3 processing far more efficiently than your CPU can. Network appliance vendors rely on them to do things like transparently filter traffic on 40 Gb/s interfaces in real time, which would probably be impossible with a general-purpose CPU alone.
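You can actually watch this on a Linux box: /proc/interrupts lists per-CPU interrupt counts for each IRQ line, and under a small-packet flood the NIC's rows climb fast. A rough sketch (matching IRQ lines on the substring "eth" is an assumption about your interface naming; adjust for your driver):

    # Sample the NIC interrupt rate on Linux from /proc/interrupts.
    # Matching on "eth" is an assumption; your IRQ lines may be named
    # after the driver or interface instead (e.g. "enp", "ixgbe", "mlx").
    import time

    def nic_interrupt_total(substr="eth"):
        total = 0
        with open("/proc/interrupts") as f:
            cpu_count = len(f.readline().split())  # header: CPU0 CPU1 ...
            for line in f:
                if substr in line:
                    fields = line.split()
                    # fields[0] is the IRQ number; the next cpu_count
                    # fields are the per-CPU counts for that line.
                    total += sum(int(x) for x in fields[1:1 + cpu_count])
        return total

    before = nic_interrupt_total()
    time.sleep(1)
    after = nic_interrupt_total()
    print(f"~{after - before} NIC interrupts/sec")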
They mentioned in a previous post that they mainly rely on L3 routing for load balancing. And I don't remember if they said so explicitly, but they hinted in the post that any frontend box can serve any customer's traffic; it just pulls the customer data from their main storage and caches it on the frontend as needed, as any good proxy does.
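That "pull from main storage and cache at the edge" pattern is just a pull-through cache. A toy sketch of the idea (the ORIGINS map and dict cache are stand-ins for their storage tier and SSDs; this shows the general pattern, not their implementation):

    # Toy pull-through cache: any frontend can serve any customer,
    # because a miss is filled from the origin and kept locally.
    # ORIGINS and the dict cache are stand-ins for a real storage
    # tier and SSD-backed cache, not CloudFlare's actual design.
    from urllib.request import urlopen

    ORIGINS = {"example.com": "http://example.com"}  # hostname -> origin
    cache = {}  # (hostname, path) -> response body

    def serve(hostname, path):
        key = (hostname, path)
        if key in cache:
            return cache[key]                    # hit: no origin traffic
        with urlopen(ORIGINS[hostname] + path) as resp:
            body = resp.read()                   # miss: pull from origin
        cache[key] = body                        # keep it for next time
        return body

    print(len(serve("example.com", "/")), "bytes (miss)")
    print(len(serve("example.com", "/")), "bytes (hit)")

The nice property is that the frontends stay stateless from a routing point of view: L3 anycast can send a client to any box, and the cache warms itself wherever the traffic actually lands.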