The largest DDoS attack to date, peaking above 398M rps (cloud.google.com)
751 points by tomzur 7 months ago | 468 comments



Related ongoing threads:

The novel HTTP/2 'Rapid Reset' DDoS attack - https://news.ycombinator.com/item?id=37830987

HTTP/2 Zero-Day Vulnerability Results in Record-Breaking DDoS Attacks - https://news.ycombinator.com/item?id=37830998


Who has an incentive to carry out these DDoS attacks? Why would anyone be willing to spend large amounts of money and develop a sophisticated attack against corporate cloud infrastructure? It seems like the only reasonable answer is foreign governments. But still, what is the result - you inconvenience American tech companies and their customers for a few hours? This happens all the time, so clearly someone finds it worthwhile. Can anyone help me understand?


I've been working on anti-DDOS off and on for 20 years now. The answer is sometimes government actors, but oftentimes scammers in Eastern Europe. They do these big attacks for street cred amongst the botting community.

They then use their street cred to get paid by less scrupulous actors to attack their rivals. Sometimes the people paying are governments, sometimes just shady companies. For example, last year there were a lot of crypto companies attacking each other's websites.

Most of the people who do this have a lot of technical skill but not a lot of opportunity to get paid for it based on where they live or the circumstances of their upbringing.


Very useful, thanks. Do you know roughly what sort of resources, in time, money, and compromised machines, it takes to do something like this? (Order of magnitude.)



So a single machine can do ~ 20,000 rps?


Depends, but there seems to be a multiplier effect at play with this attack. A single client request may result in 100x the work for the server. More details here: https://cloud.google.com/blog/products/identity-security/how...


> Another advantage the attacker gains is that the explicit cancellation of requests immediately after creation means that a reverse proxy server won't send a response to any of the requests. Canceling the requests before a response is written reduces downlink (server/proxy to attacker) bandwidth.

How is this an advantage? Can someone explain please?


It's an advantage because you as a botnet client have made the server side do extra work. You sent two packets, one to request a new connection, and a second to immediately cancel the request. The server on the other hand sees a connection request and does some work like allocating memory and fetching the resource you requested. Once the server starts sending the response back to the client via the reverse proxy, the reverse proxy notices the request is no longer current and just drops the response on the floor. As a result, you made the server do some amount of work and you don't have to worry about saturating your internet connection. They call this a magnification attack because for the cost of two requests you made the server do some multiple of work.

You could add some smarts to the server or reverse proxy that delays starting work in case a cancellation request quickly arrives. This is probably part of the mitigation work they refer to in the article.
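
To make that concrete, here's a minimal sketch of the delay idea in Python asyncio (my own illustration, not anything from the article; the grace period and the forward_to_backend helper are made up):

  import asyncio

  CANCEL_GRACE = 0.01  # grace period in seconds (arbitrary value for illustration)

  async def forward_to_backend(request):       # hypothetical stand-in for the real upstream call
      await asyncio.sleep(0.1)                 # pretend the backend does expensive work
      return "response for " + request

  async def handle_stream(request, cancelled: asyncio.Event):
      # Hold the request briefly; if the client resets the stream inside the
      # grace window, the backend never sees it at all.
      try:
          await asyncio.wait_for(cancelled.wait(), timeout=CANCEL_GRACE)
          return None                          # stream was reset: drop it, do no work
      except asyncio.TimeoutError:
          return await forward_to_backend(request)   # no reset arrived: proceed normally

The cost is a tiny bit of added latency on every legitimate request, which is why this is only a heuristic and not a free win.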


The attacking system is shooting a firehose of requests at the target system, but doesn't have to deal with handling any responses being sent back to the requesting systems.


Makes sense, thank you!


This is sort of an aside based on something I read in the article but does anyone know why the RFC guidelines say that you should first send an informational GOAWAY that does not prevent opening new streams when gracefully closing a connection?

They point out in the article that it's a better practice to immediately limit stream creation when you detect abuse - not wait for a round trip to complete first. I'm sure there's a good reason for the original guidelines; I'm just trying to get it and haven't found anything clarifying through Google. Was it specified before the rise of modern attacks?


How do you act on a GOAWAY you haven't received? And allowing the server to unilaterally stop supporting things can lead to weird edge cases.

After all, you could say any client ignoring a GOAWAY is either bugged or malicious, but certainly not until you get confirmation they got it.


Yep: "a client can send a RST_STREAM frame for a single stream. This instructs the server to stop processing the request and to abort the response, which frees up server resources and avoids wasting bandwidth."

Pretty clever


Of course. Depending on the machine and on the network, a single modern machine can do even up to a few million RPS. This is routinely used in benchmarking tools.

Here with the "attack", it's simply exploiting the ability of HTTP/2 to compress requests and reduce them to just a few bytes, meaning that within a few kilobytes of data you can easily have hundreds of requests. Again this is not new and was already being discussed in 2012 about SPDY's use of zlib to compress requests.

The extra stuff that seems to have made this attack "new" for such service providers is that attackers took care of closing their requests so as not to have to wait for a response and be able to fill the wire with a flow of requests. Again this has been known from the inception of HTTP/2 and is routinely met by those dealing with proxies which time out and send headers followed by RST_STREAM.

Here it makes noise because new records were broken, and likely because the stacks in place were not properly designed, so they failed to check the real number of streams and only focused on protocol validity...


If this attack takes only 20,000 machines to orchestrate, how is it the "largest DDoS attack to date"? It was to my understanding that some botnets and organizations have placed far more than 20,000 machines under their control before or currently. Could you explain a bit better?


I think they're measuring "largest" by the number of requests per second.


But that's part of my question. If other botnets have previously been created with many more than 20,000 controlled machines, why has nobody else before orchestrated an attack with so many requests if it only requires 20,000 machines or so to pull off?


This is exploiting a new technique. This technique wasn't known previously, or wasn't viable when HTTP/2 wasn't widespread. This technique is a ~100x multiplier on the impact any size botnet can have. A previous botnet might have needed 20,000,000 machines to achieve the same impact.


Thanks for explaining in such a straightforward way, appreciated!


They compromise home internet users and/or their IOT devices mostly with scripts and malware. So the investment for the scammer is mostly in researching exploits and seeding their malware. Most of them just use exploits created by others, but the best ones with the biggest networks are actually very capable security researchers. Given different circumstances they could probably be highly paid engineers.


Some of their effort goes toward maintaining an exclusive hold on their botnet too. Patching them while maintaining control or blocking the vulnerability they used from being utilized by others.


Wow, I guess sometimes the real world is as cool as the movies


Not cool at all, just a waste of talent.


They should work to increase ad revenue by subliminal messaging and lootboxization instead!


i feel like this is 99% of capitalism


Crypto companies attacking each other's websites?! Color me surprised …


Seems like attacking Google would be a bad target for street cred as compared to govt websites.


Surely bringing down Google is a bigger technical achievement than some random government website maintained by someone who stumbled into their job after 20 years doing mid level government organizational work.


Heck, I'd imagine that making headlines and having Google benchmark your attack would bring some amazing street cred.


Darknet guerrilla marketing. Definitely seems to have worked.

Now we need the SEO content side: "How we hit Google with 398M RPS".

"... you can do this manually, but our product makes it as easy as a sign up and API call. Talk to us about pricing. [Python API example].


Yes, but they are clearly going to fail to bring down Google.


It doesn't matter if you fail. The cred comes from how much bandwidth and resources you can soak up.


Well - they clearly were successful enough to get a thread on hacker news……


Right, so clearly the ability to bring down Google is not the point.


Aim high and go out with a bang choom


Nah it's even better because they're considered capable defenders so it's harder.

What I'm not sure of is why Google published this. I can't figure out what their strategy is here. We never published about the attacks we absorbed because we didn't want them to know our capabilities.

Unless this is marketing for Google Cloud?


This is certainly marketing. If they sell DDOS protection, then announcing that they stopped the largest attack ever is an ad.


Sounds like a symbiotic relationship to me. The attackers get to advertise their capability for pulling off attacks, and Google gets to advertise their ability to stop them.


Almost all (but not all) of these attacks are based on some kind of problem that leads to amplification. Advertising that people should fix these points of exploit helps everyone on the internet.


False flag? :)


Maybe Google is responsible for the attack, to be able to publish this blog post! </tinfoil-hat>


If Google truly went rouge, they could turn all those Chrome installs and Android devices into one gargantuan botnet.


Google is already partly rouge, at least in their logo.


What capabilities did this post reveal the existence of? Not many, beyond it having been mitigated somehow and that it didn't cause an outage. The attackers knew that already, because they'd obviously be able to observe the system during the attack.

As for why to write about it, it's a new type of attack that resulted in almost an order of magnitude increase in attack size. That's interesting and newsworthy by itself, and publishing a concrete number gives people an idea of the size of the problem and the trendlines.

This is also something that needed a CVE, so it was going to be very public anyway. If nothing is written about it, at a minimum Cloud customers will be flooding their support reps with questions about whether the vulnerability applies to them.


> why Google published this

Besides publicity, there is also link to a list of advisories that may be of interest to other cloud operators and users.

https://nvd.nist.gov/vuln/detail/CVE-2023-44487


> Unless this is marketing for Google Cloud?

If you read the article, there are plenty of marketing remarks in there to get you to use Google Cloud


CDN is the ultimate solution for DDoS, so any report about DDoS finally becomes an ad for CDNs


> Unless this is marketing for Google Cloud?

That seems likely here if they're claiming this is the largest DDOS ever.


> We never published...

We? Netflix or Reddit? I know for a fact that Amazon doesn't.


Nowhere that I've ever worked published about attacks. We didn't want to validate the attackers.

At eBay/PayPal we filed patents on our DDOS shield, since it was as far as we knew the first one to exist, but that was about the only public information on it.

At reddit and Netflix we didn't actually have to deal with it because AWS just absorbed (or mitigated) it before it ever hit us. We only had to deal with L7 attacks, which we had shields in place for.


They're not attacking Google, per se. Just the Google Cloud platform that hosts govt sites, Discord channels, gaming servers, etc.


>> Most of the people who do this have a lot of technical skill but not a lot of opportunity to get paid for it based on where they live or the circumstances of their upbringing.

LOL. No, there are plenty of legitimate enterprises, as well as opportunities to immigrate. Especially in tech. These guys are just criminals.


Have you tried that yourself? Especially as someone who has the skills but doesn't speak the language.

I know people who can't relocate because of communication issues and/or cultural differences.

No they aren't criminals, but they are definitely underpaid compared to those who managed to relocate.


If they operate botnets, I think it's fair to call them criminals


probably, or if they aren’t directly criminals they’re probably facilitating criminals. but if you were very particular about it you could in theory set up in a country which doesn’t have treaties with any of the countries in which the victims operate.


It is only criminal if the botnets are used to steal something. DDoSing just for fun is at most an annoyance.


Yeah, hospitals can't stand that kind of thing...


They should have better IT.

Blaming it on the people that knock them off will not improve the situation.


So your logic is: condemning people for criminal behavior is not useful because it removes the incentive for their victims to be vigilant against that criminal behavior. Am I getting that right?


No. My logic is that when it comes to computers, it's a battle of the brains: who is smarter than the other one. And the result is usually a novel way of attacking someone, and then finding novel ways to defend and on and on. So that's why I strongly believe in hiring smarter people to defend your castle, rather than swatting people smarter than you because you're not as smart as the other side.


Not to condone the DDoS activities in the least, but that's just ignorance. Which prosperous country accepts even remotely as many legal immigrants as apply / would want to move there? And a lot of people / political parties are constantly lobbying for less immigration :-/


PR. Attack Google or cloudflare. Wait for them to publish a blog post about the biggest attack ever seen, then tell potential customers of your botnet that you can launch a bigger attack than anyone else and point to the above blog post.


And Google and Cloudflare also get good PR because of how insanely good they are at deflecting those huge attacks. It's a win-win situation here... oh wait /s

(the /s is just on the "oh wait" part, not the whole post)


The botnet is probably the critical thing. Even if the PR (or "avenge the global south", or whatever) value might not be enormous, the cost to a bad actor of having other peoples' computers do something is almost negligible.


Doesn't using your botnet expose your botnet IP addresses/devices?


Yes, but currently that has zero consequences. Say you infect 500,000 Windows XP machines or consumer routers; the owners of those devices aren't going to be informed, nor are their ISPs. In many cases the manufacturers of those devices also aren't going to provide security updates, but those probably weren't going to be applied anyway.


Are you positive that "tell nobody" is the mitigation strategy that Google used here? They could have easily asked router vendors to patch their devices, asked ISPs to blackhole those customers until they're patched, etc.


Patch what though? They know that they're getting hit with unprecedented traffic, not how those computers were infected.


It's mostly not infected computers, but rather poorly configured proxies that are open for anyone to bounce malicious traffic through. Convincing everyone to clean up their open proxies is a long-term, hard problem. But I plan to tackle it soon....


How? I suppose the most effective way is to have those proxies attack each other. But don’t, it’s likely illegal.


Get a few companies to agree that open proxies are a scourge that needs to be stopped. They each apply some action to open proxies (user-facing messaging, loss of functionality, captcha, or complete block), and the users of those proxies will get the problem fixed.

The hard part (and it truly is hard!) is convincing a few companies to do this. It risks user complaints in the short term, to solve a problem that may not be very acute for the largest companies (who can simply absorb these attacks).


How about downgrading all connections from said proxies to HTTP/1.1? This can be done in coordination, but it ought not to be too hard to embed such 'graylisting' functionality in a webserver.

(No I don’t expect any response but I am just leaving this thought for those who stumble on this thread in the future).


the most efficient way would be to write a script that gains root on those open proxies and then fixes the issue.


Effective or efficient? Would seem rather inefficient to spend time researching all the possible ways to gain root on x number of servers, finding an exploit, crafting some plan to execute it, keeping your prints clean, etc etc


What way would be more efficient?


So you're saying Google and Cloudflare, just as an example, should block consumers of other ISPs because they run "unpatched" software or they have malware running on their devices? Lol, this is a very absurd and narrow-minded view of how the internet works. You deal with the traffic, you don't randomly block eyeball networks because they're attacking you.


> you don't randomly block eyeball networks because they're attacking you.

ISPs do this literally all the time. They sell services that do this.


Google should start using their ad network to silently update people’s security!


Uh, no thanks from this user.

Also, sounds illegal.


Definitely illegal in the US.


> the owners of those devices aren't going to be informed, nor are their ISPs

not necessarily true


But ISPs that care enough to inform and even isolate their infected customers are few and far between.

Shout out to Dutch ISP XS4ALL who was (is?) very very strict and active in this space.


Anyone can claim that, there's no link to a specific actor


I'm guessing you would do this in advance - "pay attention to tech news next week - our botnet will unleash hell"


Step 1: Put message on blockchain beforehand with exact date/time and characteristics of DDoS

Step 2: Execute DDoS

Step 3: Prove to others you are responsible by using private key


There’s (as ever) no need for a blockchain, people do this with twitter and sha256 all the time. Hash the message, post the hash, wait for prediction to happen, post the full message.
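
For anyone who hasn't seen it, the hash-commitment trick is a couple of lines of Python (the prediction text below is obviously made up):

  import hashlib, secrets

  def commit(message: str) -> str:
      # Publish only this digest before the event; it reveals nothing useful.
      return hashlib.sha256(message.encode()).hexdigest()

  prediction = "nonce:" + secrets.token_hex(16) + " we will hit example.com next Tuesday"
  digest = commit(prediction)      # post only this now
  # ...later, post the full prediction; anyone can re-hash it and compare:
  assert commit(prediction) == digest

The random nonce matters: without it, a short or guessable message can be brute-forced from the hash alone.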


Any examples of major predictions verified in this way?


Why not attack a target that can actually be harmed? Are they afraid?

It's not obvious what the value is of having the largest ineffective attack.


Why not roll a couple defenseless grannies in the streets for pocket change, rather than throw rocks at the cops and then get away unscathed?

One gets you more money in the short term. The other one gets you more street cred - which gets you more money in the long term.


Well in this case it seems like they blew their "0-day" and Google worked with other providers to patch this type of attack.


If they're unsophisticated, it's for clout and "street cred" in hacking communities, no different than tagging a freeway overpass with graffiti.

If they're advanced, they are doing it to test capabilities and responses. The Taliban used to pay kids to light off firecrackers outside the base to check defensive TTPs. It also had the effect of desensitizing people to the sound of gunfire.

Really good adversaries know how to accomplish the latter while appearing as the former.


Or... It's for fun. Are the 90s/00s that long ago? Not everything is about money and terrorism.

Bring back CotDC, MoD, even lulzsec. I miss the days of the internet being the open seas and everyone having their own fun on it, user beware.


You can (or could, my information is old) pay botnet owners a few hundred bucks to disrupt the servers of people you don't like. An example would be ruining a match for a competing game clan. There's a surprising amount of this kind of petty bullshit going on in the world.

With the Mirai botnet, some of the creators had a DDOS mitigation company as well: they'd sell one party the weapon, and sell another party the defense against that weapon.

Sometimes it's for the street cred, or the lulz, or just the challenge of building a botnet.


Disruption is part of it for sure, but often big, aggressive DDoS come hand-in-hand with other attacks.

Seen it happen with big DDoS on clients. Furaffinity, one of the larger furry websites, and a constant drama magnet, was a client at a former job. They got DDoS'd hard, and in between scripted DDoS hits the attackers slammed the hell out of their web applications to find vulns and do credential stuffing.

As in blast em, lighten it up just enough to get a ssh or nmap through a few times, blast em again, and repeat until they got in.

It's also why you want an out-of-band solution that doesn't touch your infra much.


> in between scripted DDoS hits they slammed the hell out of their web applications

I've heard this before, but I still don't understand what purpose the DDoS serves here. Distraction, so the actual attack drowns in the noise?


For a recent and similar attack at scale, the authors of the botnet software were from an American security company who sold DDOS mitigation solutions (https://en.wikipedia.org/wiki/Mirai_(malware)).


Google keeps showing me a CAPTCHA and telling me I'm part of the Mirai botnet. I'm using Cloudflare WARP so I have an IP from Cloudflare.


You don't need a lot of money or resources to pull off one of these. Code is on Github: https://github.com/649/Memcrashed-DDoS-Exploit

Also the participants are sometimes innocently recruited victims for the attack. I blame insecure app defaults.

The trend since 2015 is to get worse, as you will see in the bottom layer of this graph: https://www.digitalattackmap.com/


This is a novel ddos attack. Did you all even read the article?


I did. I replied to the OP's question. And that was about DDoS attacks in general, not "HTTP/2 Rapid Reset attacks".

> Who has an incentive to carry out these DDos attacks?

Did you read the comment I was replying to?


Yes, “these attacks” referring to the sophisticated novel attacks under discussion in the article. No need to be defensive, just read it next time.


Are there DDoS attacks that are not sophisticated in form or execution? :-)


ping


For example, a certain group decides to short a stock. They DDoS the company and "leak" it to the press. The bad press negatively impacts the share price. At least this was the way some time ago when I had to deal with such attacks.


I mean, you can ask them: https://t.me/s/noname05716eng - the chat of the NoName group, skids doing some DDoSing. They pool their bots through a community of like-minded people (Russians supporting the current government, murder, rape, etc., you know).


I've never contemplated the cost of a DDoS attack. I guess there are the upfront setup costs to secure the software and hardware that will execute the attack, but are you speaking more about the costs on the day of the attack? Are those costs trivial - basically marginal costs, I suppose?


To be effective, you need to either be prepared to hide behind Google, Cloudflare or AWS, OR you need some pretty expensive deal with your (large) ISP who can (quickly) filter on their edge.

Sitting at the end of whatever network, you will not be able to do anything against a sufficient volume attack.


Google?

If you analyze the situation from the perspective of "Who benefits from it?", then the answer is clearly: Google benefits from it (they are so good, they can mitigate gigantic DDoS attacks). So, I don't think it's that crazy to think this is all a publicity stunt.


I was thinking the same thing. Especially when the bottom of the article is basically a sales pitch for their own services. What they are saying is true and correct even if they aren't behind it (and I really doubt they are the culprit).

"With or without patches, organizations would need to make significant infrastructure investments to keep services running in the face of attacks of any moderate size and larger. Instead of bearing that expense themselves, organizations running services on Google Cloud can take advantage of our investment in capacity at global scale in our Cross-Cloud Network to deliver and protect their applications."


I'm just guessing here but it could easily be stock market manipulation


Why do you think Finland or Spain might attack USA companies?


I don’t think either of those countries would attack US companies. Obviously I would suspect adversaries instead of allies


The USA sees all countries, even allies, as adversaries, and always has ... hence the deep paranoia the USA has, lack of trust, lies, coercion, and spying (the list is longer but time is limited).


Okay. In either case I doubt Spain or Finland is DDoSing Google


A few hours of inconvenience in corporate cloud infrastructure has a result too: large amounts of money lost. So the most obvious incentive is simply ransom.


DDOS protection is a billion dollar a year industry.


for the lolz


My gut instinct is that this is nation-state initiated.


If I had the technical prowess to do this, I would do it just for the fun of it. I mean, why not? Anarchy is fun.

I'm pretty sure someone will find a way to take down GOOG/AWS/Azure/etc through a DDoS so large nothing will work for anyone.


At a previous company, we were subject to semi-frequent attacks (of a much smaller scale). The operating assumption internally was that it was a competitor trying to undermine us, but it remains a mystery.

Anyone involved in these types of attacks (at internet-infrastructure scale or targeting specific companies) brave/crazy enough to create a throwaway account and tell HN about the motivations?


We had a similar issue and assumed it was script kiddies having fun. Turns out someone (raises hand) wrote a really bad microservice whose inefficient queries sometimes triggered all our alerts.


We had that except it was our own frontend developers.

We also had some actual attacks, so we made a system that detects anomalies (like more than 50 rps per IP) and raises an alert.

...which was thwarted by frontend developers again, as they loaded a few hundred tiny icons at once, which triggered that alert routinely; only through HTTP/2 multiplexing had their idiotic design patterns not bitten them before.
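
A toy version of that kind of alert, for the curious (sliding one-second window per IP; the 50 rps threshold is just the number mentioned above, and none of this is the actual system we ran):

  import time
  from collections import defaultdict, deque

  THRESHOLD_RPS = 50                      # per-IP limit, as in the comment above
  windows = defaultdict(deque)            # ip -> timestamps of recent requests

  def record_request(ip, now=None):
      """Return True if this IP just exceeded the one-second threshold."""
      now = time.monotonic() if now is None else now
      w = windows[ip]
      w.append(now)
      while w and now - w[0] > 1.0:       # drop entries older than one second
          w.popleft()
      return len(w) > THRESHOLD_RPS       # caller raises the alert / pages someone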


Reminds me of a support ticket I had to investigate recently.

Performance metrics were taking a dive and triggered automated alerts - average response time jumped from 200ms to 2000, 8000, and eventually began approaching 15000, at which point requests were timing out all over the place. At first I was wondering if my recently-deployed MR was responsible, but upon further investigation, it was from one developer on another team doing two very dumb things with their application prototype:

- Constantly retrying the same query with no filtering and setting the maximum allowed page size

- Sending massive quantities of small queries for specific individuals (50-100 per second) which quickly hit the rate limit, and immediately resuming after the rate limit expired

Some developer outreach was required...


Hehehe, entirely unsurprising :)


A local hosting company DDoSed local businesses that had IT infrastructure and then advertised their hosting solution with DDoS protection.


The universities in Sweden were attacked by "Turkey" after the big quran-burning scandal. They had some twitter account bragging about it. Was pretty evident it was Russia.


I've heard stories about attacks where the target is a subsystem but in order to avoid drawing attention to it they attack the entire network.


I had a customer that was getting DDoSed by competitors. The competitors likely didn't know they were doing a DDoS: they were aggressively scraping product listings without any delay or rate limiting, and it effectively DDoSed the site. They weren't trying to make the target site slower, but were trying to get data at a rate that made the target's servers uneconomical for their actual paying customers.

This kind of attack is nothing like the actual DDoS attacks, but it's a lot more common in my experience, and also relatively easy to mitigate with something like Cloudflare or Akamai (which is what I'd recommend to my customers).


protection rackets by companies you'd only find on places like lowendtalk


Sure, I’ll spill the beans. Some people think it’s related to Gaza or Ukraine but it’s not. We just really don’t like Google, we are trying to shut it down so we can bring back Altavista.


Made me wonder - if Google wasn't there and Altavista was the incumbent, would it be any different, or was the enshittification of search inevitable?


>... or was the enshittification of search inevitable?

My bet is on the latter. Enshittification is a direct product of greed. No craftsperson or creator I know of goes into something they enjoy creating with the intent to make it this monstrosity of money extraction. Most creators have a drive for their creation to be shared and experienced by many.

Yes, you may want to get a reward in the process, and for some creators, their motives may change over time if they see an opportunity to turn their creation into a wealth machine for themselves so they can do whatever after.

Enshittification, I believe, is a secondary effect: something becomes successful for its owners, and either they or others shift their motives toward extracting value. Optimization is no longer about the creation, sharing, experience, humanitarian, whatever motive, and shifts to money. The second that becomes the goal, enshittification is just part of the optimization journey. In my line of thinking, it's the same reason monopolies or near-monopolies tend to form; these are merely further states along optimization strategies in the monetary/wealth extraction goal.


This is beautifully written, thank you. As a creator, my money ambitions fit between “does this thing provide enough value to pay for itself” and “could this provide enough value to pay for my needs”.

The drive to create the thing comes from sincere enthusiasm + excitement that simply don’t leave me alone.

Shifting the thing’s purpose from the original one that filled me with enthusiasm to the purpose of simply extracting money, corrupts its original purpose.

I call this corruption.

it is my observation that one kind of people create, and another kind of people corrupt.

Unfortunately the economic system we have accepted as normal, leans toward corruption and extraction.


> if Google wasn't there and Altavista was the incumbent, would it be any different, or was the enshittification of search inevitable?

You might not remember this, but before Google, paid search placement was par for the course. One of Google's innovations, one of the things that really endeared it to users was clearly labeling their ads.

So, yes - it was inevitable. And, in fact, Google probably staved it off at least a decade; maybe more.


Was at Tokyo Disneyland today and taught my girlfriend the word “enshittification”. (i.e. making your customers pay via your stupid app to do literally anything in your park, and not even providing wi-fi.)


That's not enshittification, squeezing money out of you is just how theme parks operate. The term can't really apply to Disney parks at all because there's no two sided market.


You missed the no WiFi part. At least enable customers to send their money!


That’s shitty, but if that’s all enshittification means then it’s ten extra letters for nothing. Disneyland is not a “platform”, it doesn’t go through the enshittification process.


Arguably everything driven by capitalism is geared towards enshittification. Maximum extraction for minimal effort is a hard paradigm to beat


I don’t remember paying for anything at Disney Sea with the app except for a few fast passes (and used to schedule the free fast passes of course). Suica card and credit card worked for everything else.


So you taught her wrong, that's not what it means...


That's double-dipping?


[flagged]



Before Google, new search engines became crappy after 6-12 months, maybe two years tops.

It's not surprising Google search is now crap, it's what happened to all the old search engines. It's only surprising that it took 15-20 years (depending on perspective), and in the mean time, they've developed a big ecosystem of other stuff.


I'd say enshittification is inevitable. It isn't a technology issue, it's human issue. Imagination and desire are what brought us this far and also what holds us back. See also: the tragedy of the commons, the prisoner's dilemma, the trolley problem, etc.


There is a reason Google’s first office was right next door to DEC WRL and Alta Vista. There is so much cross contamination between the two that it’s impossible to say.


As someone old enough to remember: one of the main reasons Google won was that the other engines (shedding here a tear for Lycos) simply couldn't handle the increasing amount of web spam. They were built in a trusted web environment, but suddenly things became cheap enough for less scrupulous people to start creating effectively spam sites, and the engines somehow didn't manage to react in time.


Altavista started turning to shit as soon as it was no longer an Alpha demo. That was a major reason Google took off so quickly.


I miss boolean search operators.


Sounds like something those dogpile folks would do.


I think the plan went horribly wrong, everybody started using Bing again!


Bing aka DDG as if...



The technical article (linked in the post) has more interesting details: https://blog.cloudflare.com/technical-breakdown-http2-rapid-...


This should be the top comment.

TL;DR: HTTP/2 is internally concurrent, can handle multiple streams. It is possible in HTTP/2 to send a nasty request that looks like so:

  - GET x1
  - GET x2
  - GET x3
  - ...
  - GET x100
  - Actually, cancel all of the above (uses multiple RST_STREAM frames)
  - GET x101
  - GET x102
  - (...)
  - GET x200
  - Actually, cancel all of the above (uses multiple RST_STREAM frames)
  - (...)
This can be repeated a lot of times. The problem is that the endpoint, which is typically a reverse proxy, might start dispatching the requests before it reads about their cancellation. And sure, it will cancel them, but by the time of cancellation it will already have caused some resource usage downstream. Such requests are accepted because at no point has the client opened more than 100 streams, which is the typical concurrency limit. The example from the blog manages to squeeze into a single packet 1000 GETs (i.e. 1000 HEADERS frames) correctly interleaved with RST_STREAMs.

Maybe it's just me, but it's always fun to see such creative and simple abuses of protocols/code.
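
If you want to see roughly where a server or proxy can hook in a defence, here's a rough sketch using the Python hyper-h2 library (the reset limit is invented, and treat the API details as approximate; check the h2 docs before relying on this): count client-initiated resets per connection and send GOAWAY once a client is clearly just churning streams.

  import h2.config
  import h2.connection
  import h2.errors
  import h2.events

  RESET_LIMIT = 200   # invented threshold: client resets tolerated per connection

  class GuardedConnection:
      def __init__(self):
          cfg = h2.config.H2Configuration(client_side=False)
          self.conn = h2.connection.H2Connection(config=cfg)
          self.conn.initiate_connection()
          self.resets = 0

      def feed(self, data: bytes) -> bytes:
          """Feed raw bytes from the socket; returns bytes to write back."""
          for event in self.conn.receive_data(data):
              if isinstance(event, h2.events.StreamReset) and event.remote_reset:
                  self.resets += 1
                  if self.resets > RESET_LIMIT:
                      # Too many client-side cancels: refuse further streams.
                      self.conn.close_connection(
                          error_code=h2.errors.ErrorCodes.ENHANCE_YOUR_CALM)
              elif isinstance(event, h2.events.RequestReceived):
                  pass   # hand the request off to the application as usual
          return self.conn.data_to_send()

The real mitigations described in the vendor posts are more involved, but the hook point (watching RST_STREAM behaviour per connection) is the same idea.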


That’s pretty fascinating. This is a naive solution, but couldn’t the protocol have supported limits of requests per packet? I get that it is antithetical, but for most sites, this type of request pattern seems highly unusual.


If this is true, then the design is problematic. What makes it even worse is that cancellation of requests typically does not work in cloud environments. It is a bit laughable that Azure, for instance, recommends the use of cancellation tokens, but in reality you never get them for web requests.


Look at F5's entry regarding this CVE. They specifically mention they have set a safer limit because they expected this to be an attack vector, haha


> We noticed these attacks at the same time two other major industry players — Google and AWS — were seeing the same.

Curious if there's anyone in the HN crowd that works at this level in one of the major vendors. What happens during an attack of this scale? Are there people from Cloudflare + Google + AWS on a live videoconference call co-ordinating with each other in real-time to mitigate it? Or is each vendor mostly observing from a distance what is happening elsewhere, and solely focussed on sorting their own problems out?


We typically fight our own fires, but if one of us sees something interesting/new we often ask others (after the fire is out) if anyone else saw a similar attack (which could be a new botnet, a new attack method, or whatever). In this case we realized we were all looking at the same thing (which could have huge impact on smaller targets), so collaborated on understanding the problem and coordinated the security response with all webserver vendors.


How does DDoS mitigation work? When people say "I put my website behind Cloudflare to mitigate DDoSes", what does it mean exactly?

Is it only about having a large enough ingress pipe that you can weather however many Gb/s you are being bombarded with, and still having some spare capacity for legitimate traffic?


It is about that and a lot of other things, but it usually involves being able to dynamically scale up your bandwidth and compute power to cope with the incoming flood.

A lot of DDoS traffic isn't actual HTTP traffic, it can be garbage targeted at your IP address to "fill the pipes" (bigger pipes help, as well as having multiple servers geographically distributed). Some can be a TCP SYN flood, to just open TCP connections and exhaust available ports. Etc. Oftentimes, multiple simple reverse proxies can handle these malformed requests in front of your server.

Then, for the most sophisticated queries that send seemingly-legitimate HTTP traffic, one has to handle them... It could be serving requests from a cache, adding captchas to slow attackers and identify legitimate traffic, enforcing rate limits, etc. Usually, you'd like to be able to tell if a request is legitimate or not before forwarding it to the actual server, and you can deploy all sorts of tools to do so.


> it usually involves being able to dynamically scale up your bandwidth and compute power to cope with the incoming flood.

I don't think this is right. If you have a meaningful amount of bandwidth, dynamically scaling it means getting a connection upgraded in weeks instead of months. If you don't have a meaningful amount of bandwidth, you're relying on your provider(s) to have enough bandwidth and again, they can't expand quickly.

> Some can be TCP SYN flood, to just open TCP connections and exhaust available ports.

If you have a tcp stack from maybe 2003 or later (so excluding macos, unless they changed something in the past four years), it will have synflood protection, with syncookies. In the event of a heavy synflood, your system will send at most one syn+ack per incoming syn, and actually accept connections on the incoming ack. Yes, you miss out on detailed tcp options, but it's not that big of a deal, unless the volume impacts your available bandwidth.

Also, as a TCP server, you can't meaningfully run out of ports; your one listen ip:port can connect to all ip:ports, if you have the memory for it. You'll probably run out of total accepted sockets, but there's no real resource limit on partially accepted connections, because of syncookies. It can be much more draining when DDoS clients actually hold connections. But it's often simply about volumetrics, and it's easier to generate a high volume of SYN packets than to hold a connection.


Cloudflare, and other companies, can detect that requests are DDoS and either drop, throttle, or verify the traffic, instead of forwarding it to your server.

You configure your server to drop all traffic which wasn't sent by Cloudflare, which is efficient.
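
The "drop everything else" part is usually done at the firewall, but the check itself is trivial; a toy version with Python's ipaddress module (the ranges below are documentation prefixes, placeholders for whatever list your CDN publishes):

  import ipaddress

  # Placeholder prefixes (TEST-NET ranges), standing in for the CDN's published list.
  ALLOWED_RANGES = [
      ipaddress.ip_network("198.51.100.0/24"),
      ipaddress.ip_network("203.0.113.0/24"),
  ]

  def from_cdn(client_ip: str) -> bool:
      ip = ipaddress.ip_address(client_ip)
      return any(ip in net for net in ALLOWED_RANGES)

  # e.g. reject the connection early when from_cdn(peer_ip) is False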


Back when I was in Google SRE, people would joke that "we just send DDoS traffic to Australia".

In general, Google's internal cross-DC traffic is so much larger than anything anyone could DDoS them with that they can always find a way to deal with it.


When the ddos attack is volumetric, the only way to mitigate it is to have a fat enough network to handle the traffic while you work with ISPs to start blocking the traffic upstream.

Not all ddos attacks are based on volume though, some are exploiting native features of a protocol, like the slow loris attack

https://www.cloudflare.com/learning/ddos/ddos-attack-tools/s...


that's not the only way.

The way we used to do it is have "filter boxes" with real anycast IP addresses which reverse-connect to your origin.

This helps a lot because it keeps a lot of traffic localised instead of allowing it to collect in one place. Anycast should also mean you have a failover mechanism; but if it fails then you're only down in one section of the world where the most bots are anyway, which is usually not as bad as being down globally.


Often it means automatically recognizing DDoS requests and handling them in a way that is less costly, without impacting legitimate users.

In this case, it might mean recognizing when a client rapidly resets streams, and either moving that traffic to a slow lane or filtering it entirely.


a) Big pipes

b) ability to filter the noise from real traffic as far as possible (i.e. there is little point in taking in a big pipe of DDoS traffic and then just proxying it to the thinner pipe to the real backend - but if you can identify bad traffic you can drop it and not pass it through).

c) being a CDN helps as a side-effect (what the CDN serves doesn't load the backend services, what can be served from the CDN works for users even if the backend is slow or down)


I always believed that they have some secret mega routers with massive computational capacity that allows smart and complex TCP/UDP packet filtering.


They do have special equipment at the edges like: https://www.netscout.com/arbor


You use algorithms like token bucket.
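
For readers who haven't run into it, a bare-bones token bucket looks something like this (Python; the rate and burst numbers are arbitrary):

  import time

  class TokenBucket:
      def __init__(self, rate=100.0, burst=200.0):
          self.rate = rate                 # tokens added per second
          self.capacity = burst            # maximum bucket size
          self.tokens = burst
          self.last = time.monotonic()

      def allow(self) -> bool:
          now = time.monotonic()
          # Refill proportionally to elapsed time, capped at the bucket size.
          self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
          self.last = now
          if self.tokens >= 1.0:
              self.tokens -= 1.0
              return True
          return False                     # over the limit: drop, delay, or challenge

In practice you'd keep one bucket per client key (IP, session, etc.) and tune the numbers to your traffic.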


CDN/SDN.


Linked in this article is more info on the rapid reset feature of HTTP/2 which was used as part of the DDoS https://cloud.google.com/blog/products/identity-security/how...


No word on the origin of these attacks? This must require massive amounts of hardware; you'd imagine it to be easily traceable unless it's some kind of botnet.


That's the particularly bad news, this attack does NOT require a really huge botnet.

https://blog.cloudflare.com/zero-day-rapid-reset-http2-recor...

"Furthermore, one crucial thing to note about the record-breaking attack is that it involved a modestly-sized botnet, consisting of roughly 20,000 machines"


20000 being modest really says a lot about the state of security on the Internet.


There are 5 billion people on the internet. This is 0.0004%. Even 2 million is only 0.04%.

(this assumes that 1 person = 1 device; some people share devices, most people have more than one, e.g. I have a laptop and a router, many people also have a phone, a work laptop, and whatnot – the average is probably >1, maybe even >2)


> this assumes that 1 person = 1 device; some people share devices, most people have more than one, e.g. I have a laptop and a router, many people also have a phone, a work laptop, and whatnot – the average is probably >1, maybe even >2

And lots of devices are not even that, Internet of Shit garbage is well known for being botnet-central, servers running insecure services are good fodder (it's very common for vulnerabilities to be used not to exploit the machine itself, but to install C&C and leverage the machine into other attacks), ...


Distribute just one warez game with your malware embedded and you'll have well over 20,000 hosts under your control.


Is there any major popular account that distributes cracked games that has been found to do such a thing? I have seen some popular accounts that create their own installers ("repacks") where the installation takes a suspiciously long time and a huge amount of RAM while installing.


Or does it say more about the sheer number of devices connected to the internet?


Well, 20000 to hit 201 million requests per second and give Cloudflare problems. You wouldn't need that to make problems for many sites.


*the size of


One could imagine that, given the size, it could be politically or legally sensitive to announce the origin.


Cloudflare explicitly says it's an unknown threat actor: https://blog.cloudflare.com/zero-day-rapid-reset-http2-recor...


Looking at the scale of those that's what I figured too, but one of the previous largest ones (mirai) was targeting a minecraft server (...). Krebs has some interesting write ups on Mirai.


The silence is actually already giving it away then, one of four options.


Enumerate, please.


I assume China, Iran, North-Korea or Russia (in alphabetical order).


US enemy one, two, three and four (whoever is trendy to blame right now)


The immediate assumption is that Iran is doing it. They have done it many times before and they are allied with Hamas. I haven't seen any proof but it's a safe bet.


A novel attack like this done at small scale like this is probably just a script kiddie experimenting.

An actual nation state would have tested it fully internally and started on the public internet at a scale bigger than 20,000 machines.


This happened in late August and early September.


Couldn't Cloudflare show a page to the next handful of HTTP requests from an IP informing the user that "something on your network is participating in DDoS attacks"?

All the big providers could do this, just inject a little Turnstile-like page in front of the next Cloudflare site you visit.

I would love to know if there's a compromised device on my network, and I don't have any real monitoring set up to detect it.

It's not a full solution, but at least informing users there is a problem is a good start.


> All the big providers could do this, just inject a little Turnstile-like page in front of the next Cloudflare site you visit.

Oh good. We can go back to the pre-HTTPS days where ISPs injected ads into HTML. Except this time we normalise it for the CDN provider.


In the age of CGNAT? Not a good idea.



The fact that large cloud providers can handle huge DDoS attacks, I think, leads in the long run to a worse internet. It forces botnets to up their game, and for websites the only solutions available are to pay Google, Amazon or Cloudflare a protection tax.

I honestly don't see any other options, but I'd really wish for it to come through some community-coordinated list of botnet-infected IPs or something.


What?

Let's go back to username and password. 2FA forces scammers to up their game.

What about password managers? Having separate passwords to every account makes hacking into your accounts much harder and might hurt everyone in the long run.

And don't get me started on end to end encryption. Privacy, long term, will mean the fall of civilization.

Sarcasm aside: I think I understand your point, namely that we shouldn't just delegate to cloud providers the whole effort of preventing attacks, but as with everything production-grade, the average enterprise just isn't ready to deal with all the upfront cost of running its entire computing solution. Because it doesn't end with this type of mitigation and dependency. A similar argument could be made for not using proprietary chip designs made by cloud providers. Or any proprietary API solution for that matter. It really is a matter of convenience that a community solution might cover in the future, abstracting away the fundamental building blocks every cloud provider must have (name resolution, network, storage and computing services) to provide such higher-level functions without lock-in. We are just not there yet.


> just with everything production-grade, the average enterprise just isn't ready to deal with all the upfront cost to run your entire computing solution

That’s not a fair point.

We're not even trying to make the internet safe. There are zero (0) actions being taken to stop this madness. If you run a large website, you still regularly see attacks from routers compromised 3, 4, 5 years ago. Or how a mere few days of poking around smartly is still enough, to this day, to find enough open DNS resolvers to launch >500Gbps attacks with one or two computers.

Why are these threats allowed to still exist?

The only ones attempting something are governments shutting down booters (DDoS-as-a-service platforms). But that’s treating symptoms, not causes.

We will eventually need to do something, or it will be impossible to run a website that can’t be kicked down for free by the next bored skid.

Just like paying protection fees to the mafia was a status quo, this also is just that. A status quo, not an inevitability.

The solution is to finally hold accountable attack origins (ISPs, mostly), so that monitoring their egress becomes something they have an incentive to do.


I don't think it's true that 0 actions are being taken. When new vectors for amplification attacks are found, they get patched - you can't do NTP amplification attacks on modern NTP servers anymore, for example. But it takes a long time for the entire world to upgrade and just a handful of open vulnerable servers to launch attacks. And in the meantime people are always looking for new amplification vectors.

> The solution is to finally hold accountable attack origins (ISPs, mostly), so that monitoring their egress becomes something they have an incentive to do.

Be careful what you wish for. The sort of centralized C&C infrastructure and "list of bad actors everybody has to de-peer" that you would need to do this effectively would be a wonderful, juicy target for governments to go, "hey, add [this site we don't like] to the list, or go to prison".


> "hey, add [this site we don't like] to the list, or go to prison".

Aren't there already a dozen or so such lists? I don't see how one more list really increases the risk.

You can make the list public - most of the bad actors are obsolete, compromised equipment for which the owner is unaware of the problem. Once the list is public, it's pretty easy to detect anyone trying to abuse the list as a tool of censorship.


IP reputation is already a thing. And plenty enough ASNs are well-known for willfully hosting C2 servers and spam, DoS, etc sources…


Traditionally, a botnet can be composed (at least largely) of actual consumer devices unknowingly making requests on their owners' behalf. This can cover hundreds of unrelated ISPs as the "origin" and is effectively indistinguishable from organic traffic to a popular destination. "Accountability" is not simple here.


> Traditionally, a botnet can be composed (at least largely) of actual consumer devices unknowingly making requests on their owners' behalf.

And I do count that in.

Just because a user is the source of an attack unknowingly doesn’t make it right.

What would make it right is for there to be a more generalized remote blackholing system in place.

ie my site runs on an IP, is able to tell my ISP to reject traffic to it from $sources, and my ISP can send that request to the source ISP.

And if it makes my site unavailable to that other ISP because of CGNAT and 0 oversight, tough luck. Guess their support is getting calls so maybe they start monitoring obviously abusive egress spikes per-destination.


I like the irony of saying there are zero actions being taken in response to a blog post documenting actions taken to specific CVEs.


These blog posts document the attack. Documenting it and acting on it are different things.

There's no practical action being taken here besides « use our products cause we can tank it for you ».

The mitigations listed are better than nothing, but the fact that every skid out there can hire a botnet of a few thousand compromised machines (like here) and send you a few million rps (say this protocol attack allowed a 100x higher-than-average impact) is more than enough to kill the infra of 99.99% of websites. No questions asked.


>There is zero (0) actions being taken to stop this madness. If you run a large website, you still regularly see attacks from routers compromised 3, 4, 5 years ago

Yes, you're 100% correct. Back in the day, when the main botnet activity was spam, if you were infected and started sending TBs of spam, the ISP would first block your outgoing SMTP. If they kept getting complaints, in a week or two they'd cut you off.

I remember 30 years ago when most people were on dialup, I was fortunate enough to have 128kB SDSL. As a relatively clueless kid I decided to portscan an IP range belonging to a mobile service company. Few days later my dad got a phone call saying their IDS flagged it and "don't do it or we'll cancel your service". For a port scan of few public IPs no less!

ISPs could definitely put a stop to 99% of these botnets, but until they see some ROI, why would they bother?


But that's exactly the problem, it shouldn't require an enterprise-grade tool just to host a simple website on the internet. We've lost something due to our inability to stop attacks at the source and our heavy overreliance on massive cloud providers to do it for us.

2FA and password managers didn't make us heavily reliant on massive companies.


Yes, but if these cloud providers didn't exist eventually there'd be botnets that no site could protect against, rather than the status quo of at least some sites being able to resist them. The idea that the existence of cloud providers that can soak up a lot of traffic is making things worse by causing botnets to get more powerful just seems silly.


you don't need enterprise grade tools just to host a simple website. however, if your simple site ever gains enough traction to come under an attack, especially one like this, you'll never survive. you can either just accept that your service will not survive the attack and just shut it down until the attackers declare mission accomplished and stop, and then hope they don't notice when you bring it back. no simple site will be able to afford what's required to stay up through these attacks.

i'm not saying i like having to put the majority behind the services of 2 or 3 companies, but if you ever get shut down from some DDOS, you'll understand why people think they need to.


It won’t survive - until a day or so later you’ve migrated to one of the large providers who provide the protection.


A similar analogy can be made with the likes of westward expansion in the continental US.

Back then, you got a piece of land, and really could do what you wanted with it. Build a business, farm, etc. some government taxes but nothing crazy. But you had to deal with criminals, lack of access to medical care, and lack of education.

Now to do the same, you have a slew of building codes, regulations, zoning laws, and are basically forced to have municipal services. Higher taxes to pay for the roads, police force, firefighters, education services, etc.

However, home owners can still just have an egg or vegetable stand at the end of their driveway. It won’t be the same as having a storefront in town, but it’s still doable without the overhead.

Similarly, as the internet matures, we’re going to see more and more overhead to sustain a “basic” business.

But you can still have a personal blog ran in your closet, for lower-level traffic.

The analogy isn't perfect, but unfortunately as threat actors' budgets increase, so too does the quality/sophistication of their attacks. If it were cheap to defend against some of the more costly attacks, they would find a different vector.

The answer, to me, is some tangential technology that is some mix of federated or decentralized. Not in a crypto bro sense, but just some tech whose fundamental design solves the inherent problem with how our web is built today.

Then threat actors will find another way, rinse and repeat…


> home owners can still just have an egg or vegetable stand at the end of their driveway

No you can't. That is illegal without a "cottage food" license, training, and labeling in most of the US.

https://www.pickyourown.org/CottageFoodLawsByState.htm


Child-run lemonade stands are technically illegal in most states (some have actually carved out exemptions for them because of overzealous policing).

Garage sales often have a specific carve-out, also, and limitations on the number of times per year, etc.

Most areas nobody cares at all until it becomes a nuisance somehow.


Selectively enforced laws are the worst kind of law.


I've always thought it would be interesting to allow as a defense against a violation of a law to prove that the law is regularly violated without consequence.

Because selectively enforced laws are just another way of saying you have a king at some level, the person who decides to enforce or not.


Selective prosecution is a defense under the Equal Protection clause of the Constitution.

However, the Supreme Court has left the prescribed remedy intentionally vague since 1996, which in turn makes the claims themselves less likely to be raised, and less likely to succeed.

https://wlr.law.wisc.edu/wp-content/uploads/sites/1263/2022/...


You have some control over this as an ordinary citizen. Next time you're on a jury for a lemonade stand violation, nullify.


Has a lemonade stand violation ever resulted in a jury trial in the US? I'm skeptical. In places that enforce those rules, usually what happens is that the cops tell the parent it isn't allowed, the kid shuts it down and there's no fine.


Or it turns into a giant PR disaster for the cops.


Don't tell that to the GDPR defenders.


I am a gdpr defender. I would like stricter enforcement.


Okay but does that mean anything regarding the parent commentor's analogy or the article?


20 years ago if a blog or website ended up on slashdot/digg/whatever there was a good chance it was going down. Scalable websites are a commodity today


That goes both ways. What was the price then to get a botnet with 10k nodes making 1k requests / second? What is the price today?


For the website or for the use of the botnet?


For the use of the botnet...


sure, it's no doubt an arms race. The prevalence of websites going down due to scaling issues feels like an order of magnitude less than it was 20 years ago though. Purely anecdotal with no real data to back that up.


Because the majority of sites run on/behind:

- AWS

- Cloudflare

- Azure

- GCP

- Great Firewall of China

Maybe there was some truth about "the world market for maybe five computers", after all...


Sure, I fail to see how that invalidates my point


What I am saying is that we are getting "scalable websites" today individually, but it has cost us overall resiliency because most of us all are hiding behind the big providers. I am not so sure if this is a good trade-off.


> 2FA and password managers didn't make us heavily reliant on massive companies.

Retool: https://arstechnica.com/security/2023/09/how-google-authenti...

Lastpass: https://news.ycombinator.com/item?id=34516275


If Google Authenticator goes away, people will still be able to use 2FA (I for one use Aegis, it's available on F-droid and does everything I need, including encrypted backups)

If Lastpass goes away, people will still be able to use keepass or any of the large number of open source password managers, some of them even with browser integrations.

If I have a website that is frequently attacked by botnets and Cloudflare goes away, what can I use to replace it?


I am sorry, but if your password manager goes away and you have no disaster recovery scenario planned you might not be able to just move to a competitor:

https://news.ycombinator.com/item?id=31652650

My response was to illustrate how insidious big companies are.

Of course nothing compares to the backbone of the web going down. If AWS North Virginia suffers widespread downtime to all its availability zones, much of the web will just go dark, no question about it.


2FA, I’m not sure.

But Lastpass doesn’t represent the whole of password managers. Storing your passwords in an online service is a really silly thing to do (for passwords that matter at least). Use something local like keepass.


Hope you plan ahead for a house fire with a 3-2-1 approach for backups. Maintaining an always on off-site storage is expensive unless you resort to cloud solutions like OneDrive or Dropbox, but then you go back to the problem of having your passwords on the cloud, even if encrypted.

Not using cloud is just very expensive and time consuming for the average user.


Passwords are small enough that you can make physical backups easily.


Honest question, because it is interesting and might change how I approach backing up my passwords. How would you go about keeping that physical copy updated?

What I think would make this approach hard is that you would have to ponder, at creation time, whether a newly created account is important enough to warrant updating the off-site, physical copy of your most important passwords (I say this because if you want to back up everything and avoid the cloud entirely it is just not viable, having to update this physical backup for each new account. I am currently at over 400 logins in my pw manager; 2 years ago it was half as much).

I think having your passwords encrypted with a high-entropy master password and a quantum-resistant encryption algorithm, and having an off-site, physical backup of your cloud account credentials, is enough for anyone not publicly exposed, like a politician or someone extremely wealthy, though I would be skeptical that even these people go to such lengths to protect their online accounts.


The lesson is not to "avoid" the cloud, but to not be "dependent" on it. Doubly so if the service provided is one that keeps you locked in and can not be ported over.

So yes, I feel comfortable with my strategy of having backups on bluray disks + S3. If AWS goes down or decides to jack up their prices to something unacceptable, I will take the physical copies and move them to one of the dozen other S3-compatible alternatives. I am not dependent on AWS.

But I am not interested in using Google Authenticator or Lastpass because that would mean that I am at their mercy.


LastPass is an issue - but even LastPass would let you export/print the passwords. So no hard dependency there*. Google Authenticator recently did something similar with QR codes.

* though OTP seeds don’t print, and you can’t export/print attachments. I don’t recommend LastPass for these and many other reasons.


With two usb sticks it’s not that much work to take one with a fresh backup to my mom when I visit and take the other one back and update that backup. At worst I lose one or two logins.


It doesn’t take enterprise grade tools to host a website.

It does take enterprise grade tools to defend against the largest DDoS ever attempted.

Those are not the same thing. And those DDoSes are often aimed at things besides an HTTPS endpoint.


There should be a protocol to block traffic at the upstream provider. So if someone from 1.2.3.4 sends lots of traffic at you, you send a special packet to 1.2.3.4, and all upstream providers (including the provider that serves 1.2.0.0/16) that see this packet block traffic from that IP address directed at you. Of course, the packet should allow blocking not only a single address, but a whole network, for example, 1.2.3.4/16.

But ISPs do not want to adopt such protocol.
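
Roughly, the idea is something like this (a toy Python sketch, not a real wire format; every name here is made up):

    import ipaddress
    import time
    from dataclasses import dataclass

    # Toy model of the hypothetical "reject" packet; nothing here is a real protocol.
    @dataclass
    class RejectRequest:
        victim: str       # host asking for the block (must match the packet's source)
        blocked: str      # address or prefix to drop, e.g. "1.2.0.0/16"
        ttl_seconds: int  # how long upstream routers honour the block

    class UpstreamFilter:
        """State an upstream router would have to keep per victim."""

        def __init__(self):
            self.blocks = {}  # (victim, network) -> expiry timestamp

        def handle_reject(self, req: RejectRequest):
            net = ipaddress.ip_network(req.blocked, strict=False)
            self.blocks[(req.victim, net)] = time.time() + req.ttl_seconds

        def should_drop(self, src: str, dst: str) -> bool:
            now = time.time()
            src_ip = ipaddress.ip_address(src)
            for (victim, net), expiry in list(self.blocks.items()):
                if expiry < now:
                    del self.blocks[(victim, net)]
                elif dst == victim and src_ip in net:
                    return True
            return False

Even in this toy form the catch raised in the replies is visible: every router on the path has to keep and search per-victim state, and it has to trust that the packet really came from the victim.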


So I can deny service to your site with a single packet, instead of having to bother with establishing a whole botnet? The current botnet customers would be the first to advocate for this new protocol!


Simple! To prevent it being abused easily you could make it so you would need to send a high number of those packets for a sustained period in order to activate the block.


There is already an RFC we could apply, just implement forced RFC3514 compliance and filter any packets with the evil bit set.

https://datatracker.ietf.org/doc/html/rfc3514


And there could be a short time limit on that block, perhaps one hour, but even 60 seconds would be enough to completely flip the script on a DDoS.


You can only block access to your IP address, so you can ban someone from sending packets to you but not to anyone else. My proposal is well-thought and doesn't require any lists like Spamhaus that have vague policies for inclusion and charge money for removing. My proposal doesn't have any potential for misuse.


Sorry, this is not well-thought and certainly has potential for abuse. This is on IP and not domain? What is the signing authority and cryptography mechanism preventing a spoofed request?


When you send a "reject" packet, the intermediate routers send back a confirmation code. You must send this code back to them to confirm that the "reject" packet comes from your IP address. No cryptography or signing required.


I don't think you understand how networking operates at a packet level.


How can it protect from... botnets, where there are tens of thousands "someones"?


You can only ban packets coming to your IP. A botnet can only ban packets coming to its own IP addresses.


It's not very hard to send packets with a fake source IP, especially if you don't care about the reply.


Seems easy enough to require (i.e. regulate) end-customer ISPs to drop any traffic with a source IP that isn't assigned to the modem it's coming from. This would at least prevent spoofing from e.g. compromised residential IoT devices. Are they not already doing that filtering? Is there any legitimate use-case to allow that kind of traffic?
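
The check itself is conceptually trivial; a minimal sketch of the kind of source-address validation BCP38 describes (prefixes and port names are made up):

    import ipaddress

    # Prefixes the access router actually assigned to each customer port
    # (illustrative data only).
    ASSIGNED = {
        "port-1": ipaddress.ip_network("203.0.113.0/29"),
        "port-2": ipaddress.ip_network("198.51.100.64/29"),
    }

    def permit_egress(customer_port: str, src_ip: str) -> bool:
        """Forward only packets whose source address belongs to that customer."""
        prefix = ASSIGNED.get(customer_port)
        return prefix is not None and ipaddress.ip_address(src_ip) in prefix

    assert permit_egress("port-1", "203.0.113.5")   # legitimate source
    assert not permit_egress("port-1", "8.8.8.8")   # spoofed source, dropped

The hard part isn't the logic, it's getting every access network on the planet to actually deploy it.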


Someone has to go and add the filtering. Nowadays (or maybe since ten years ago) most ISPs have the filter, but not the last 1% (or maybe 0.01%).


The routers can send back a confirmation token to confirm the origin address.


First of all, there is no way this works reliably for anything but the first hop. There is no way for a router to send a packet to you in a way where you can reply to that router unless you are connected directly to it, unless all ISP routers start being assigned public IP addresses. Additionally, there are normally many paths between you and your attacker, and there is no guarantee that packets you send will take the same path as the packets you were receiving. Especially as the routing rules get modified by your successful blocking requests.

That also means that every router now has to maintain a connection table to keep track of all of the pending confirmations, and to periodically check that table for expirations so it can clean it up. Maybe not that bad for a local router, but this is completely unworkable for routers handling larger parts of the internet.

And of course, anyone who has a tap into that level can trivially spoof all of the correct replies so it's still not a secure mechanism.


You can deny access only from your IP, not for anyone else.


IP addresses can be spoofed. So you’d need some kind of handshake to verify you are the owner of that IP. Which is going to be tough to complete if your network is completely saturated from the DDoS in progress.

I do think your idea has merit though. But it’s still a long way from being a well thought-out solution.


How do you verify the source address of the packet is legit?


The router can send back a confirmation code and you must send it back to confirm that request comes from your IP.

Also, on well-behaved networks that do not allow spoofing IP addresses, this check can be omitted.
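
Concretely, the token check could look something like this (a toy sketch; how the token is actually carried in packets is hand-waved):

    import secrets

    # Router side: pending challenges, keyed by the claimed source address.
    pending = {}

    def on_reject_packet(claimed_src: str) -> str:
        """Don't act immediately; send a random token back to the claimed source."""
        token = secrets.token_hex(16)
        pending[claimed_src] = token
        return token  # in reality this would be sent in a packet to claimed_src

    def on_confirmation(src: str, token: str) -> bool:
        """Only a host that can receive packets at `src` could echo the token back."""
        return pending.pop(src, None) == token

It's essentially a return-routability check: it only proves the requester can receive packets at that address, nothing more.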


> The router can send back a confirmation code and you must send it back to confirm that request comes from your IP.

Ideally with the token packet being larger than the initial packet, so it can easily be abused for a reflection attack... ;-)

> Also, on well-behaved networks that do not allow spoofing IP addresses, this check can be omitted.

This is already not true for most networks, and in your case it would have to be true for all intermediate networks, which is just impossible.

In another post you suggest this should also allow blocking entire networks; how do you prevent abuse of that?

Your suggestion is anything but well-thought, it's a pipe dream for a perfect world, but if we'd live in one, we wouldn't have ddos attacks in the first place.


Yeah, we should invent secure communication channels and crypto keys first...


This proposal only works if the packets are readable by every intermediate router. Or are you suggesting that you establish a TLS session with every router between you and the attacker?


What you say already exists, hell, you can use BGP to distribute ACLs

But it costs space in the routing tables, and that means replacing routers earlier. That's no small cost, especially if you multiply it by a thousand customers.

"block all traffic from outside from this IP" is significantly easier than "block all traffic from outside from this IP to this client". And you need to do it per ISP client, else it is ripe for abuse.

And don't forget a lot of the traffic will come from "cloud" itself.


> What you say already exists, hell, you can use BGP to distribute ACLs

But you'd need to own an AS for that, right?

> But it costs space in the routing tables

Not implementing my proposal leaves critical infrastructure unprotected from foreign attacks. Make larger routing tables. Also, instead of blocking single IPs one can block /8 or /16 subnets.


Make larger routing tables.

Brilliant! Why didn’t we think of that?!? MOARE TCAMS!!!


If Cloudflare can do this on commodity hardware (stop attacks and block thousands of IPs), then router manufacturers who have custom hardware can do much more.

Also, in Russia, for example, there is DPI and recording of all Internet traffic, and if that is possible in Russia, then the West can probably do 10x more. Simply adding a blacklist on routers seems like an easy task compared to DPI.


This could be offered on a paid basis. For example, for $1/month a customer gets the right to insert 1000 records (block up to 1000 networks or IPs) into a blacklist on all Tier-1 ISPs. For $100/mo you can withstand an attack from 100,000 IPs, which is more than enough, and Cloudflare goes bankrupt.


I just imagined this: ISPs could make a [slow] redirecting URL like isp.com?target=yourwebsite.org/fromisp. If you receive unusual amounts of requests from the ISP, you redirect them through their website.

They can then ignore it until their server melts (which takes care of the problem) or take honorable action if one of their customers is compromised. The S stands for service after all.


It appears you don’t understand DDoS at all. There aren’t humans sitting behind browsers or scripts using browser automation software. No one cares about, much less respects, your “redirect” because no one’s reading your response. Most of the time the attacks aren’t even HTTP, they are just packet floods.


> It appears you don’t understand DDoS at all.

I can confirm this. I see web pages talking about redirecting traffic to scrubbing centers.


> Of course, the packet should allow blocking not only a single address, but a whole network, for example, 1.2.3.4/16.

So, if my neighbour is infected and one of his devices is part of a botnet, I get blocked as well?


Yes. Because blocking several extra users on a bad network that has several infected hosts and does nothing about it is better than being under attack.


Block the whole country, and then I guess you’ll see laws passed requiring IoT providers to start updating at a better clip.


That already effectively happens in a lot of cases.


If the source field in a packet reliably indicated the source of the packet and a given IP was sending you a lot of unwanted traffic, you'd ask their ISP to turn them off and the problem would be solved. Maybe one day BCP38 will be fully deployed and that will work. I also dream of a day where chargen servers are only a memory. Some newer protocols are designed to limit the potential of reflected responses.

Null routing is available in some situations, but of course it's not very specific: hey upstreams (and maybe their upstreams), drop all packets to my specific IP. My understanding is null routing is often done via BGP, so all the things (nice and not) that come with that.

Asking for deeper packet inspection than looking at the destination is asking for router ASICs to change their programming; it's unlikely to happen. Anyway, the distributed nature of DDoS means you'd need hundreds of thousands of rules, and nobody will be willing to add that.

Null routing is effective, but of course it takes your IP offline. Often real traffic can be encouraged to move faster than attack traffic. Otherwise, the only solution is to have more input bandwidth than the attack and suck it up. Content networks are in a great position here, because they deliver a lot of traffic over symmetric connections, they have a lot of spare inbound capacity.


> If the source field in a packet reliably indicated the source of the packet and a given IP was sending you a lot of unwanted traffic, you'd ask their ISP to turn them off and the problem would be solved

No. Your email will go straight into the trash because the ISP is not interested in doing something for people who don't pay them money. Also, even if they cooperate, it will take too much time.

> Null routing is available

Null routing means complying with criminals' demand (they want the site to become inaccessible).

> it's unlikely to happen

It will very likely happen if there is a serious attack on Western infrastructure: for example, if there is no electricity in a large city for several days, or if hospitals across the country stop working, or something like this. Then measures will be taken. Of course, while the victims are small non-critical businesses, nobody will care.

> Otherwise, the only solution is to have more input bandwidth than the attack and suck it up. Content networks are in a great position here, because they deliver a lot of traffic over symmetric connections, they have a lot of spare inbound capacity.

So until my proposal is implemented the only solution is to pay protection money to unnecessary middlemen like Cloudflare.


Do you know what the first D in DDoS attack stands for?


I am pretty sure that protocol would be just as abused.


How exactly? You can authenticate sender by sending a special confirmation token back.


How does one get removed from the block list?

Say some IoT device that half of households own gets compromised and turned into a giant botnet. The news gets out and everyone throws away that device. Now they are still blocked over a threat that doesn't exist anymore... doesn't seem like a good situation for anyone.

I'd imagine that the website owners that want the attack stopped will soon want to figure out how to get traffic back since they need users to pay the bills.

What's to stop someone from just making an app that participates in an attack when connected to public(ish) wifi networks, and participating in attacks long enough to get those all shut off from major sites?

How does this stop entire ISPs from getting shut off when the attackers have managed to cycle through all the IP pools used for natting connections? (e.g. the Comcasts of the world that use cg-nat to multiplex very large numbers of people to very small numbers of IPs)?


> How does one get removed from the block list?

We can add an "accept" packet that lifts the ban.

Also, how do you remove yourself from blacklist when banned by Google or Cloudflare? I guess here you use the same method.

> Say some IoT device that half of households own gets compromised and turned into a giant botnet. The news gets out and everyone throws away that device. Now they are still blocked over a threat that doesn't exist anymore... doesn't seem like a good situation for anyone.

Not my problem. Should have thought twice before buying a vulnerable device and helping criminals. As a solution they can buy a new IP address from their ISP.


As much as I half-wish there was something like this, it does sound like email spam blacklists all over again.


Yes, what the OP is saying is related to one of the paradoxes of security/defence: the more one increases its defences (as Google is doing), the more that increase pushes one's adversaries to increase their offensive capabilities. Which is to say that Google playing it safer and safer actually causes their potential adversaries to become stronger and stronger.

You can see those paradoxes at play throughout the corporate world and especially when it comes to actual combat/war (to which these DoSes might actually be connected). For example, the fact that Israel was relatively successful in implementing its Iron Dome shield only incentivised its adversaries to get hold of even more rockets, so that the sheer number of rockets alone would be able to overwhelm said Iron Dome. That's how Hamas got to firing ~4,000 rockets in a single day recently; that number was out of their league several years ago when Iron Dome was not yet functional.


It's the opposite: the number of rockets was growing, and hence the Iron Dome was developed. The Israelis saw the writing on the wall and acted accordingly. The laser system will be operational soon, and then it will cost $1 per shot.


Unless it's cloudy outside.


> Let's go back to username and password. 2FA forces scammers to up their game.

Let's do it. It works for the website you're using right now. 2FA was in large part motivated by limiting bot accounts and getting customers' phone numbers.

I can't imagine how much productivity the economy loses every day due to 2FA.


Is this sarcasm? If not please provide some more details on why you think "2FA was in large part motivated by limiting bot accounts and getting customers phone number". I never used a phone number for 2fa. Mostly TOTP. Bots could do that too. I don't see the connection.

>I can't imagine how much productivity the economy loses every day due to 2FA.

Is it really that much? Every few days I have to enter a 6 digit number I generate on a device I have with me all the time. Writing this comment took me as much time as using 2fa for a handful of services for a month.


> Every few days I have to enter a 6 digit number I generate on a device I have with me all the time.

I use more than one service a day, and some infrequently, so for me about every day I have a minute or two where I try to log in, need to find my phone (it's not predictable when it will ask), and then type it in. This happens to every person several times a day!

I also now must carry a smart phone with me to participate in society.

But the main drag is that when people lose or break their phones the response is: "just don't do that" and the consequences range from losing your account to calling customer service.

> Mostly TOTP. Bots could do that too. I don't see the connection.

Most people using 2FA do not use TOTP, they use a phone number.

Bots could use TOTP too; it's more infrastructure, and it's a proof-of-work function for them to log in.


While I don't take starcraft2wol's theory seriously, there are a bunch of services that have made phone numbers essentially mandatory. They claim this is to "protect your account".

You sign up for a Skype account or Twitter account and decline to give your phone number, instead choosing a different form of 2FA? In my experience your account will be blocked for 'suspicious activity' even if you have literally no activity.


And you still don't take my theory seriously :)


To add, password managers provide great coverage of almost every problem 2FA is supposed to solve, and they improve on a workflow your grandma already knows (writing passwords on a sheet). The only difference is Google doesn't get any money when you run a script on your own computer.


> It works for the website you're using right now

It doesn't, you can regularly see people getting their accounts stolen here. This wouldn't be possible (or at least this trivial) with any competent implementation of 2fa.


> Privacy, long term, will mean the fall of civilization.

I'm curious about your rationalization for this. Lack of privacy will also mean the fall of civilization. Civilization is just doomed to fail at one point or another. All things come to an end.


This was me being sarcastic. Of course we need privacy, not because we have things to hide, but because individuality can only flourish without constant surveillance.

Yes! All things come to an end, and that is why some recent philosophers think Plato was naive to believe societal rot could be minimized or eradicated. This is where negative utilitarianism comes in, where the point of society is not to maximize happiness (and thereby prevent society from collapsing) but to minimize suffering (and therefore provide mechanisms to minimize the damage from transitions between forms of organization when society collapses). I have to refer you to Karl Popper's The Open Society for this, because needless to say this answer is very reductionist.


Ah I just missed the sarcasm. Yeah, and when the sole goal is to minimize suffering, tyranny is introduced.

"Those who would give up essential liberty, to purchase a little temporary safety, deserve neither liberty nor safety."


This discussion is somewhat reminiscent of "Don't hex the water"..

https://www.youtube.com/watch?v=Fzhkwyoe5vI


None of your examples are valid, IMO.

Procuring and operating the infrastructure to mitigate this kind of attack costs many many thousands of dollars or requires becoming part of the Cloudflare/AWS/Google hive.

Joe Schmo can set up a TOTP server, run keepass/bitwarden and use letsencrypt for free (or another SSL provider for cheap).

The lament from parent is that running a simple blog reliably shouldn't require being inside Cloudflare's castle walls or building your own castle.

---

My personal observation is that simple websites should continue operating over HTTP/1.1!


That's not a valid comparison, since there are various effective decentralized 2FA methods available – unlike for DDoS protection.


Most of them are dynamic IPs. Some of them are infected mobile devices.

What happens when you log an attack from a device that is attacking you from a school or business WiFi network? Block the whole IP forever?

What if the user is on a CGNAT. Are you going to block the edge proxy for that entire ISP?

What if you're getting hit from a residential connection that gets a new rotated IP every couple of weeks? Block whoever gets that IP from now on?

Your solution doesn't stop attacks. It just stops regular users.


> What happens when you log an attack from a device that is attacking you from a school or business WiFi network? Block the whole IP forever?

No, but for a day perhaps.

> What if the user is on a CGNAT. Are you going to block the edge proxy for that entire ISP?

Maybe. If the ISP doesn’t bother doing anything about it (which is THEIR job, not mine as a website operator).

If the ISP can’t be arsed to do their job, why am I supposed to care about them at all?

> What if you're getting hit from a residential connection that gets a new rotated IP every couple of weeks? Block whoever gets that IP from now on?

Same as the CGNAT one. It’s the ISP’s job to handle their misbehaving customers.

If they refuse to do it and get complaints from their other customers that they’re getting blocked, maybe they’ll actually get to it.

> Your solution doesn't stop attacks. It just stops regular users.

No. It puts pressure on the ISPs to finally stop whining loudly when they receive an attack while closing their eyes on any attack originating from their network.

This is not sustainable.


Trust me when I say that you don't want the ISPs to inspect web traffic. That is not how to solve this. That is costly for the ISP and will drive up costs. It also makes supporting a website impossible. The ISP is assumed by all parties to be impartial. That assumption is required for the internet to be operational. Sure it might function your way, but it would be impossible to support.

And maybe Facebook and Google are big enough to push around the ISPs, but they are the only ones. Nobody will bat an eyelash if 15,000 Comcast users in Phoenix AZ can't access your hokey-pokey website. Comcast doesn't care. The users won't blame their ISP. They will blame you, or whoever owns the hokey-pokey website. If you want traffic, you need to be equipped to handle traffic. You are the one with the internet facing infrastructure.

You are the one blocking traffic. Not the ISP. That is how it should be. The ISP should be impartial. You pay for connectivity. Consider yourself connected. For better or for worse. You are responsible for what you put onto that connection.


> Trust me when I say that you don't want the ISPs to inspect web traffic.

They do already. DPI on port 53 for DNS blocks or SNI inspection is commonplace. So are IP blocks.

> If you want traffic, you need to be equipped to handle traffic. You are the one with the internet facing infrastructure.

Slightly misleading wording here. More accurately your point is: « you want to run a website? Better have the infra to support traffic spikes comparable to that of a tech giant ». 400M rps would cost an unfathomable amount of money to be able to handle even just while dropping all packets.

> And maybe Facebook and Google are big enough to push around the ISPs, but they are the only ones. Nobody will bat an eyelash if 15,000 Comcast users in Phoenix AZ can't access your hokey-pokey website.

Obviously yes. Too bad it’s better business for everyone to say nothing and just recommend you use their product.


ISPs need to start taking much more responsibility; currently they either don't care or choose not to care, to avoid having to deal with upset customers.

The fact that millions, if not more, of devices can continue to access the internet regardless of how long they have been compromised is just crazy. I get that it puts more responsibility on end users to secure their devices, if they otherwise run the risk of getting thrown off the internet, but I currently fail to see other options. Our device security still isn't good enough that we can just use them with reckless abandon.

Any "solution" that attempts to fix the problem of increasing DDoS attacks and their damage that doesn't address the issue of compromised devices being allowed to roam free on the internet is a band aid at best.

And I can almost hear people complain that I'm arguing to throw compromised IoT, SCADA and monitoring devices off the internet, and yes I am. None of these things have any business being exposed to the public internet anyway.


Either the ISPs are common carriers that follow some sort of basic rules, or they try to make people happy and end up stepping all over people randomly.

Currently there are zero rules (outside of an ISP ToS maybe) that forbid what you’re talking about. Pretty much anywhere, I think? Unless you know of a law against having an infected or out-of-date computer connected to the internet?

There really is no way to have both. The current situation, they generally only deal with problem cases that get reported to them. And I doubt anyone is going to bother doing so for the 20k machines in this attack.


It is not an ISP's job to analyze traffic patterns and attempt to stop the bad ones. That's like saying it's the job of road crews to stop speeders.


So who else? My proposal would be to have companies like Google, Microsoft, Amazon and hosting providers be able to report sources of DDoS attack to the ISPs who can then identify the customer and let the customer know that they have a week to fix the issue or lose connectivity.


That is terrifying.

Let Google, Amazon, and Apple decide who gets to use the internet and who gets put into a list.

That is way worse than giving Google the W3C. That is literally just handing them the internet and making everybody else on it subservient to Google.


Or that it's the ISP's job to cut off accounts that are downloading copyrighted works, or hashing cryptocurrency without paying taxes, etc.

It would be nice if the cell phone provider could send a text message reporting the problem. But how to distinguish it from spam?


> > What happens when you log an attack from a device that is attacking you from a school or business WiFi network? Block the whole IP forever?

> No, but for a day perhaps.

Then that's also a DDoS attack vector.


The idea clearly needs some work.

But, a slight defense of it—the really big providers can already sink a massive DDoS anyway. So, this is just a scheme to help little websites. It doesn’t really matter if a school, or even a cellphone network, can’t access my little website for an afternoon.

You’d have to decide if you want to send the block request. If you are hosting your personal blog, you’ll probably go for it regardless. If you are providing a small service; hosting git for a couple friends or whatever, you’ll probably block with some discretion.


The only answer is publicly-resourced protection and it's not that weird when you think about it. My apartment has a basic lock that any locksmith can undo and I'm safe because of my community and government protection (police, mental healthcare, justice system, etc...). Seems like the same logic should apply to my website or other digital property.


ISPs will gladly quarantine/rate limit folks for pirating stuff, why don't they use those tools to combat botnets? Though I could see this leading to a slippery slope of remote attestation for internet access.


> why don't they use those tools to combat botnets?

Because they probably don't care.


Community yes. Government protection no. When was the last time you heard of police stopping a break-in or making a successful investigation?

Independent of police, in bad communities your neighbors are willing to break in. In good communities they don't.


Where this breaks down is that because of the nature of the internet and DDoS attacks it’s not something that can easily be solved with better policing - even identifying a perp might be near-impossible, and they might be in another country anyways. The government does try to prosecute botnets and DDoS attacks today, but it’s of limited success. Is there a practical solution here I’m missing?


I don't know about a "practical solution", but there are research efforts to think about new ways to build internets that mitigate some of these problems.

Here is one that I'm aware of: https://named-data.net


Why don't we just require major providers to publish a realtime list of IPs that are attacking, so that we can drop them into a block list with an expiration date of a month or so?

If your computer is infected, I don't want to talk to you for a month. If it continues to be infected, I might up that to a year, or permanently ban you.

It's your problem. Go fix it.
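
The consuming side would be mechanically trivial; a sketch, with the feed format invented for illustration:

    import time

    BLOCK_SECONDS = 30 * 86400   # roughly a month

    blocklist = {}  # ip -> expiry timestamp

    def ingest(feed_records):
        """feed_records: hypothetical list of {"ip": ..., "last_seen": <unix ts>} dicts."""
        for rec in feed_records:
            blocklist[rec["ip"]] = rec["last_seen"] + BLOCK_SECONDS

    def is_blocked(ip: str) -> bool:
        expiry = blocklist.get(ip)
        if expiry is None:
            return False
        if expiry < time.time():
            del blocklist[ip]
            return False
        return True

The hard questions raised below are about everything around this: dynamic IPs, CGNAT, and who gets hurt by a stale entry.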


I've been on the receiving end of "Your" (dynamic) "IP has been blocked."

I would greatly prefer not having my semi-randomized IP blocked because someone used it maliciously a year ago.


Key phrase: "a year"

If anybody is suggesting permanent bans of IPs, it's not me, at least not at a public level. I may very well choose privately to do that.

To clarify: I, personally, choose a blacklist policy. Not some other org. I think if you offload this onto any kind of external structure, it breaks again.

ADD: We make publicly-available, second-by-second, how the internet is broken and invite all comers, including me and my blocklist, to help fix it.

There's a huge commercial interest in NOT fixing the problem of random crap showing up, from dancing cats selling things to targeted inserted ads. I get it. We saw this same thing happen with adblockers. It's now going on with "free" VPNs. Can't fight that perverse incentive, so don't fight it.


Thing is, don’t care.

The problem is that the ISPs whose customers the attacks originate from don’t give a shit.

If we have to give up 1% of legitimate traffic to thwart 90% of attacks, it is a good deal.

If you and other customers complain to your ISP (or switch), eventually they’ll do something about it.

We can’t seriously keep on accepting that « thousands of compromised devices » is a fine reality for a « small botnet ».

These devices should be quarantined.


Sounds like a really great way to potentially destroy someone's career if they aren't terribly competent and you are. Infect some component in their home network that they don't even know is smart-enabled, and keep breaching their new devices, adding them to an active and conspicuous botnet. The only recourse for average Joe is to find expert help, which isn't really in abundant supply if you are a semi-sophisticated malicious actor.

I don't even want to think about the ramifications for small and medium sized businesses. Realistically, how much would it cost to be able to completely destroy a local competitor by paying someone to orchestrate a few events in succession.


This is an odd argument. The net is currently broken in many ways. One of the many ways is fake negative reviews. They easily destroy small businesses.

As I understand your argument, because the net has solid endpoints we can identify and isolate, we should ignore that fact. Instead we should create more and more complex systems to work around bad actors?

Bad actor takes control of grandma's computer. We should do all sorts of things except stop talking to grandma's computer? The thing, I would suspect, that most people would expect?

Businesses suffer from too much transparency. Got that part. They buy things that don't work and sometimes hurt people, even if they don't intend to do this. So far, so good. Where is the part where new business models are supposed to exist because some people made bad choices and the current models don't work? Why don't we just publicize the bad choices and let things work themselves out?

Sorry. Missing it.


Amazon definitely cares if they lose 1% of sales.

Guess who has more votes, you or Amazon.


I’m aware. Doesn’t make it sting less being on the receiving end of attacks all the time and seeing everyone collectively shrug.


Ok, that is somewhat fair.

My personal want / solution would simply be "everything gets an IPv6 address, and IPv4 gets deprecated. Everything using IPv4 gets an algorithm slapped on top to convert it into IPv6."

Dynamic ips become a thing of the past.

But I realize that is significantly easier said than done. (Makes Minecraft servers easier to setup though)


"Moreover, the lifespan of a given IP in a botnet is usually short so any long term mitigation is likely to do more harm than good." "As we can see, many new IPs spotted on a given day disappear very quickly afterwards." https://blog.cloudflare.com/technical-breakdown-http2-rapid-...


Great solution for a world without shared and dynamic ips.


Not as bad as one may think. It's proper feedback which can be acted upon.

Every reasonable connectivity provider would pay attention to this info, or face intense complaints from its users with shared and dynamic IPs. It would identify sources of attacks, and block them at higher granularity level, reporting that the range has been cleared. (If a provider lied, everyone would stop believing it, and the disgruntled customers would leave it.)

For shared hosting providers it would mean blocking specific user accounts using a firewall, notifying users, and maybe even selling cleanup services.

For home internet users, it also would mean blocking specific users, contacting them, helping them identify the infected machine at home.

It would massively drive patching of old router firmware which is often cracked and infected. Same for IoT stuff, infected PCs, malicious apps on phones, etc. There would be an incentive to stay clean.


If the one doing the blocking is not a FAANG, it would do nothing of the sort. And FAANG benefit from DDoS by getting people into their walled cloud gardens.


Funny man thinks the big ISP cares that you yourself blocked your own site from your own customers coming from the big ISP's network.


No; with shared hosting, somebody else manages to blacklist the IP that serves many paying customers.


Block the whole subnet and make it the ISP's problem?


It's interesting to me that most of the push-back so far has been for the business model of the internet, ie people need link traversal and content publishing in order to make money from advertising (implied, but not stated). Therefore we need to add yet another layer to the mix, the cloud providers, and start paying those guys.

And yes, we can block entire subnets. You own the IP addresses, you're responsible for stuff coming out of them, at least to the degree that it's not malicious to the web as a whole (but not for the content itself, of course).

I'm calling bullshit on these assumptions. The internet is a communications tool. If it's not communicating, it's broken. If you provide dynamic IPs to clients that attack people, you're breaking it. It's not my problem or something I should ever be expected to pay for.

To be clear, my point is that we're suggesting yet another layer of commercial, paid crap on top of a broken system in order to fix it. It'd be phenomenally better just to publicly identify the places and methods where it's broken and let folks with more vested interests than information consumers worry about it. Hell, I'm not interested in paying for the busload of bytes I'm currently consuming for every one sentence of value I receive.


Because when a single machine is infected, at one ISP, it's a good idea to block the whole subnet? I don't think any commercial activity could afford such security strategy, blindly blocking legit users by thousands.


So it’s the ISPs fault that my grandma never met a spam email that she didn’t want to click?

One of the things that gets lost in this kind of debate is that the vast, vast majority of Internet users are not experts in how the Internet, computers, or their phones work. So expecting them to be able to "just not get exploited" is a naive strategy and bringing the pain to the ISP feels counterproductive because what, realistically, can they do to stop all of their unsophisticated users from getting themselves exploited?

At the end of the day, the vast majority of the users of the Internet do not care how it works - they want their email, they want their cat videos, and they want to check up on their high school ex on Facebook. How can we rearchitect the Internet to be a) open b) privacy protecting, and c) robust against these kinds of attacks so that the targets of DDOS attacks have better protection than paying a third party and hoping that that third party can protect them?


How does the ISP solve it? Send a mass mail/email telling people to reset their devices because someone has a device with botnet malware?


That is their problem. Maybe the price needs to go up if you don't secure all your devices, since the ISP is going to have to send a tech to your house. Or maybe the ISP has deep enough pockets to find and sue those cheap IoT device makers for not being secure, thus funding their tech support team.


Egress filtering? A botnet DDOS stream should not look like normal network traffic...


> Sorry citizen, google services are inaccessible because the only ISP in your city sold a service to a bad actor.

> We might fix this, we might not, you DONT have a choice.

> Thank you for your continued business.


Indistinguishable from the kind of service I get from Google - the moment that I need a human involved I just close my account with whatever Google service is misbehaving and move on.


But you have other options which is my point.

(swap in any corpo-service provider you personally like the most)

Blanket banning subnet ranges from services because of the actions of someone else is 3rd world shit.


Hacker News nerds will argue all day long that the Internet is a utility when the argument happens to personally benefit them, then in the same breath say that a random network admin is justified in blocking a whole ISP subnet due to one “bad” actor. And of course by bad actor I mean person that almost certainly accidentally got themselves infected with malware by not understanding the completely Byzantine world of computers and the Internet.


Well, if someone had somehow gotten their house wires damaged in a way that causes brownouts to neighbours, wouldn't the electric company be justified in cutting off the house?


I’m sure comcast is terrified that their users won’t be able to read my blog.


You are quite obviously speaking from the perspective as someone that wouldn’t be in a position to be making these calls.


Banning a large number of customers for an entire month? That doesn’t make economic sense; it’ll be cheaper to just pay a big cloud provider for protection.

(not to mention the number of false positives you'd get, etc etc)


And now some of your services don't work because you blocked an IP that turned out to be a cloud provider IP reused for a legit service.


I propose to make a special "reject" packet. When a host, let's say 1.1.1.1, sends such a packet to 2.2.2.2, all providers that see this packet MUST reject any traffic from 2.2.2.2 to 1.1.1.1. This is very easy yet very efficient, and allows a single host to withstand an attack of any size.

There is no need for any central authority and no need to maintain any lists.


And then that can be abused...


No, it cannot. It is well-thought.


There are 2^128 ipv6 addresses.

If you store 1 bit (banned/unbanned) + a unix timestamp (ban expiration) for each of those IPs, that requires more storage space than exists many billion times over.

Storing the block table you propose would require more memory than any router has ever had or ever will have.

An attacker could easily "flush" all entries in this table by, for example, banning a TB of ipv6 addresses from talking to them, surely resulting in all participating routers dropping other bans to store some of those.
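
For scale, the back-of-the-envelope arithmetic (assuming a generous 9 bytes per address):

    addresses = 2 ** 128          # every IPv6 address
    bytes_per_entry = 9           # 8-byte expiry timestamp + a flag
    total = addresses * bytes_per_entry
    print(f"{float(total):.2e} bytes")   # ~3.06e+39 bytes

That is astronomically more than all storage ever manufactured, which is why any real scheme has to aggregate by prefix, as the replies suggest.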


We can store an IP address with a mask (ban subnets instead of separate addresses). Also, IPv6 is so rarely used that I would ban the whole address space for the duration of the attack.

For example, if an attack is coming from a country where you don't have many paying customers, but where there are many infected devices due to the use of pirated, outdated software, it is easier to ban the whole country than to figure out who is infected and who is not.


> An attacker could easily "flush" all entries in this table by, for example, banning a TB of ipv6 addresses

We can set a limit of ban records per host to prevent it.


ban the entire /64. If banning the /64 is not enough, then ban the /48. If that is not enough, keep going up 4 bits until it is (most IPv6 allocations line up on a nibble boundary, hence the 4 bits)
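
With Python's ipaddress module the widening step is a one-liner (the prefix is made up for illustration):

    import ipaddress

    ban = ipaddress.ip_network("2001:db8:abcd:1234::/64")  # start with the offending /64
    ban = ban.supernet(new_prefix=48)                       # not enough? widen to the /48
    print(ban)                                              # 2001:db8:abcd::/48
    ban = ban.supernet(new_prefix=44)                       # still not enough? go up a nibble
    print(ban)                                              # 2001:db8:abc0::/44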


That actually sounds like a really good idea. This is already implemented in the physical world (in a much less efficient way) in the form of “no spam” stickers and registrations.

Is there a reason other than inertia for why it hasn’t been implemented?


The main problem is: how do you authenticate the request as legitimate? It's already possible to spoof headers and the source IP (in fact, major DDoS attacks use exactly this for reflection: spoof a DNS request as coming from 1.1.1.1 and get a much larger response sent TO 1.1.1.1 from wherever).


You can send back a reply with a token to confirm ban.


ISPs do not want to spend money for fighting against criminals.


That doesn’t sound convincing to me. I mean I understand they don’t want to spend money but if cost is the only barrier it seems like that could be overcome somehow by interested parties.


It's not the costs, it's that some ISPs like getting money from spammers and criminals, and carefully look the other way.

And the other ISPs like getting paid for DDoS mitigation, so they also look the other way. There's no money to be made fixing the underlying problem.


That would be giving away some of the secret sauce on the part of the cloud providers. They are selling security as (part of their) service. There are some community-shared lists of botnets of course, but they may not be very real-time or very up to date.


You're assuming that identification of attack traffic is 100% correct which is unfortunately not the reality.


Nothing "forces botnets to up their game", they just want to make money (or in some cases, "watch the world burn"); I don't see how any coordination whatsoever would diminish these motivations.


So the email spam solution? Doesn't that come with its own list of problems?

Also, stupid question from someone not that familiar with DDoS, can't you flood the target with requests even if the source address will be rejected? Or even if the IP packet has a falsified source address?


Yes.


It's worth noting that features like the one that enabled Rapid Reset are pushed into standards by the exact same companies, because they are needed for performance at their scale.

So in a way this was partially caused by the existence of insanely big tech companies that need such features.


Maybe I misunderstood the issue, but it sounds like Rapid Reset was not the cause.


Rapid Reset is the name given to the technique behind the attack. The cause is a flaw in HTTP/2 stream multiplexing that enables this technique.


The actual solutions are:

1) Egress filtering by the ISPs

2) Better malware resistance and vulnerability mitigation on easily-compromised appliance and IoT devices

But neither is going to happen. 1 is a coordination problem. It has to be all or nothing, which can only be compelled by law, and we have no global laws and no global law enforcement mechanism. Some countries inevitably don't care and the rest won't partition the entire Internet by permanently cutting them off. 2 would probably make the entire Internet of Things and a whole lot of home computing just not happen because it isn't economically feasible. Poor security effectively acts as a tacit tax. We all pay a little bit each, but the tax is collected by criminals instead of governments.

Note that even your proposed solution here only works if 1 happened. Otherwise, source IP spoofing easily defeats a blocklist.


The problem with this type of attack is that you can't really catch it as MITM DDoS protection.

You're not seeing any SYN flood, just a bunch of encrypted TCP connections (the equivalent of, say, a search crawler). Only after decryption at the load balancer do they become visible as one TCP stream carrying a thousand HTTP/2 streams.


For a side-hobby of mine (writing), I imagine what would happen if current trends would continue. Thus, big caveat, it's all just thought experiments, not realistic predictions of any kind.

For this particular scenario, the public Internet would get so bad ("enshitified") that people would tend to leave it alone. For essential public services, governments would set up their own networks disconnected from the Internet, where all devices and their connections must be authenticated to a person or corporation[^1]. Maybe something equivalent would exist for corporations and to enable e-commerce.

[^1] China works like this already, to a high degree.


You heard of New IP? The Huawei/Chinese plan to reform the Internet that keeps getting criticized for a variety of reasons. I haven't had the time to read the proposals properly, but the stuff about building trust directly into the network seems like it could solve this problem, at a price.

> Having security and trust be “intrinsic” to the network will require core layers to carry metadata about the users, applications and services being transported. If users need to register in order to have packets sent to their destination, the result is that network operators, and those who license the operators, can remove individual users’ access at any time.

https://dnsrf.org/.k-media/d3c1d810de1e98bdf7af7aa52406e837.... (critical of the proposal)


I've witnessed a few sustained (hours/days long) DDOS attacks that were straight up extortion: owners contacted with "give us money or we will keep your site offline".

Most of the time I see attacks lasting 15-20 minutes. I'm assuming it's either someone doing it "for the lulz" or some cyber warfare outfit testing their big guns.

I always consider the possibility of someone using DDOS to mask a more sophisticated attack.


Plenty of even quite-large websites just don't get attacked by DDoS attacks, because nobody has any particular reason to attack them.


You’re completely wrong.

All large sites regularly get attacked.

The average skiddie’s motivations are that they’re bored. So they DoS a site they use regularly just to see.

Heck they generally don’t even mean to cause damage per-se, and just think it’s a funny use of their evening.

You have to stop thinking DoS attacks are always particularly personal. They really often just aren’t, and it’s a monumental pain in the ass to be on the receiving end.


I run boring sites like government websites which say what kinds of recycling go in which color trash cans.

Well used, but never attacked.


Well, lucky you. Or unlucky me and everyone I know running a large website. Guess we’ll never know.


A spamhaus-like blacklist for botnet IPs is an interesting idea.

What if Google and Cloudflare collectively reverse-DoSed all the infected IPs, not by sending them any traffic, but simply by refusing to accept any connections from them to any part of their infrastructure?

Whoever is on those IPs will suddenly find that half the internet doesn't work anymore. Which is probably a good enough incentive for them to replace their router, format their PC, or whatever else is necessary to disinfect themselves.

In many parts of the world, landline IP allocations tend to be stable enough for this to have a real effect. Phones are a different story, but phones are also much less likely to be useful in a DDoS botnet. (The owner would immediately notice the sudden heat and data usage.)

If we're going to live in a world where a small number of companies own half the internet, at least they could use their power to do some good.


Google already does this. "Something on your network is causing unusual traffic, please fill in this captcha to continue".

And then you have to fill in a new captcha every 5 minutes or so just to keep using google maps/gmail/search.

It's kinda annoying, and usually the culprit is someone else who shares my IP, not me (ie. a school, university, workplace, open wifi).


For any googlers reading: This behaviour sometimes hits an ajax request (map data downloads when panning or zooming). The client side javascript then fails badly and the user sees a broken site rather than a captcha request.

Plz fix.


We don't need to share a block-list, but yes, blocking all traffic from open proxies (which nearly all the large attacks of the 2020s have used) is definitely part of the long-term plan. Any legitimate users of those proxies will experience some short-term pain, but they'll patch and life will go on.


> In many parts of the world, landline IP allocations tend to be stable enough for this to have a real effect.

And what about CGNAT?


In that scenario, it's on the ISP to clean their network of abuse, the same thing they would need to do if Gmail had blacklisted their IPs for spamming. After all, an ISP that can't connect to YouTube isn't going to stay in business for long.

People have been begging ISPs for ages to do a bit of egress filtering, for example, to prevent source address falsification. They've demonstrated time and again that they don't give a crap unless it affects their bottom line.


OK, but how should an ISP distinguish a good HTTP/2 connection from a bad one (I'm talking about this particular attack)? As far as I can tell, the DoS starts after the connection from bot to server is established, at which point the connection is fully encrypted. Should all ISPs MITM their clients to ensure that all traffic is good and proper?


Ever had your droplet suspended for using a vulnerable WordPress plugin?

Your droplet suddenly tries to log into somebody else's server 10 times a second. The target of the attack complains to DigitalOcean, "hey, one of your customers is trying to hack me!" and attaches a log of the login attempts. DigitalOcean assumes that the report was made in good faith, forwards it to you and immediately suspends your droplet. It won't be reactivated until you reply with evidence that you have at least tried to clean up the problem. If it happens again, you won't get off so easily.

I suppose that a similar system, in a more real-time fashion, could be set up between the maintainers of the blacklist (Google, Cloudflare, Amazon, etc.) and the ISPs. No need for the ISPs to sniff on everyone's traffic if they can rely on good-faith reports from the lion's mouth that somebody from port 52384 on 11.22.33.44 is DDoSing a Google property. Even with CGNAT, the port will identify the customer responsible.
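
The CGNAT lookup such a report would trigger is conceptually just this (a sketch; the log format and subscriber ID are invented):

    from datetime import datetime

    # Illustrative CGNAT translation log:
    # (public_ip, public_port, lease_start, lease_end, subscriber)
    NAT_LOG = [
        ("11.22.33.44", 52384,
         "2023-10-10T14:00:00+00:00", "2023-10-10T14:30:00+00:00", "subscriber-8841"),
    ]

    def subscriber_for(public_ip: str, public_port: int, timestamp: str):
        ts = datetime.fromisoformat(timestamp)
        for ip, port, start, end, sub in NAT_LOG:
            if (ip, port) == (public_ip, public_port) and \
               datetime.fromisoformat(start) <= ts <= datetime.fromisoformat(end):
                return sub
        return None

    print(subscriber_for("11.22.33.44", 52384, "2023-10-10T14:05:00+00:00"))  # subscriber-8841

This assumes the ISP keeps timestamped translation logs, which many already do.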


Proliferation of low cost computing is the cause of this, not big players being able to mitigate this.

This is not coming from "known botnet IPs"; it comes from random infected devices. Some aren't even doing this permanently, just one request from a device per day, and the botnet is already large enough to cause issues.


We could also treat it as a public security threat and act accordingly.


I think this is the key take away. Unfortunately world leaders are not tech savvy enough to even consider this a threat.


Yet. But we're getting there.


Which jurisdiction are you referring to with “we”?


Any that matters, I guess ("we" as in the collective of people).


> I'd really wish for them to come through some community coordinated list of botnet infected IPs or something.

The problem is that IP addresses are not a reliable identifier, especially for the kinds of folks whose routers have been infected by malware. Few ISPs hand out static IP addresses anymore. It's why online games no longer bother with IP bans: as soon as the target reboots their router they evade the ban, and some other poor sap on the same ISP gets stuck with the flagged IP.


DDoS attacks were growing in size and frequency before these companies started creating products to address them. They took down sites, demanded ransom, and cost a lot of money in lost business and hosting bills.

If you want to complain about an actual working solution, that's your right, but realize that without an alternate solution you're advocating for giving small gangs the ability to disrupt everyone else's lives on a whim.


This is akin to the argument that bike helmets make people less safe (and it invariably comes with a comment about the Dutch and their safety record).


It is like saying effective spam filters are bad for email as a distributed system.

It's the spam that killed email, not the filters.


It’s a prisoner's dilemma! The only way to win is for both service providers and “bad people” to not escalate. That’s not going to happen.


The typical way of dealing with "bad people" is to subject them to the criminal justice system (or vigilantism if the problem is bad enough and the criminal justice system is inadequate). This tends to reduce, but not eliminate, the misbehaving.

Improving the ability to track down and prosecute perpetrators tends to result in less anonymity/privacy, so that makes the problem challenging.

Thinking in the long/very-long term, we need to get more innovative with the underlying technology to mitigate abuse. I mentioned this effort https://named-data.net in another part of the thread.


> but I'd really wish for them to come through some community coordinated list of botnet infected IPs or something.

Using any kind of community-coordinated IP ban is useless and would hurt a lot of people; millions (or even billions) of devices have dynamic IP addresses.

You would not stop botnets from DDoSing you, and on top of that you'd block millions of legitimate users.


Do you remember the pre-DDoS-mitigation days? Botnets could easily bring down major, important sites and make them unavailable to users. This caused monetary loss and, depending on the site, could even cost lives. How is the previous state better than, well, not suffering from these problems?


> pay Google, Amazon or Cloudflare a protection tax.

Just FYI: Hetzner offers free DDoS protection: https://www.hetzner.com/unternehmen/ddos-schutz

I'm sure other hosting companies also offer it.


Doesn't really work for those types of attacks

> In this final layer, we filter out attacks in the form of SYN floods, DNS floods, and invalid packets. We are also able to flexibly adapt to other unique attacks and to reliably mitigate them.

Which means any legitimate HTTP/2 connection will go through just fine.

Even if such a connection then triggers hundreds of substreams.

The push for an end-to-end encrypted internet also means you can't really stop more advanced attacks. You could have just a few dozen hosts doing 20-30 connections each (i.e. "looking perfectly normal" to a DDoS protection provider) and still generate tens of thousands of HTTP/2 streams per second (e.g. 30 hosts × 25 connections × ~20 streams per second per connection is already 15,000 streams per second).

I'm speaking from experience mitigating an attack like this. Our DDoS provider was near-useless.
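
To make that concrete, the kind of per-connection heuristic a reverse proxy can apply looks roughly like this (a minimal sketch with assumed thresholds, not any vendor's actual logic): count client-initiated stream cancellations in a short sliding window and drop the connection once the rate looks abusive.

    import time
    from collections import defaultdict

    # Assumed thresholds; a real deployment would tune these.
    CANCEL_THRESHOLD = 100   # cancelled streams tolerated per window
    WINDOW_SECONDS = 1.0

    class RapidResetGuard:
        def __init__(self):
            self._cancels = defaultdict(list)   # connection id -> cancel timestamps

        def on_stream_reset(self, conn_id):
            """Record a client-initiated RST_STREAM; return True if the
            connection now looks like a rapid-reset flood and should be closed."""
            now = time.monotonic()
            recent = [t for t in self._cancels[conn_id] if now - t <= WINDOW_SECONDS]
            recent.append(now)
            self._cancels[conn_id] = recent
            return len(recent) > CANCEL_THRESHOLD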


For higher-layer attacks you need something like the "modified cryptominer in the browser" approach that Cloudflare and friends use now: those interstitial pages that pop up for a few seconds are doing mathematical hashing to burn processor time on your end, which greatly complicates the ability to DDoS.
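
The core trick is just a client-side proof-of-work challenge. A minimal hashcash-style sketch (assumed scheme and difficulty, not Cloudflare's actual implementation):

    import hashlib
    import itertools

    def solve_challenge(challenge, difficulty_bits=20):
        """Find a nonce whose SHA-256(challenge + nonce) falls below a target,
        i.e. has roughly `difficulty_bits` leading zero bits."""
        target = 1 << (256 - difficulty_bits)
        for nonce in itertools.count():
            digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify(challenge, nonce, difficulty_bits=20):
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    nonce = solve_challenge(b"server-issued-token")   # costs the client CPU time
    assert verify(b"server-issued-token", nonce)      # cheap for the server to check

Solving is expensive per request while verification is a single hash, which is exactly the asymmetry that makes high-volume abuse costly for the attacker.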


Only for small DDoS attacks; for larger ones they disable routing for your IP address. I guess they don't have the capacity to handle the big DDoS attacks nowadays.


Yep, and null-routing your IP is exactly what providers did in the days GP is longing for, and what they still do, especially outside of the big cloud providers.


> The fact that large cloud providers can handle huge DDoS attacks I think in the long run leads to a worse internet

Don't agree.

> the only solutions available are to pay Google, Amazon or Cloudflare a protection tax.

It's not.

> come through some community coordinated list of botnet infected IPs

How would that help?


A protection tax? You realize that DDoS protection costs the providers real money?


Yes, but cloud providers spread the cost of that protection across all their customers. Someone hosting their own website needs the same level of protection just for themselves.

DDoS protection is really the only thing you can't provide for yourself on your own machines on today's internet.


I don't think they do. There are a variety of DDoS attacks that require more expensive computing to detect.


Leave it to HN to find the fly in the ointment when Google is mentioned.


In fewer words, it’s DDoS attackers that make the internet a worse place.


Just like law enforcement forced the criminals to up their game, so the only option we have is to pay a tax?

Well, I wrote this comment to ridicule yours... but actually, that is what happened.


> Cloudflare a protection tax

$NET gives away DDoS protection for free to non-businesses.


I smell what you're stepping in here, but I grow more comfortable with the idea of big conglomerates continuing to improve their attack-mitigation efforts on behalf of their locales when I compare this to the concept of vaccines.

Vaccines inevitably lead to stronger viruses, but would you argue we should go back and not have begun to use them?

Cloudflare and Google may be some sites' only hope of staying alive in the event of network-driven attacks. I suppose this landscape is a double-edged sword.


[flagged]


There is Marek's disease, so you still need to show that GP is in the wrong.


One is about machines on the internet serving images and forum posts. This comment is low quality and is a form of name calling.


I was writing it with a "using antibiotics in absolutely every mundane product causes superbugs" energy, actually, which really is a problem.

