When PortSwigger researcher James Kettle presented his 2019 research on HTTP Desync Attacks at DEF CON, the concept of HTTP Request Smuggling was re-popularized after its initial discovery in 2005. In short, the attack exploits a “disagreement” between front-end and back-end servers (or other intermediary devices like firewalls) as to where one request ends and another begins. This discrepancy can be used to bypass security measures and steal a victim’s authentication cookies/headers, craft Cross-Site Scripting attacks, or redirect a victim to a malicious domain.
Since I am not an expert on the topic, I won’t try to poorly re-explain a concept that has already been thoroughly documented by researchers, complete with example labs for interested readers to practice on. I will, however, use it as a stepping stone to talk about two topics I find important:
“A Constantly Evolving Threat Landscape”
You’ve probably heard this phrase in just about every cybersecurity job description posted to a public forum, and it sounds terribly cheesy. Our technologies are indeed constantly evolving, but it’s important not to focus only on what’s new. Revisiting and researching older attack vectors can be just as valuable to the cybersecurity community as digging into recently published software.
Technologies fall in and out of popularity over time, and not every company is always up to date with the latest security upgrades. It only takes one curious mind and some extra time to find a vulnerability in a piece of deprecated software or a protocol that hasn’t changed since the mid-’90s.
The Danger of Assumptions
As platforms and software change over time, developers can be prone to making assumptions about how one piece of technology interprets the data of another. It’s kind of like using an Oxford comma. The writer may have intended one meaning, but depending on how the sentence is formatted, the reader might understand it as something else. In the case of request smuggling, the front-end server parses an ambiguous request as a single request, whereas the back-end server parses the same bytes as two, because the two disagree on which header (Content-Length or Transfer-Encoding) defines where the body ends.
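To make that disagreement concrete, here is a minimal sketch of the classic “CL.TE” case: the same raw bytes parsed two ways. The hostname and the smuggled prefix are purely illustrative, and real servers do far more work than this; the point is only that honoring Content-Length versus Transfer-Encoding leaves different leftovers in the connection buffer.

```python
# One ambiguous HTTP/1.1 request carrying both length headers.
raw = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example.com\r\n"          # illustrative host, not from the article
    b"Content-Length: 13\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"SMUGGLED"
)

# Split headers from body at the first blank line.
head, _, rest = raw.partition(b"\r\n\r\n")

# A front end honoring Content-Length consumes exactly 13 bytes:
# it sees ONE complete request and forwards everything.
cl_leftover = rest[13:]

# A back end honoring Transfer-Encoding stops at the terminating "0" chunk,
# so the trailing bytes are treated as the START of a second request.
te_end = rest.index(b"0\r\n\r\n") + len(b"0\r\n\r\n")
te_leftover = rest[te_end:]

print(cl_leftover)  # b'' -> front end: nothing left over
print(te_leftover)  # b'SMUGGLED' -> back end: a prefix poisoning the next request
```

That leftover prefix is what gets prepended to the next (possibly a victim’s) request on the shared back-end connection, which is where the attacks described above come from.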
When you’re searching for vulnerabilities in the wild, it’s always important to keep in mind the assumptions a developer might have made when putting their product together or the assumptions a service team might have made when configuring their internal systems.
Did the developer assume that a specific form field would only receive positive integers from an end user? What happens when you enter a negative value?
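As a hypothetical illustration of that assumption (the function and prices below are invented for this sketch, not taken from any real product): if the quantity is only validated client-side, the server-side business logic happily computes totals nobody intended.

```python
PRICE_PER_ITEM = 20  # hypothetical unit price

def order_total(quantity: int) -> int:
    # The developer assumed quantity > 0 and added no server-side check.
    return PRICE_PER_ITEM * quantity

print(order_total(3))   # 60
print(order_total(-3))  # -60: a negative "total" the checkout flow never expected
```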
How about that software administrative page that should only be available in a pre-production environment? Did the developers remember to disable it before pushing an application to the live server?
If a company just upgraded their back end with the most cutting-edge databases, have they made sure all the requests coming in from their front end are processed the way they’re expecting?
Just because you’re expecting a technology to work a certain way doesn’t mean that’s always what happens. When people become reliant on programs or protocols that haven’t changed for years, they become comfortable, complacent, and don’t necessarily take the time to check how they interact with new integrations.
There are certain topics in security that may seem done to death, where you’d think there’s nothing left that decades of security researchers haven’t already found, but the reemergence and severity of HTTP Request Smuggling vulnerabilities contradicts that idea. Even if you’re newer to cybersecurity, or you’re interested in a part of the field that seems “oversaturated” by existing content, researching old vulnerabilities with a new perspective can potentially shed light on new findings.
May you research what you love and may your findings be plentiful,