HTTP compression

{{Short description|Capability that can be built into web servers and web clients}}
{{HTTP}}
'''HTTP compression''' is a capability that can be built into [[web server]]s and [[web client]]s to improve transfer speed and bandwidth utilization.<ref>{{cite web|url=http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/d52ff289-94d3-4085-bc4e-24eb4f312e0e.mspx?mfr=true|title=Using HTTP Compression (IIS 6.0)|access-date=9 February 2010|publisher=Microsoft Corporation}}</ref>
 
[[HTTP]] data is [[Data compression|compressed]] before it is sent from the server: compliant browsers announce which compression methods they support to the server before downloading data in the appropriate format; browsers that do not support any compliant compression method download uncompressed data. The most common compression schemes include [[gzip]] and [[Brotli]]; a full list of available schemes is maintained by the [[Internet Assigned Numbers Authority|IANA]].<ref>RFC 2616, Section 3.5: "The Internet Assigned Numbers Authority (IANA) acts as a registry for content-coding value tokens."</ref> Additionally, third parties have developed new methods and included them in their products, such as the Google [[Shared Dictionary Compression for HTTP]] (SDCH) scheme implemented in the [[Google Chrome]] browser and used on Google servers.
 
There are two different ways compression can be done in HTTP. At a lower level, a Transfer-Encoding header field may indicate the payload of an HTTP message is compressed. At a higher level, a Content-Encoding header field may indicate that a resource being transferred, [[Web cache|cached]], or otherwise referenced is compressed. Compression using Content-Encoding is more widely supported than Transfer-Encoding, and some browsers do not advertise support for Transfer-Encoding compression to avoid triggering bugs in servers.<ref>[https://code.google.com/p/chromium/issues/detail?id=94730 'RFC2616 "Transfer-Encoding: gzip, chunked" not handled properly'], [[Chromium (browser)|Chromium]] Issue 94730</ref>
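
As an illustration (these messages are constructed for this article, not taken from a specification), a client that accepts hop-by-hop compression advertises it in the ''TE'' request field, and the server signals the applied coding in the ''Transfer-Encoding'' response field:

<syntaxhighlight lang="http">
GET /resource HTTP/1.1
Host: www.example.com
TE: gzip
Connection: TE

HTTP/1.1 200 OK
Transfer-Encoding: gzip, chunked
Content-Type: text/html; charset=UTF-8
</syntaxhighlight>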
 
==Compression scheme negotiation==
The negotiation is done in two steps, described in RFC 2616 and RFC 9110:
 
1. The [[web client]] advertises which compression schemes it supports by including a list of tokens in the [[HTTP request]]. For ''Content-Encoding'', the list is in a field called ''Accept-Encoding''; for ''Transfer-Encoding'', the field is called ''TE''.
<syntaxhighlight lang="http" highlight=3>
GET /encrypted-area HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate
</syntaxhighlight>
2. If the server supports one or more compression schemes, the outgoing data may be compressed by one or more methods supported by both parties. If this is the case, the server will add a ''Content-Encoding'' or ''Transfer-Encoding'' field to the HTTP response, listing the schemes used, separated by commas.
 
<syntaxhighlight lang="http" highlight=4>
HTTP/1.1 200 OK
Date: Mon, 26 Jun 2016 22:38:34 GMT
Content-Type: text/html; charset=UTF-8
Content-Encoding: gzip
</syntaxhighlight>
 
The [[web server]] is by no means obligated to use any compression method&nbsp;– this depends on the internal settings of the web server and also may depend on the internal architecture of the website in question.
 
In the case of SDCH, a dictionary negotiation is also required, which may involve additional steps, such as downloading a suitable dictionary from an external server.
 
==Content-Encoding tokens==
The official list of tokens available to servers and clients is maintained by IANA,<ref>{{cite web|url=https://www.iana.org/assignments/http-parameters/http-parameters.xhtml#content-coding|title=Hypertext Transfer Protocol Parameters - HTTP Content Coding Registry|publisher=IANA|access-date=18 April 2014}}</ref> and it includes:
 
*br – [[Brotli]], a compression algorithm specifically designed for HTTP content encoding, defined in {{IETF RFC|7932|link=no}} and implemented in all modern major browsers.
*[[compress]] – UNIX "compress" program method (historic; deprecated in most applications and replaced by gzip or deflate)
*deflate – compression based on the [[DEFLATE|deflate]] algorithm (described in {{IETF RFC|1951|link=no}}), a combination of the [[LZ77_and_LZ78#LZ77|LZ77]] algorithm and Huffman coding, wrapped inside the [[zlib]] data format ({{IETF RFC|1950|link=no}})
*exi – W3C [[Efficient XML Interchange]]
*[[gzip]] – GNU zip format (described in {{IETF RFC|1952|link=no}}). Uses the [[DEFLATE|deflate]] algorithm for compression, but the data format and the checksum algorithm differ from the "deflate" content-encoding. This method is the most broadly supported as of March 2011.<ref>{{cite web|url=http://www.vervestudios.co/projects/compression-tests/results|title=Compression Tests: Results|publisher=Verve Studios, Co|archive-url=https://web.archive.org/web/20120321182910/http://www.vervestudios.co/projects/compression-tests/results|archive-date=21 March 2012|access-date=19 July 2012}}</ref>
*[[Identity function|identity]] – No transformation is used. This is the default value for content coding.
*[[Pack200|pack200-gzip]] – Network Transfer Format for Java Archives<ref>{{cite web|url=https://jcp.org/en/jsr/detail?id=200|title=JSR 200: Network Transfer Format for Java Archives|publisher=The Java Community Process Program}}</ref>
*zstd – [[Zstandard]] compression, defined in {{IETF RFC|8478|link=no}}
 
In addition to these, a number of unofficial or non-standardized tokens are used in the wild by either servers or clients:
 
*[[bzip2]] – compression based on the free bzip2 format, supported by [[lighttpd]]<ref>{{cite web|url=http://redmine.lighttpd.net/projects/1/wiki/Docs_ModCompress|title=ModCompress - Lighttpd|publisher=lighty labs|access-date=18 April 2014}}</ref>
*[[Lempel–Ziv–Markov_chain_algorithm|lzma]] – compression based on (raw) LZMA is available in Opera 20, and in elinks via a compile-time option<ref>[http://elinks.or.cz/documentation/html/manual.html-chunked/ch01s07.html#CONFIG-LZMA elinks LZMA decompression]</ref>
*peerdist<ref>{{cite web|url=http://msdn.microsoft.com/en-us/library/dd304322%28v=PROT.10%29.aspx|title=[MS-PCCRTP]: Peer Content Caching and Retrieval: Hypertext Transfer Protocol (HTTP) Extensions|publisher=Microsoft|access-date=19 April 2014}}</ref> – Microsoft Peer Content Caching and Retrieval
*[[rsync]]<ref>{{cite web |title=rproxy: Protocol Definition for HTTP rsync Encoding |url=https://rproxy.samba.org/doc/protocol/protocol.html |website=rproxy.samba.org}}</ref> – [[Delta_encoding#Delta_encoding_in_HTTP|delta encoding in HTTP]], implemented by a pair of ''rproxy'' proxies.
*[[sdch]]<ref>{{cite web|url=http://lists.w3.org/Archives/Public/ietf-http-wg/2008JulSep/att-0441/Shared_Dictionary_Compression_over_HTTP.pdf|title=A Proposal for Shared Dictionary Compression Over HTTP|publisher=Google|last1=Butler|first1=Jon|author2=Wei-Hsin Lee|last3=McQuade|first3=Bryan|last4=Mixter|first4=Kenneth}}</ref><ref>{{cite web|url=https://groups.google.com/forum/#!forum/SDCH|title=SDCH Mailing List|publisher=Google Groups}}</ref> – Google Shared Dictionary Compression for HTTP, based on [[VCDIFF]] (RFC 3284)
*xpress – Microsoft compression protocol used by Windows&nbsp;8 and later for Windows Store application updates. [[LZ77_and_LZ78#LZ77|LZ77]]-based compression optionally using a Huffman encoding.<ref>{{cite web|url=https://msdn.microsoft.com/en-us/library/Hh554002.aspx|title=[MS-XCA]: Xpress Compression Algorithm|access-date=29 August 2015}}</ref>
*[[XZ Utils|xz]] – LZMA2-based content compression, supported by a non-official Firefox patch;<ref>{{cite web|url=https://wiki.mozilla.org/LZMA2_Compression|title=LZMA2 Compression - MozillaWiki|access-date=18 April 2014}}</ref> and fully implemented in mget since 2013-12-31.<ref>{{cite web|url=https://github.com/rockdaboot/mget|title=mget GitHub project page|website=[[GitHub]]|access-date=6 January 2017}}</ref>
 
==Servers that support HTTP compression==
*[[SAP NetWeaver]]
*[[Internet Information Services|Microsoft IIS]]: built-in or using third-party module
*[[Apache HTTP Server]], via '''[https://httpd.apache.org/docs/current/mod/mod_deflate.html mod_deflate]''' (despite its name, only supporting gzip<ref>{{cite web|url=http://httpd.apache.org/docs/2.4/mod/mod_deflate.html#supportedencodings|title=mod_deflate - Apache HTTP Server Version 2.4 - Supported Encodings}}</ref>), and '''[https://httpd.apache.org/docs/current/mod/mod_brotli.html mod_brotli]'''
*[[Hiawatha (web server)|Hiawatha HTTP server]]: serves pre-compressed files<ref>{{cite web|url=http://www.hiawatha-webserver.org/manpages|title=Extra part of Hiawatha webserver's manual}}</ref>
*[[Cherokee (Webserver)|Cherokee HTTP server]], with on-the-fly gzip and deflate compression
*[[Oracle iPlanet Web Server]]
*[[Zeus Web Server]]
*[[lighttpd]], via '''mod_compress''' and the newer '''mod_deflate''' (1.4.42+)
*[[nginx]] – built-in
*Applications based on [[Tornado (web server)|Tornado]], if "compress_response" is set to True in the application settings (for versions prior to 4.0, set "gzip" to True)
*[[HAProxy]]
*[[Varnish (software)|Varnish]] – built-in. Works also with [[Edge Side Includes|ESI]]
*[https://line.github.io/armeria/ Armeria] – Serving pre-compressed files<ref>{{cite web|url=https://line.github.io/armeria/server-http-file.html#serving-pre-compressed-files|title=Serving static files part of Armeria's documentation}}</ref>
*[[NaviServer]] – built-in, dynamic and static compression
*[[Caddy (web server)|Caddy]] – built-in via [https://caddyserver.com/docs/caddyfile/directives/encode encode]
 
Many [[content delivery network]]s also implement HTTP compression to speed up the delivery of resources to end users.
 
Compression in HTTP can also be achieved by using the functionality of [[server-side scripting]] languages like [[PHP]], or programming languages like [[Java (programming language)|Java]].
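
As a rough sketch of how this looks in application code (a minimal Java example using the JDK's built-in <code>com.sun.net.httpserver</code> package; the port, handler, and page content are invented for illustration), the program inspects ''Accept-Encoding'' and, if gzip is acceptable, sets ''Content-Encoding'' and wraps the response body in a <code>GZIPOutputStream</code>:

<syntaxhighlight lang="java">
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipHandlerDemo {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            byte[] body = "<html><body>Hello, compressed world!</body></html>"
                    .getBytes(StandardCharsets.UTF_8);
            // Step 1 of the negotiation: read the client's advertised codings.
            String acceptEncoding = exchange.getRequestHeaders().getFirst("Accept-Encoding");
            boolean gzipOk = acceptEncoding != null && acceptEncoding.contains("gzip");
            exchange.getResponseHeaders().set("Content-Type", "text/html; charset=UTF-8");
            if (gzipOk) {
                // Step 2: announce the applied coding and compress the body on the fly.
                exchange.getResponseHeaders().set("Content-Encoding", "gzip");
                exchange.sendResponseHeaders(200, 0); // length 0 = unknown, chunked transfer
                try (OutputStream out = new GZIPOutputStream(exchange.getResponseBody())) {
                    out.write(body);
                }
            } else {
                // Fall back to the identity coding for clients without gzip support.
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            }
        });
        server.start();
    }
}
</syntaxhighlight>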
 
Various online tools exist to verify a working implementation of HTTP compression. These online tools usually request multiple variants of a URL, each with different request headers (with varying Accept-Encoding content). HTTP compression is considered to be implemented correctly when the server returns a document in a compressed format.<ref>{{cite web|url=https://httptools.dev/gzip-brotli-check|title=How does the gzip compression check work?|website=httptools.dev|access-date=10 April 2022}}</ref> By comparing the sizes of the returned documents, the effective compression ratio can be calculated (even between different compression algorithms).
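
A minimal version of such a check can also be scripted directly. The following Java sketch (the target URL and output format are illustrative, not taken from any of the cited tools) requests the same resource with different ''Accept-Encoding'' values and compares the returned ''Content-Encoding'' headers and body sizes:

<syntaxhighlight lang="java">
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CompressionCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical target URL; pass a real one on the command line.
        String url = args.length > 0 ? args[0] : "https://www.example.com/";
        HttpClient client = HttpClient.newHttpClient();

        // Request the same URL twice: once allowing gzip, once forcing identity.
        for (String encoding : new String[] {"gzip", "identity"}) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Accept-Encoding", encoding)
                    .build();
            // HttpClient does not decompress automatically, so body length reflects
            // the bytes actually transferred for each Accept-Encoding variant.
            HttpResponse<byte[]> response =
                    client.send(request, HttpResponse.BodyHandlers.ofByteArray());
            System.out.printf("Accept-Encoding: %-8s -> Content-Encoding: %-8s body: %d bytes%n",
                    encoding,
                    response.headers().firstValue("Content-Encoding").orElse("(none)"),
                    response.body().length);
        }
        // Comparing the two body sizes gives the effective compression ratio.
    }
}
</syntaxhighlight>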
 
==Problems preventing the use of HTTP compression==
A 2009 article by Google engineers Arvind Jain and Jason Glasgow states that more than 99 person-years are wasted<ref name="google-use-compression">{{cite web|url=https://developers.google.com/speed/articles/use-compression|title=Use compression to make the web faster|access-date=22 May 2013|publisher=Google Inc.}}</ref> daily due to increased page load time when users do not receive compressed content. This occurs when anti-virus software interferes with connections to force them to be uncompressed, where proxies are used (with overcautious web browsers), where servers are misconfigured, and where browser bugs stop compression being used. Internet Explorer 6, which drops to HTTP 1.0 (without features like compression or pipelining) when behind a proxy&nbsp;– a common configuration in corporate environments&nbsp;– was the mainstream browser most prone to falling back to uncompressed HTTP.<ref name="google-use-compression" />
 
Another problem found while deploying HTTP compression on a large scale is due to the '''deflate''' encoding definition: while HTTP 1.1 defines the '''deflate''' encoding as data compressed with deflate (RFC 1951) inside a [[zlib]] formatted stream (RFC 1950), Microsoft server and client products historically implemented it as a "raw" deflated stream,<ref>{{cite web|url=https://stackoverflow.com/questions/9170338/why-are-major-web-sites-using-gzip/9186091#9186091|title=deflate - Why are major web sites using gzip?|publisher=Stack Overflow|access-date=18 April 2014}}</ref> making its deployment unreliable.<ref>{{cite web|url=http://www.vervestudios.co/projects/compression-tests/|title=Compression Tests: About|publisher=Verve Studios|archive-url=https://web.archive.org/web/20150102111552/http://www.vervestudios.co/projects/compression-tests/|archive-date=2 January 2015|access-date=18 April 2014}}</ref><ref>{{cite web|url=http://zoompf.com/blog/2012/02/lose-the-wait-http-compression|title=Lose the wait: HTTP Compression|publisher=Zoompf Web Performance|access-date=18 April 2014}}</ref> For this reason, some software, including the Apache HTTP Server, only implement '''gzip''' encoding.
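
The difference is only in the stream wrapper, as the following Java sketch illustrates (class and variable names are invented for illustration): <code>java.util.zip.Deflater</code> produces a zlib-wrapped stream by default and a raw stream when constructed with <code>nowrap</code> set to true:

<syntaxhighlight lang="java">
import java.util.Arrays;
import java.util.zip.Deflater;

public class DeflateWrapperDemo {
    static byte[] deflate(byte[] input, boolean raw) {
        // nowrap = true  -> "raw" DEFLATE stream (RFC 1951), as historically sent by some products
        // nowrap = false -> zlib-wrapped stream (RFC 1950), as HTTP/1.1 defines the "deflate" coding
        Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, raw);
        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[1024];
        int n = 0;
        while (!deflater.finished()) {
            n += deflater.deflate(buf, n, buf.length - n);
        }
        deflater.end();
        return Arrays.copyOf(buf, n);
    }

    public static void main(String[] args) {
        byte[] data = "Hello, HTTP compression!".getBytes();
        byte[] zlibWrapped = deflate(data, false);
        byte[] rawDeflate = deflate(data, true);
        // The zlib-wrapped stream carries a two-byte header (typically starting 0x78)
        // and a trailing checksum; a decoder expecting the other form will fail.
        System.out.printf("zlib-wrapped: first byte 0x%02x, %d bytes%n", zlibWrapped[0], zlibWrapped.length);
        System.out.printf("raw deflate:  first byte 0x%02x, %d bytes%n", rawDeflate[0], rawDeflate.length);
    }
}
</syntaxhighlight>

A client that cannot tell which form it will receive must either probe the first bytes or, as many implementations do, avoid the deflate coding entirely in favour of gzip.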
 
==Security implications==
{{main article|CRIME|BREACH}}
 
Compression allows a form of [[chosen plaintext]] attack to be performed: if an attacker can inject any chosen content into the page, they can know whether the page contains their given content by observing the size increase of the encrypted stream. If the increase is smaller than expected for random injections, it means that the compressor has found a repeat in the text, i.e. the injected content overlaps the secret information. This is the idea behind CRIME.
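
The effect can be reproduced outside of TLS entirely. In the following Java sketch (the secret and the guesses are made up for illustration), a guess that matches the secret compresses to fewer bytes than one that does not, which is exactly the signal a CRIME-style attacker measures:

<syntaxhighlight lang="java">
import java.util.zip.Deflater;

public class CompressionOracleDemo {
    // Compress the input with DEFLATE and return the compressed size in bytes.
    static int compressedSize(String s) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(s.getBytes());
        deflater.finish();
        byte[] out = new byte[1024];
        int total = 0;
        while (!deflater.finished()) {
            total += deflater.deflate(out);
        }
        deflater.end();
        return total;
    }

    public static void main(String[] args) {
        // A page body containing a secret token, standing in for an HTTP response.
        String secret = "csrf_token=a7f3d9";

        // The attacker injects two different guesses and observes the compressed size.
        String matching = secret + "&injected=csrf_token=a7f3d9";    // overlaps the secret
        String nonMatching = secret + "&injected=csrf_token=zzzzzz"; // does not overlap

        System.out.println("matching guess:     " + compressedSize(matching) + " bytes");
        System.out.println("non-matching guess: " + compressedSize(nonMatching) + " bytes");
        // The matching guess typically compresses smaller because DEFLATE's LZ77
        // stage replaces the repeated substring with a short back-reference.
    }
}
</syntaxhighlight>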
 
In 2012, a general attack against the use of data compression, called [[CRIME]], was announced. While the CRIME attack could work effectively against a large number of protocols, including but not limited to TLS, and application-layer protocols such as SPDY or HTTP, only exploits against TLS and SPDY were demonstrated and largely mitigated in browsers and servers. The CRIME exploit against HTTP compression has not been mitigated at all, even though the authors of CRIME have warned that this vulnerability might be even more widespread than SPDY and TLS compression combined.
 
In 2013, a new instance of the CRIME attack against HTTP compression, dubbed BREACH, was published. A BREACH attack can extract login tokens, email addresses or other sensitive information from TLS encrypted web traffic in as little as 30 seconds (depending on the number of bytes to be extracted), provided the attacker tricks the victim into visiting a malicious web link.<ref name=Gooin20130801>{{cite web|last=Goodin|first=Dan|title=Gone in 30 seconds: New attack plucks secrets from HTTPS-protected pages |url=https://arstechnica.com/security/2013/08/gone-in-30-seconds-new-attack-plucks-secrets-from-https-protected-pages/|work=Ars Technica|publisher=Condé Nast|access-date=2 August 2013|date=1 August 2013}}</ref> All versions of TLS and SSL are at risk from BREACH regardless of the encryption algorithm or cipher used.<ref>{{cite web|last=Leyden|first=John|title=Step into the BREACH: New attack developed to read encrypted web data |url=https://www.theregister.co.uk/2013/08/02/breach_crypto_attack/|work=The Register|access-date=2 August 2013|date=2 August 2013}}</ref> Unlike previous instances of [[CRIME (security exploit)|CRIME]], which can be successfully defended against by turning off TLS compression or SPDY header compression, BREACH exploits HTTP compression which cannot realistically be turned off, as virtually all web servers rely upon it to improve data transmission speeds for users.<ref name=Gooin20130801/>
 
As of 2016, the TIME attack and the HEIST attack are public knowledge.<ref>{{cite web|last=Sullivan|first=Nick|title=CRIME, TIME, BREACH and HEIST: A brief history of compression oracle attacks on HTTPS |url=https://www.helpnetsecurity.com/2016/08/11/compression-oracle-attacks-https/|access-date=16 August 2016|date=11 August 2016}}</ref><ref>{{cite web|last=Goodin|first=Dan|title= HEIST exploit — New attack steals SSNs, e-mail addresses, and more from HTTPS pages|url=https://arstechnica.com/security/2016/08/new-attack-steals-ssns-e-mail-addresses-and-more-from-https-pages/|access-date=16 August 2016|date=3 August 2016}}</ref><ref>{{cite web|last=Be'ery|first=Tal|title=A Perfect Crime? TIME will tell.|url=https://www.owasp.org/images/e/eb/A_Perfect_CRIME_TIME_Will_Tell_-_Tal_Beery.pdf}}</ref><ref>{{cite web|last=Vanhoef|first=Mathy|title=HEIST: HTTP Encrypted Information can be Stolen through TCP-windows|url=https://www.blackhat.com/docs/us-16/materials/us-16-VanGoethem-HEIST-HTTP-Encrypted-Information-Can-Be-Stolen-Through-TCP-Windows-wp.pdf}}</ref>
 
==References==
{{Reflist}}
 
==External links==
*{{IETF RFC|2616|link=no}}: Hypertext Transfer Protocol – HTTP/1.1
*{{IETF RFC|9110|link=no}}: HTTP Semantics
*[https://www.iana.org/assignments/http-parameters HTTP Content-Coding Values] by Internet Assigned Numbers Authority
*[http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs:Modcompress Compression with lighttpd]
*[http://www.codinghorror.com/blog/2004/08/http-compression-and-iis-6-0.html Coding Horror: HTTP Compression on IIS 6.0] {{Webarchive|url=https://web.archive.org/web/20140206020708/http://www.codinghorror.com/blog/2004/08/http-compression-and-iis-6-0.html |date=2014-02-06 }}
*{{webarchive |url=https://web.archive.org/web/20110716033901/http://www.15seconds.com/Issue/020314.htm |date=July 16, 2011 |title=15 Seconds: Web Site Compression }}
*[http://www.serverwatch.com/tutorials/article.php/3514866 Using HTTP Compression] {{Webarchive|url=https://web.archive.org/web/20160314155152/http://www.serverwatch.com/tutorials/article.php/3514866 |date=2016-03-14 }} by Martin Brown of Server Watch
*[http://www.http-compression.com/ HTTP Compression]: resource page by the founder of VIGOS AG, Constantin Rack
*[http://www.devshed.com/c/a/PHP/Using-HTTP-Compression-in-PHP-Make-Your-Web-Pages-Load-Faster/ Using HTTP Compression in PHP]{{Dead link|date=January 2020 |bot=InternetArchiveBot |fix-attempted=yes }}
*[https://web.archive.org/web/20120430023716/https://banu.com/blog/38/dynamic-and-static-http-compression-with-apache-httpd/ Dynamic and static HTTP compression with Apache httpd]