But there are some disadvantages to this model. In the case of TLS (the most common standard used for sending encrypted data across the Internet, and the protocol your browser uses when visiting an https:// web site), layering TLS on top of TCP can delay the delivery of a web page.
That’s because TLS divides the data being transmitted into records of a fixed (maximum) size and then hands those records to TCP for transmission. TCP promptly divides those records up into segments which are then transmitted. Ultimately, those segments are sent inside IP packets which traverse the Internet.
In order to prevent congestion on the Internet and to ensure reliable delivery, TCP will only send a limited number of segments before waiting for the receiver to acknowledge that the segments have been received. In addition, TCP guarantees that segments are delivered to the application in order. Thus, if a packet is dropped somewhere between sender and receiver, it's possible for a whole bunch of segments to be held in a buffer waiting for the missing segment to be retransmitted before the buffer can be released to the application.
TLS and TCP
What this means for TLS is that a large record split across multiple TCP segments can encounter unexpected delays. TLS can only process complete records, so a single missing TCP segment delays the whole TLS record. For example, a full 16KB record spans roughly a dozen 1,500-byte packets; if any one of them is lost, the entire record waits for the retransmission.
At the start of a TCP connection, while TCP slow start is taking place, the record may be split across multiple segments that are delivered relatively slowly. Later in the connection, one of the segments that a TLS record has been split into may get lost, causing the record to be delayed until the missing segment is retransmitted.
Thus it's preferable not to use a fixed TLS record size but to adjust the record size as the underlying TCP connection spins up (and down in the case of congestion). Starting with a small record size helps match the record size to the segments that TCP is sending at the start of a connection. Once the connection is running the record size can be increased.
CloudFlare uses NGINX to handle web requests. By default NGINX does not support dynamic TLS record sizes. NGINX has a fixed TLS record size with a default of 16KB that can be adjusted with the ssl_buffer_size parameter.
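For reference, this is what the stock, unpatched behavior looks like in configuration. The sketch below assumes a typical server block; the certificate paths are placeholders and the 4k value is purely illustrative, not a recommendation:

    # Stock NGINX: one fixed TLS record size for the whole connection.
    # The paths and the 4k value below are illustrative placeholders.
    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/nginx/ssl/example.com.pem;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        # Default is 16k; a smaller fixed value reduces head-of-line blocking
        # at the cost of more per-record overhead on every record.
        ssl_buffer_size 4k;
    }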
Dynamic TLS Records in NGINX
We modified NGINX to add support for dynamic TLS record sizes and are open sourcing our patch. You can find it here. The patch adds parameters to the NGINX ssl module.
ssl_dyn_rec_size_lo: the TLS record size to start with. Defaults to 1369 bytes, designed to fit the entire record in a single TCP segment: 1369 = 1500 - 40 (IPv6) - 20 (TCP) - 10 (TCP timestamps) - 61 (max TLS overhead).

ssl_dyn_rec_size_hi: the TLS record size to grow to. Defaults to 4229 bytes, designed to fit the entire record in three TCP segments.

ssl_dyn_rec_threshold: the number of records to send before changing the record size.
Each connection starts with records of size ssl_dyn_rec_size_lo. After sending ssl_dyn_rec_threshold records, the record size is increased to ssl_dyn_rec_size_hi. After sending an additional ssl_dyn_rec_threshold records with size ssl_dyn_rec_size_hi, the record size is increased to ssl_buffer_size.
ssl_dyn_rec_timeout: if the connection idles for longer than this time (in seconds), the TLS record size is reduced to ssl_dyn_rec_size_lo and the logic above is repeated. If this value is set to 0, dynamic TLS record sizes are disabled and the fixed ssl_buffer_size is used instead.
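Putting the pieces together, a server block using the patched ssl module might look something like the sketch below. The directive names are the ones described above; the threshold and timeout values shown are illustrative assumptions rather than the patch's defaults, and the exact value syntax accepted may vary with the patch version:

    # Sketch of a configuration using the dynamic TLS record patch.
    # ssl_dyn_rec_threshold and ssl_dyn_rec_timeout values are assumed
    # for illustration, not taken from the patch defaults.
    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/nginx/ssl/example.com.pem;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        ssl_buffer_size        16k;   # upper bound the record size grows to
        ssl_dyn_rec_size_lo    1369;  # starting record size (one TCP segment)
        ssl_dyn_rec_size_hi    4229;  # intermediate record size (three segments)
        ssl_dyn_rec_threshold  40;    # records to send before growing (assumed value)
        ssl_dyn_rec_timeout    1;     # idle seconds before resetting to the low size
    }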