Blog: TLS 1.3, ESNI, DoH, interception... it's not that complicated 😉

Author: Vlad
Published on

Patrowl's blog - TLS 1.3, ESNI, DoH, interception

Although not exactly news anymore, I'm taking advantage of the vacations to talk about a topic that is often misunderstood: TLS 1.3.

With NoLimitSecu, we recorded an episode on it last year, but I find a few things missing, which you will find detailed below.

⚠️ This is a bit off topic, but keep in mind the deprecation of TLS 1.0 and 1.1 by almost everyone on the web between summer and fall 2020.


The subject has already been covered many times: these are the names of the most widely used encryption protocols, used in particular to encrypt web exchanges (HTTP). SSL (Secure Sockets Layer) is the old version and TLS (Transport Layer Security) the new one.

When a client (browser) connects to a website (yes, a bit of self-promotion 😉), if the access is not already encrypted, a redirection is generally made to the encrypted version of the site, because browsers will eventually only tolerate encrypted sites.

To put it very simply, SSL and TLS offer two main features:

  • Prove the identity of a service (website, mail service, API...) with a certificate mechanism. The server sends the client a certificate signed by a trusted certification authority, and the client's device is able to verify its authenticity;
  • Encrypt the exchanges thanks to various symmetric cipher suites such as the American standard AES (and key exchange mechanisms that I will not detail here). SSL is full of flaws and is therefore no longer recommended. The first versions of TLS are also vulnerable to various attacks, and it is recommended to use at least TLS 1.2.

For details, I refer you to "Security End of TLS 1.0 and TLS 1.1".

Server Name Indication / SNI and TLS

In the past, it was not possible to provide several different SSL-enabled services on the same IP address, at least not on the same port. On a given IP address and port 443 (the default port for HTTPS), it was for example not possible to host both an encrypted website and an SSL VPN.

Indeed, when negotiating the SSL encryption, the client:

  • Resolved the server's IP address from its domain name;
  • Connected to that IP address (on port 443, but let's skip that detail);
  • And received in return the signed certificate containing, among other things, a "Common Name / CN" field with the server name, allowing it to prove its identity.

The problem was that the server had no way of knowing which name the client was trying to reach, and could only associate a single certificate with its IP address, and therefore a single domain name, and therefore a single service (in fact this is partly false, because some tools such as OpenVPN allowed several services to be hosted, but that's not the point, that detection being done after the SSL/TLS negotiation).

We were then forced to dedicate an IP address per service (because it was not possible to use a port other than 443).

An acceptable solution was to use a certificate with a main name and also containing a "Subject Alternative Names / SAN" field, in which the names of the other services were put. But this solution neither scaled nor was dynamic (a certificate is signed for a long time, and having to generate a new one every time a service is added or removed is not practical).

The problem was quickly corrected with the "Server Name Indication / SNI" feature, an extension of the TLS protocol dating from 2003 that allows the client to specify, in its request, the domain name of the service to be reached. For the brave, I recommend reading RFC 3546.
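To make the mechanism concrete, here is a minimal sketch of my own (not from the article) building the bytes of the server_name extension as laid out in RFC 6066, the successor of RFC 3546; the hostname is just an example:

```python
import struct

def sni_extension(hostname: str) -> bytes:
    """Build the TLS server_name extension (RFC 6066 layout):
    extension_type = 0, then a ServerNameList holding one host_name entry."""
    name = hostname.encode("ascii")
    entry = struct.pack(">BH", 0, len(name)) + name        # name_type 0 = host_name
    server_name_list = struct.pack(">H", len(entry)) + entry
    return struct.pack(">HH", 0, len(server_name_list)) + server_name_list

# The hostname travels in clear inside the ClientHello, which is
# exactly the leak that ESNI (discussed later) sets out to fix.
ext = sni_extension("www.example.com")
print(ext.hex())
```

This is the blob the server reads before choosing which certificate to present.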

An analogy could be the following:

  • Before, the letter carriers had only the address and no name: there was only one possible recipient per postal address;
  • The SAN is like having a janitor who keeps a list of the names of the people at a given postal address and collects their mail from the letter carrier;
  • The SNI is equivalent to the letter carrier finally having the name, with each inhabitant managing his own name on his mailbox.

Since then, the SAN field has simply replaced the CN field, which is only supported for backward compatibility (!msg/security-dev/IGT2fLJrAeo/csf_1Rh1AwAJ)

Here is a diagram I took from CloudFlare:


Nowadays, the feature is widely used, especially by Cloud providers, SaaS providers and above all CDNs (Content Delivery Networks like Cloudflare, Akamai, CDNetworks, CloudFront...) that host hundreds of services (sites) on battalions of identical (or almost identical) servers.

Encrypt-then-MAC, MAC-then-Encrypt, I-understand-nothing-at-allllll...

In general, when encrypted messages are exchanged, in order to protect the integrity of the message (to guarantee it has not been modified), the encrypted message is accompanied by data allowing it to be authenticated. This is not authentication in the sense of a user (password, strong authentication...) but authentication of the message, i.e. proving that it has not been modified, that its integrity has not been altered.

In general (that's twice now; can we still say "in general"? 😆), it is a digest generated from the message (encrypted or not, we will see that later), technical data and sometimes random data in order to make replay impossible ("nonce" in English, or sometimes noted IV for "Initialization Vector").

The term used is "MAC" for Message Authentication Code.
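Concretely, an HMAC computed over a nonce plus the message is exactly such a MAC; a minimal stdlib sketch (key, nonce and message are of course illustrative):

```python
import hashlib
import hmac
import os

# A MAC authenticates the *message*, not the user: anyone holding the
# shared key can verify that the bytes were not altered in transit.
key = os.urandom(32)    # shared secret
nonce = os.urandom(16)  # random data to prevent replay
msg = b"transfer 100 EUR to Vlad"

tag = hmac.new(key, nonce + msg, hashlib.sha256).digest()

# The receiver recomputes the tag and compares in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, nonce + msg, hashlib.sha256).digest())

# A single changed byte invalidates the tag.
tampered = b"transfer 900 EUR to Vlad"
forged = hmac.compare_digest(tag, hmac.new(key, nonce + tampered, hashlib.sha256).digest())
print(ok, forged)  # True False
```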

There are several ways to perform this authentication (each is detailed just after):

  • I do nothing 👍;
  • I generate the digest of the plaintext message, then I encrypt the whole: MAC-then-Encrypt / Authenticate-then-Encrypt / MtE;
  • I generate the digest of the plaintext message, encrypt the message and send both (encrypted message + digest): Encrypt-and-MAC / Encrypt-and-Authenticate / E&M;
  • I encrypt my message, generate the digest of the ciphertext and send both (encrypted message + digest of the ciphertext): Encrypt-then-MAC / Encrypt-then-Authenticate / EtM.

Here are the most commonly cited examples:

  • IPsec does Encrypt-then-MAC
  • SSL (not TLS) does MAC-then-Encrypt
  • SSH does Encrypt-and-MAC

I do nothing

I'm not going to dwell on it: this is the worst case, as messages can be modified and/or replayed with no way of knowing. This is crap 😜.


Let's assume a plaintext message "msg" (or plaintext) and a random number "random" (or nonce).

MAC-then-Encrypt amounts to sending (watch out for the parentheses): Encrypt(msg concatenated with Hash(random + msg))

I find the Wikipedia diagrams confusing because of their use of Key1 and Key2, but they help to illustrate:


This ensures the integrity of the plaintext, but not of the ciphertext. Without decrypting, it is not possible to know if the message has been modified.

This mode is susceptible to several attacks.


Encrypt-and-MAC amounts to sending: Encrypt(msg) concatenated with Hash(random + msg)


The integrity of the ciphertext is not ensured, which makes chosen-ciphertext attacks possible. The integrity of the plaintext is assured, on the other hand, but checking it requires decrypting first, which can generate errors and be exploited.

Finally, depending on the implementation, if there is no counter in the "random" data, it is possible to carry out known-plaintext attacks on the digest.


Encrypt-then-MAC amounts to sending: Encrypt(msg) concatenated with Hash(random + Encrypt(msg))


This mode guarantees the integrity of the ciphertext, but if the hash algorithm is broken or weakened, it becomes possible to carry out tampering attacks on "random".

This is still the most robust mode to date.
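To see why EtM lets the receiver reject tampering before doing any decryption at all, here is a toy sketch of my own (the "cipher" is a throwaway SHA-256 keystream, purely illustrative and never to be used for real):

```python
import hashlib
import hmac
import os

def toy_encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    """Toy stream cipher: keystream = SHA-256 over key, nonce and a counter.
    Illustration only, NOT a real cipher."""
    stream = b""
    counter = 0
    while len(stream) < len(msg):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(m ^ s for m, s in zip(msg, stream))

def seal(enc_key, mac_key, nonce, msg):
    # Encrypt-then-MAC: the tag covers the *ciphertext*, so a receiver
    # can discard a forged message without ever decrypting it.
    ct = toy_encrypt(enc_key, nonce, msg)
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_(enc_key, mac_key, blob, msg_len):
    nonce, ct, tag = blob[:16], blob[16:16 + msg_len], blob[16 + msg_len:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("tag mismatch: ciphertext was modified")
    return toy_encrypt(enc_key, nonce, ct)  # XOR stream cipher: same op decrypts

enc_key, mac_key, nonce = os.urandom(32), os.urandom(32), os.urandom(16)
blob = seal(enc_key, mac_key, nonce, b"hello TLS")
print(open_(enc_key, mac_key, blob, 9))  # b'hello TLS'
```

Flip any ciphertext byte and open_() raises before decryption, which is exactly the property the other two modes lack.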

TLS 1.3

The last secure version of TLS being 1.2, it would be easy to believe that TLS 1.3 is a simple evolution, whereas it is in fact a real breakthrough; it would have been preferable to call it TLS 2.0, but naming is a complicated thing (see the video at the end, at 38:54).

This version brings several notable changes:

  • Better speed during the negotiation, by reducing the round trips between the client and the server to a single exchange (against two previously), named "1-RTT" (Round Trip Time). If the client has already connected to the server before, an optimization named "0-RTT" (Zero Round Trip Time Resumption) allows resuming a past connection;
  • Removal of all weak or risky cryptographic suites still supported by TLS 1.2. With TLS 1.3, you are obliged to use strong algorithms, whether for encryption, hashing or block cipher modes;
  • No more static keys in RSA and Diffie-Hellman key exchanges. Forward secrecy is now mandatory, i.e. the keys change throughout the exchange, and it is no longer possible to record the traffic, recover the key later and decrypt the traffic afterwards;
  • Cryptographic alternatives to the NIST and NSA recommendations, providing more confidence. This is a consequence of Dual_EC_DRBG, the pseudo-random number generation algorithm compromised by the NSA, standardized in FIPS 140-2 and widely distributed (see "Security NSA and PRNG", "FUN NSA backdoor in OpenSSL never worked (FIPS 140-2)", "Crypto NIST removes Dual EC DRBG (NSA) from its guide", "Security Dual EC DRBG all history / NSA"). The Curve25519 elliptic curve is supported and offers a free alternative to the NIST and NSA curves;
  • Similarly, the free symmetric encryption algorithm ChaCha20 and the asymmetric signature scheme EdDSA are supported, providing alternatives to those of NIST and the NSA;
  • Obligation to authenticate encrypted messages, with in particular 2 modes: GCM (Galois/Counter Mode) and CCM (Counter with CBC-MAC). For details, I refer you to the Wikipedia diagram, which is rather well done;
  • And many other adjustments: exchange optimizations, reduction of the amount of data exchanged in clear... Another long-debated difference is the possibility of intercepting flows by decrypting them. This is quite possible with TLS 1.3, and a protocol dedicated to this has even been added: eTLS (Enterprise TLS), sometimes called "TLS interception for big losers".

This protocol, or rather option of TLS 1.3, uses among other things a static Diffie-Hellman key and allows a third party to retrieve the encrypted traffic and a copy of this key. To put it simply, this disables forward secrecy. To put it even more simply: it's poop 💩😋.
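On the client side, refusing anything below TLS 1.3 is a one-liner; a sketch with Python's stdlib ssl module:

```python
import ssl

# Restrict a client context to TLS 1.3 only: older servers will then
# fail the handshake instead of silently negotiating a weaker version.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

# TLS 1.3 only defines AEAD suites (AES-GCM, ChaCha20-Poly1305...),
# so there is nothing weak left to disable by hand.
print([c["name"] for c in context.get_ciphers() if c["protocol"] == "TLSv1.3"])
```

The same `minimum_version` knob exists for server contexts, which is how you would apply the advice at the end of this article.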

If you want to do clean, normal, environmentally friendly and intelligent SSL/TLS interception, just do what you did before: pass all traffic through a proxy with a certificate authority that signs certificates on the fly (all proxies know how to do this, be it Bluecoat, Ironport or Zscaler) and deploy the public part of this certificate authority in the certificate store of your workstations, servers (which should not directly access the Internet anyway), smartphones... as a trusted root authority.

Here is a documentation from Symantec on "ethical" interception 😇:

On the other hand, you won't be able to put an IDS/IPS on your Internet-exposed infrastructure with traffic replication (TAP) to decrypt it without being inline (except by using eTLS, but I'll spare you that monstrosity). Frankly, the value of an IDS/IPS in this case seems very limited to me if you follow good practices (updates, segmentation, audits...) and if you have, for example, a WAF or equivalent terminating the encryption (or if it is terminated upstream, as with a CDN).

TLS 1.3 is therefore a very good protocol, but it still had two weaknesses:

  • To connect to a service, you have to resolve its domain name, which is done with the DNS protocol, which is not encrypted (no, DNSSEC does not encrypt DNS; it only ensures that the integrity of the response has not been altered);
  • The domain name you are trying to reach, located in the SNI field of TLS, is not encrypted, because it is present in the first client request, before the establishment of an encrypted channel. This information alone (the domain name) is sufficient to carry out espionage on a WiFi network or at a state scale, as well as censorship. Fortunately, ESNI solves this problem, as I will detail below.

Trusted Recursive Resolver / TRR

Before talking about DNS over HTTPS, we need to introduce a simple notion: trusted DNS resolvers (resolvers that lie are unfortunately frequent, even without talking about hacking). Basically, several browser vendors have partnered with companies like Cloudflare to create domain name resolution services with the guarantee that they won't modify the answers. Thus the browser, which previously used the DNS server configured in the operating system, can dispense with it and directly query trusted DNS resolution services.

This is simply a whitelist of trusted servers that act as a relay for DNS queries. They then relay the DNS request to the appropriate party.

In fact... there are two 😉:

I'll quickly gloss over the fact that these trusted servers allow (partial) geolocation, useful for CDNs; ideally, the server closest to the user is the one used (as in classic CDN operation).

DNS over HTTPS / DoH

This protocol, described in RFC 8484, requires support for HTTP/2 and its streams in order to avoid losing too much response time.

It is an encapsulation of DNS in HTTP over TLS. It is therefore the content of a classic DNS request that is sent over HTTP, encoded in base64url in the case of GET requests and without encoding in the case of POST requests.

Here is a tool in Perl (sorry) doing this type of request:

Otherwise, there is curl (in a recent version):

~# curl --doh-url

You will tell me that, in order to perform this domain name resolution over HTTPS, you must first perform a classic DNS query to obtain the IP address of the TRR server, a query which is not encrypted: a chicken-and-egg problem. But in the end, only the resolution of the DNS server's own name leaks, which reveals nothing about your real DNS queries. To have a perfect solution, you would have to hard-code the IP addresses of the servers, which seems unfeasible.

Encrypted Server Name Indication / ESNI

To every problem there is a solution, and it is once again an extension of TLS that solved the problem of domain names in clear text when connecting to a service: Encrypted Server Name Indication.

The host or company wishing to use ESNI must publish a DNS record containing a data structure with, in particular, a public key. From this public key, a symmetric key is derived and used to encrypt the domain name in the request.

Note that this potential future standard is still in draft form:

For example, here is the DNS record for Cloudflare (the data structure in red is in base64):

~# dig TXT +short


For the details, I found few pieces of source code detailing how to break down the structure; here is an example in Python:
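As a minimal sketch of my own (field layout taken from the early draft-ietf-tls-esni versions, and synthetic data rather than a live record, so treat it as an assumption):

```python
import base64
import struct

def parse_esni_keys(b64_record: str) -> dict:
    """Parse the first fields of an ESNIKeys structure
    (layout per the early ESNI drafts: uint16 version,
    4-byte checksum, then a length-prefixed key-share list)."""
    data = base64.b64decode(b64_record)
    version, = struct.unpack_from(">H", data, 0)   # uint16 version
    checksum = data[2:6]                           # 4-byte checksum
    keys_len, = struct.unpack_from(">H", data, 6)  # length of KeyShareEntry list
    return {"version": hex(version), "checksum": checksum.hex(),
            "keys_length": keys_len}

# Synthetic record for illustration: version 0xff01, dummy checksum, empty key list.
record = base64.b64encode(bytes.fromhex("ff01" + "deadbeef" + "0000")).decode()
print(parse_esni_keys(record))
```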

Since the goal is to hide the name of the visited site, it is strongly recommended to use one key for many services, and not one per service. As you can see, this feature is especially useful for, and advocated by, large hosting companies and CDNs like CloudFront. Here is an article from Cloudflare on the subject:

TRR, DoH, ESNI... All this greatly complicates the kinematics of connecting to a website and relies on few actors, but fortunately it is still possible to work with the old model 😀.

0-RTT and packet replay

Due to the optimization of the TLS exchange, it is possible to replay the first TLS packet sent, provided the attacker is able to intercept the traffic (WiFi...):

  • On the client side, the browser will report a network error, transparent to the user because it is handled by the browser, which replays the request;
  • On the server side, this specific request will be seen twice.

In fact, it is possible to replay any TLS packet:

The risks are limited because the cases of exploitation are very rare and most web applications add unique identifiers that cannot be replayed for sensitive requests such as transfers or payments.

As the risk is not zero, some CDNs like Cloudflare only respond to certain 0-RTT requests, such as GETs without parameters, and add a specific HTTP header: "Cf-0rtt-Unique: <unique value tied to the session key and the TLS negotiation>". For all other packets, on the other hand, nothing 😱.
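The application-level defense mentioned above (unique, single-use identifiers on sensitive requests) can be sketched in a few lines; names are illustrative, not from any real framework:

```python
# Reject replayed requests by remembering every identifier already seen.
# A real deployment would bound this set (TTL, sliding window) since
# 0-RTT replays arrive close together in time.
class ReplayGuard:
    def __init__(self):
        self._seen = set()

    def accept(self, request_id: str) -> bool:
        """True the first time an identifier is seen, False on replay."""
        if request_id in self._seen:
            return False
        self._seen.add(request_id)
        return True

guard = ReplayGuard()
print(guard.accept("payment-42"))  # True: first delivery is processed
print(guard.accept("payment-42"))  # False: the replayed 0-RTT copy is dropped
```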


As you can see, TLS 1.3 corrects many weaknesses of the previous versions and marks an important change with the abandonment of a backward compatibility that had become burdensome.

Ideally, you should only allow TLS 1.3 on all your services, but in order to avoid blocking certain non-compatible clients or tools, it is preferable to continue to allow TLS 1.2, or even 1.1 in certain cases.

If you have 47 minutes, I invite you to watch this presentation in English from the 2017 SSTIC conference, and the slides.
