How Modern Encryption Standards May Impact Your Security Strategy
As modern encryption continues to evolve, so do the security strategies built around monitoring, detecting, and stopping malicious traffic. With Transport Layer Security (TLS) version 1.3 now gaining broad adoption, how might your security program be impacted? According to the Internet Engineering Task Force, roughly 30% of current traffic from the major web browsers already uses TLS 1.3. TLS 1.3 brings several improvements over previous versions, including better performance and mandatory Perfect Forward Secrecy, among other features. Understanding an encryption standard by reading the RFC can be daunting, so let us demystify version 1.3 and talk about how it could impact your organization.
There are many solid reasons a security program may want to inspect or decrypt TLS traffic. According to a Gartner report published in December 2019, some of the top reasons are: Network Traffic Analysis (NTA), data loss prevention (DLP), web application firewalls (WAF), and application and network performance monitoring. But as we will see, this sort of inspection depends heavily on the type of encryption used.
The previous TLS standard, version 1.2, brought with it an optional feature called Perfect Forward Secrecy (PFS). PFS disallows static keys, instead using ephemeral keys that are generated for each session and then discarded. This was a huge leap in encryption technology: it ensures that even if a server's long-term private key is later disclosed, the session keys protecting previously recorded traffic cannot be recovered, vastly reducing the value of passive interception and of certain Man-In-The-Middle (MITM) attacks. While this was an excellent feature, it also changed the ways in which traffic could be monitored by security tools. In TLS 1.3, PFS is mandatory.
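To see why ephemeral keys provide forward secrecy, consider this toy Diffie-Hellman Ephemeral (DHE) sketch. The parameters and helper below are purely illustrative (a classroom-sized exchange, not anything a real TLS stack would use): each call generates fresh private values, derives a session key, and throws the private values away, so no long-term secret exists that could later decrypt a recorded session.

```python
# Toy finite-field Diffie-Hellman Ephemeral (DHE) sketch -- illustrative only.
# Real TLS 1.3 uses standardized named groups (e.g. ffdhe2048) or elliptic
# curves such as x25519; the prime below is just large enough for a demo.
import secrets
import hashlib

P = 2**127 - 1  # a Mersenne prime; NOT a vetted DH group, demo only
G = 3           # toy generator

def dhe_session() -> str:
    """Run one ephemeral exchange; fresh private keys are used every call."""
    a = secrets.randbelow(P - 2) + 1   # client's ephemeral private key
    b = secrets.randbelow(P - 2) + 1   # server's ephemeral private key
    A = pow(G, a, P)                   # client's public value, sent in clear
    B = pow(G, b, P)                   # server's public value, sent in clear
    client_secret = pow(B, a, P)       # both sides arrive at the same value
    server_secret = pow(A, b, P)
    assert client_secret == server_secret
    # Derive a session key, after which a and b are simply discarded.
    return hashlib.sha256(str(client_secret).encode()).hexdigest()
```

Because `a` and `b` never leave the function, an attacker who records the ciphertext and later steals the server's certificate key still has nothing that reproduces the session key — which is exactly what breaks the passive-decryption model discussed next.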
Typically, security tools that monitor the contents of encrypted traffic can use a passive method. This normally involves using a port span or traffic aggregator to make a copy of all traffic and send it to the monitoring tool. The private keys of local servers are loaded into the monitoring tool, which allows it to decrypt sessions without sitting inline with the traffic. This method is crucial to many organizations because it allows for bulk decryption of traffic at a single point of monitoring. In previous versions of TLS, this worked well. In TLS 1.3, it no longer works at all. A new strategy will be needed that performs decryption at the point of termination, which means using a forward or reverse proxy, or some type of agent that can relay the session keys to the decryption tool. For mature organizations that have established robust bulk decryption strategies, this will require a large-scale change. Compliance initiatives that require traffic inspection, such as PCI-DSS, could also be impacted.
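One concrete mechanism for relaying session keys is the NSS key-log format, which tools such as Wireshark can consume. As a sketch of the idea, Python's `ssl` module (3.8+, built against OpenSSL 1.1.1 or later) can write per-handshake secrets to a log file via `SSLContext.keylog_filename`; the file path here is a hypothetical example, and a production key-relay agent would of course transport these secrets securely rather than to a local file.

```python
# Sketch: exporting TLS session secrets in the NSS key-log format so an
# out-of-band decryption tool can decrypt captured TLS 1.3 traffic without
# the server's static private key. Requires Python 3.8+ / OpenSSL 1.1.1+.
import os
import ssl
import tempfile

keylog_path = os.path.join(tempfile.gettempdir(), "tls_keys.log")  # hypothetical path

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3   # require TLS 1.3
context.keylog_filename = keylog_path              # secrets appended per handshake

# Any connection made with this context now appends lines such as
# "CLIENT_HANDSHAKE_TRAFFIC_SECRET <client_random> <secret>" to the log,
# which a passive monitor can use in place of a static server key.
```

The obvious caveat is that the key log is itself highly sensitive: whatever relays it to the decryption appliance becomes part of your trust boundary.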
The enforcement of PFS comes from the key exchange algorithms the TLS 1.3 standard allows. Static RSA and static Diffie-Hellman key exchange are no longer permitted; Diffie-Hellman Ephemeral (DHE) and Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) are now required. In addition, the number of cipher suites the protocol supports has been reduced from 37 to 5, which greatly simplifies working out which suite clients and servers will negotiate. Lastly, the weakest hash allowed in version 1.3 is SHA-256.
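You can see the slimmed-down suite list for yourself. The snippet below, a quick check rather than an audit tool, asks the local OpenSSL build which TLS 1.3 cipher suites it enables; which of the five appear depends on how OpenSSL was compiled.

```python
# List the TLS 1.3 cipher suites enabled by the local OpenSSL build.
# TLS 1.3 defines only five suites, all AEAD ciphers paired with
# SHA-256 or SHA-384 hashes.
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

tls13_suites = sorted(
    c["name"] for c in context.get_ciphers() if c["protocol"] == "TLSv1.3"
)
print(tls13_suites)
# Typically a subset of: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384,
# TLS_CHACHA20_POLY1305_SHA256 (the two AES-CCM suites are rarely enabled).
```

Compare that output with the dozens of suite names a TLS 1.2 context reports and the simplification is hard to miss.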
These changes are mostly wins for the security of communication, but they can disrupt security strategies already in place. Support for the newer cipher suites, stronger hashes, and new decryption techniques will need to be adopted to maintain the same functionality and visibility. With legacy systems, this could be challenging.
But that is not all TLS 1.3 brings to the table. It also brings performance enhancements that some see as a “double-edged sword.” The big performance improvement comes from a reduction in round-trip time (RTT): without getting into the technical weeds, TLS 1.3 cuts the handshake needed to establish an encrypted session from two round trips to one. This frees up resources and reduces connection latency. TLS 1.3 also adds an optional 0-RTT mode, in which a pre-shared key from an earlier session is used to resume it and send “early data” immediately. This makes encrypted sessions faster still, but critics note that it presents security risks: 0-RTT data does not benefit from PFS and is susceptible to replay attacks.
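The replay concern is easy to illustrate with a toy model. The names below (`client_early_data`, `accept_early_data`, the hard-coded `PSK`) are purely hypothetical, not a real TLS API: a client authenticates its early data with a key derived from a pre-shared key, and a server that only checks that authentication will accept a byte-for-byte replay of the same message.

```python
# Toy sketch of the 0-RTT replay concern -- not real TLS. A client MACs
# its "early data" with a PSK-derived key; a server that only verifies the
# MAC happily accepts an attacker's byte-for-byte replay of the message.
import hashlib
import hmac

PSK = b"resumption-secret-from-a-previous-session"  # illustrative value

def client_early_data(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag computed under the pre-shared key."""
    tag = hmac.new(PSK, payload, hashlib.sha256).digest()
    return payload + tag

def accept_early_data(message: bytes) -> bool:
    """Server side: accept any message whose tag verifies -- no replay check."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(PSK, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

msg = client_early_data(b"GET /transfer?amount=100")
assert accept_early_data(msg)   # legitimate request accepted
assert accept_early_data(msg)   # ...and so is an attacker's replay
```

Real deployments mitigate this by restricting 0-RTT to idempotent requests, or with single-use tickets and anti-replay caches — which is why many servers leave 0-RTT disabled entirely.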
With all this in mind, what is the best way to evolve along with the new standard? One proposed model is “Decrypt Once, Inspect Many” (DOIM). This involves creating a “decryption zone” where traffic is decrypted once and then sent to multiple inspection tools, or to a chain of security devices, within the zone. If passive inspection is absolutely needed, look for products that can relay session keys to your decryption appliances; these may come in the form of server-based agents or an Application Delivery Controller (ADC). Further complications arise when TLS 1.3 is combined with other HTTP security features, such as HTTP Public Key Pinning (HPKP) and secure DNS, which can interfere with the newly required proxies, so these interactions will need to be taken into account.
As we move forward with creating and standardizing stronger encryption protocols, we must gauge the impact they have on the security teams who rely on monitoring this traffic to keep organizations safe. Understanding how these new standards work can help you plan how your security monitoring strategy may have to evolve.