Heartbleed – How Did Internet Security Almost Bleed Out?

Today marks the one-month anniversary of the devastating Heartbleed vulnerability: one month ago today, Google first notified the OpenSSL development team of the flaw. From the start, CVE-2014-0160 was not just another software vulnerability. No, this one was big. A vulnerability of epic proportions. Who would've thought that a simple buffer over-read could threaten to undermine the security of the Internet? As you know by now, Heartbleed allows attackers to read up to 64KB of server memory per heartbeat request. What exactly is contained in that 64KB? Well, that's a little random. Depending on where the heartbeat payload sits within server memory, the leak could reveal cryptographic keys, usernames and passwords, email messages, and a multitude of other sensitive information. How could this possibly happen? Looking back, a series of cascading failures is to blame.

  • Let's start with the TLS Heartbeat Extension protocol defined in RFC 6520. The TLS Heartbeat Extension is designed to keep a TLS connection alive and verified without the need to renegotiate the connection every time. The client sends a heartbeat payload to the server, and the server responds with the exact same payload in order to verify the connection (a sketch of the message layout follows this list). But why was the heartbeat payload designed as a variable-length field? And why would the heartbeat payload possibly need to be a whopping 64KB in length? Wouldn't a fixed-length field of 64 bytes have been more than sufficient? Or was the heartbeat payload designed to covertly transfer Tolstoy's War and Peace? Defining a fixed-length heartbeat payload field of 64 bytes would've simplified the application code and likely prevented the Heartbleed vulnerability. Ironically, the "Security Considerations" section of RFC 6520 states that "this document does not introduce any new security considerations." Oops.
  • What about the programmers? OpenSSL development is "volunteer-driven" and is performed by a staff of eight programmers. The developers perform an incredible service to the Internet at large, providing critical software that is used to secure electronic commerce, financial transactions, and everything else that must be encrypted over the World Wide Web. Recently a consortium of more than a dozen major technology corporations consisting of Amazon, Cisco, Dell, Facebook, Fujitsu, Google, IBM, Intel, Microsoft, NetApp, Rackspace, Qualcomm and VMware pledged $100,000 apiece per year for the next three years to help fund open source projects such as OpenSSL. Will this help solve the problem? Yes. Will this solve the problem completely? No. Technology corporations boast an impressive stable of well-paid developers, yet critical vulnerabilities are still identified within commercial software at an alarming rate. As long as programmers are human, mistakes will be made and critical vulnerabilities will be introduced into application code.
  • What about the programming language? Like many open source software components, OpenSSL is written in the C programming language. One of the reasons the C programming language is so powerful is its direct memory management: C memory allocation and pointers give programmers incredible control over program execution. Unfortunately, these very same features make the C programming language extremely dangerous. Common C programming mistakes can lead to critical vulnerabilities such as buffer overflows and, in the case of the Heartbleed vulnerability, buffer over-reads (a minimal over-read example appears after this list).
  • What about the application code? The vulnerable code was introduced with OpenSSL 1.0.1 on March 14, 2012. Depending on whether TLS or DTLS was utilized, the vulnerable code was located within the "tls1_process_heartbeat()" function of the "t1_lib.c" file or the "dtls1_process_heartbeat()" function of the "d1_both.c" file, respectively. Let's consider the "tls1_process_heartbeat()" function of the vulnerable "t1_lib.c" file. The function receives a pointer to an SSL data structure:
    int tls1_process_heartbeat(SSL *s)

    Later, the "p" variable is initialized as a pointer to the heartbeat message; after the one-byte message type is read, the purported payload length is read from "p" into "payload" (the "n2s()" macro also advances "p" past the two-byte length field):

    n2s(p, payload);

    Note that the actual payload length is never verified. The next line initializes the "pl" variable as a pointer to the payload:

    pl = p;

    Later, "payload" bytes starting at "pl" are copied into "bp", a pointer into "buffer":

    memcpy(bp, pl, payload);

    Finally "3 + payload + padding" bytes of "buffer" are transmitted to the client:

    r = ssl3_write_bytes(s, TLS1_RT_HEARTBEAT, buffer, 3 + payload + padding);

    Because the actual length of the payload received from the client is never verified, the client can send a single byte of payload but claim a payload length of 65,535 bytes (the maximum value of the 16-bit length field), triggering the Heartbleed vulnerability and leaking roughly 64KB of adjacent server memory (a condensed sketch of this pattern, and of the fix, follows this list). RFC 6520 actually states that a heartbeat message must not exceed 2^14 bytes, but this restriction is not enforced, and because the payload length is carried in a 16-bit integer, nearly 2^16 bytes can be extracted from server memory per request. Worse yet, in order to improve performance, the OpenSSL developers utilized a custom freelist allocator instead of the standard "malloc()" and "free()" memory allocation functions. Consequently, the memory returned by the server is more likely to contain recently used, sensitive data. The patched version of the previously vulnerable "t1_lib.c" file adds proper bounds checking in order to prevent the buffer over-read and therefore eliminate the Heartbleed vulnerability. If the purported payload length (plus the one-byte type, the two-byte length field, and 16 bytes of padding) exceeds the actual length of the received record, the heartbeat message is silently discarded and no response is sent:

    if (1 + 2 + payload + 16 > s->s3->rrec.length)
        return 0; /* silently discard per RFC 6520 sec. 4 */
  • What about disclosure? What a mess! According to the timeline compiled by Fairfax Media, Google first identified the Heartbleed vulnerability on or before March 21. However, Google did not report Heartbleed to the OpenSSL development team until April 1. Heartbleed was next identified by Finland's Codenomicon on April 2. However, Codenomicon did not report Heartbleed to the OpenSSL development team until April 7, although Codenomicon did report the vulnerability to the National Cyber Security Centre Finland on April 3. Upon learning that a second researcher had identified the Heartbleed vulnerability, the OpenSSL development team released a security advisory and patched software later the same day. Between the initial Google discovery on March 21 and the release of the patched software on April 7, several companies including Google, Facebook, and Akamai were notified of the vulnerability and shrewdly disabled the TLS Heartbeat Extension. However, other companies including Cisco, Yahoo, and Twitter were not notified and therefore were unable to disable the TLS Heartbeat Extension. Who else knew about the Heartbleed vulnerability during the two years after it was introduced with OpenSSL 1.0.1 on March 14, 2012? How did two separate researchers identify Heartbleed just 12 days apart after the vulnerability had lingered within the OpenSSL code for over two years? Why the bumpy vulnerability disclosure timeline? Suffice it to say that the Heartbleed vulnerability did not set the standard for responsible vulnerability disclosure.
  • What about security awareness? Finally, a bright spot! On April 5, Codenomicon purchased the Heartbleed.com domain, where it published details regarding the vulnerability on April 7. The information was thorough and well written, and the clever Heartbleed logo resonated with the media and Internet users alike:

    [Image: the Heartbleed logo]

    The Heartbleed vulnerability was all over the news. Sites like Wikipedia and XKCD did a fantastic job explaining the vulnerability to non-technical Internet users. Mashable compiled a list of passwords that needed to be changed immediately. And a myriad of sites allowed you to test arbitrary servers for the presence of the Heartbleed vulnerability. All things considered, Heartbleed security awareness was handled in an exemplary manner.
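
To make the first point above concrete, here is a minimal sketch of the heartbeat message layout described in RFC 6520, written as C declarations. It is purely illustrative (it is not OpenSSL's internal representation, and struct padding means it should not be cast directly onto the wire), but the field names and sizes follow the RFC:

    #include <stdint.h>

    /* RFC 6520 heartbeat message, as sent on the wire:
     *   1 byte   type            (1 = heartbeat_request, 2 = heartbeat_response)
     *   2 bytes  payload_length  (big-endian, chosen by the sender)
     *   payload_length bytes of payload, echoed back verbatim by the peer
     *   at least 16 bytes of random padding
     */
    struct heartbeat_header {
        uint8_t  type;
        uint8_t  payload_length_hi;   /* sender-controlled 16-bit length, */
        uint8_t  payload_length_lo;   /* up to 65,535                     */
    };

Because the payload length is chosen by the sender and the payload is echoed back verbatim, any mismatch between the claimed length and the bytes actually received must be caught by the processing code. That is exactly the check the vulnerable OpenSSL code omitted.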
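
As a tiny, self-contained illustration of the kind of mistake the C language happily permits (this is a generic example, not OpenSSL code), consider copying more bytes than the source buffer actually holds:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char secret[32]  = "TOP-SECRET-KEY-MATERIAL";  /* data we never meant to send */
        char request[8]  = "hello";                    /* what the peer actually sent */
        char response[64];
        size_t claimed_len = 64;                       /* length the peer claimed     */

        /* C performs no bounds checking: the copy silently walks past 'request'
         * into whatever happens to sit next to it in memory. (Exactly which
         * adjacent data leaks depends on how the compiler lays out the stack.) */
        memcpy(response, request, claimed_len);        /* buffer over-read            */
        fwrite(response, 1, claimed_len, stdout);      /* leaks adjacent memory       */

        (void)secret;                                  /* silence unused warnings     */
        return 0;
    }

The same pattern, with the length taken from an attacker-supplied protocol field, is precisely what happened inside "tls1_process_heartbeat()".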
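
Finally, here is a condensed, self-contained sketch of the heartbeat-processing pattern walked through above. It is not the actual OpenSSL source: the "record" structure and the "send_bytes()" helper are simplified stand-ins, the "n2s()"/"s2n()" macros are expanded inline, and error handling is omitted. The single "if" statement marked below is the bounds check that the patched "t1_lib.c" added:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-in for the received TLS record (not OpenSSL's SSL3_RECORD). */
    struct record {
        unsigned char *data;    /* raw heartbeat message as received */
        unsigned int   length;  /* number of bytes actually received */
    };

    /* Stand-in for ssl3_write_bytes(): just dump the response. */
    void send_bytes(const unsigned char *buf, size_t len)
    {
        fwrite(buf, 1, len, stdout);
    }

    int process_heartbeat(const struct record *rrec)
    {
        unsigned char *p = rrec->data;
        unsigned char hbtype = *p++;                  /* 1-byte message type           */
        unsigned short payload = (p[0] << 8) | p[1];  /* n2s(): claimed payload length */
        unsigned char *pl = p + 2;                    /* start of the actual payload   */
        unsigned int padding = 16;                    /* minimum random padding        */

        /* The fix: if the claimed payload length (plus the 1-byte type, the
         * 2-byte length field, and 16 bytes of padding) exceeds what was
         * actually received, silently discard the message (RFC 6520 sec. 4).
         * The vulnerable code had no such check. */
        if (1 + 2 + (unsigned int)payload + padding > rrec->length)
            return 0;

        if (hbtype == 1) {                            /* heartbeat_request */
            unsigned char *buffer = malloc(1 + 2 + payload + padding);
            unsigned char *bp = buffer;

            *bp++ = 2;                                /* heartbeat_response            */
            *bp++ = (payload >> 8) & 0xff;            /* s2n(): echo the length field  */
            *bp++ = payload & 0xff;
            memcpy(bp, pl, payload);                  /* without the check above, this
                                                         reads past the real payload   */
            memset(bp + payload, 0, padding);         /* real code appends random pad  */

            send_bytes(buffer, 3 + payload + padding);
            free(buffer);
        }
        return 0;
    }

A single comparison is all that separates a correct implementation from one that leaks up to 64KB of process memory on every heartbeat request.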

So what now? Can we guarantee that Heartbleed will never happen again? No. Application code is still written by humans, and mistakes are inevitable. However, it is crucial that the technology industry learns from Heartbleed and improves the processes surrounding protocol design, software development, and vulnerability disclosure. Only then can it stop a series of cascading failures from resulting in another devastating security vulnerability.

