Analyzing .NET Core memory on Linux with LLDB

Most of the last week I’ve been experimenting with our .NET Windows project running on Linux in Kubernetes. It’s not as crazy as it sounds: we had already migrated from .NET Framework to .NET Core, I fixed whatever was incompatible with Linux, tweaked things here and there so it could run in k8s, and now it really does. In theory.

In practice, there are still occasional StackOverflow exceptions (zero segfaults, however), and most of the troubleshooting experience I gained on Windows is useless here on Linux. For instance, we quickly noticed that the memory consumption of our executable was higher than we’d expect. Physical memory varied between 300 MiB and 2 GiB, and virtual memory was tens and tens of gigabytes. I know we could use much more than that in production, but here, in a container on Linux, is that OK? How do I even analyze that?

On Windows I’d take a process dump, feed it to Visual Studio or WinDbg, and try to google what to do next. Apparently, googling works for Linux as well, so after a few hours I managed to learn several things about debugging on Linux, and I’d like to share some of them today.

The playground (debugging starts later)

Obviously, I can’t use our product as an example, but in reality any .NET Core “Hello world” project would do. I’ll create an Ubuntu 16.04 VM with the help of Vagrant and VirtualBox, put the project in it, and we can experiment in there.

That’s actually quite a lot: ~2.6 GiB of virtual memory and ~238 MiB of physical. Even though virtual memory doesn’t mean we’re ever going to use all of it, a process dump (‘core dump’ in Linux terminology) will take at least the same amount of space.
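As a sketch of how to read those numbers for yourself (the post’s original listing isn’t preserved here), `ps` can report both sizes for any PID; below, the current shell’s PID `$$` stands in for the dotnet process:

```shell
# Print virtual (VSZ) and resident (RSS) set sizes, in KiB.
# $$ (the current shell) is a stand-in; substitute your dotnet process PID.
ps -o pid,vsz,rss,comm -p $$
```

VSZ is the address space the process has reserved; RSS is what actually sits in physical memory.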

The simplest way to create a core dump is the gcore utility. It comes along with the gdb debugger, and that’s the only reason I had to install gdb.

Using gcore, however, in most cases requires elevated permissions. On local Ubuntu I was able to get away with sudo gcore, but inside of a Kubernetes pod even that wasn’t enough, and I had to go to the underlying node and add the following option to sysctl.conf:

But here in the Ubuntu VM sudo gcore works just fine, and I can create a core dump just by providing the target process id (PID):

Create core dump

```shell
sudo gcore 4058
# ...
# Saved corefile core.4058
```

As I mentioned before, the dump file size is the same as the amount of virtual memory:

Dump size

```shell
ls -lh
# total 2.6G
# -rw-r--r-- 1 root root 2.6G Dec 12 04:25 core.4058
```

This actually was a problem for us in Kubernetes, with the .NET garbage collector switched to server mode and the server itself having 208 GiB of RAM. With such specs and GC settings, virtual memory and the core dump file were just above 49 GiB. Disabling the gcServer option in .NET, however, reduced the default address space and therefore the core file size down to a more manageable 5 GiB.
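For reference, server GC can be toggled in the project file; a minimal sketch, assuming an SDK-style .csproj (the `ServerGarbageCollection` MSBuild property, supported since .NET Core 2.0):

```xml
<!-- Disable server GC: smaller reserved address space, smaller core dumps,
     at the cost of GC throughput on multi-core machines -->
<PropertyGroup>
  <ServerGarbageCollection>false</ServerGarbageCollection>
</PropertyGroup>
```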

But I digressed. We have a dump file to analyze.

Debugger and .NET support

We can use either the gdb or the lldb debugger to work with core files, but only lldb has .NET debugging support via the SOS plugin called libsosplugin.so. Moreover, the plugin itself is built against a specific version of lldb, so if you don’t want to recompile CoreCLR and libsosplugin.so locally (not that hard), the safest lldb version to use at the moment is 3.6.

As a side note, I was wondering what SOS actually means and found this wonderful SO answer. Apparently, SOS has nothing to do with ABBA or the save-our-souls Morse distress signal. It means “Son of Strike”. Who is Strike, you might ask? Strike was the name of the debugger for .NET 1.0, codename Lightning. Strike of Lightning, you know. And SOS is its proud descendant. Whenever I doubt whether I should still love my profession, I find a story like this and give it another year. A few years ago, the story behind the userAgent browser property did the same trick.

OK, we have a debugger, an executable and a core dump. Where do we get the SOS plugin? Fortunately, it comes along with the .NET Core SDK, which I already installed:

Find libsosplugin.so

```shell
find /usr -name libsosplugin.so
# /usr/share/dotnet/shared/Microsoft.NETCore.App/2.0.0/libsosplugin.so
```

Finally, we can start lldb, point it to the dotnet executable that started our application and its core dump, and then load the plugin:
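The exact invocation isn’t preserved above; a sketch of what it typically looks like, assuming lldb 3.6 and the plugin path found a moment ago:

```shell
# Open the dotnet executable together with its core dump...
lldb-3.6 dotnet -c core.4058
# ...then, at the (lldb) prompt, load the SOS plugin from the SDK directory:
# (lldb) plugin load /usr/share/dotnet/shared/Microsoft.NETCore.App/2.0.0/libsosplugin.so
```

After that, SOS commands become available via the `sos` prefix.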

Memory summary

We have a working debugger and a DumpHeap command, so let’s take a look at managed memory statistics:

```shell
(lldb) sos DumpHeap -stat
# Statistics:
#               MT    Count    TotalSize Class Name
# 00007f6d32992aa8        1           24 UNKNOWN
# 00007f6d329911d8        1           24 UNKNOWN
# ....
# 00007f6d323defd8        4        17528 System.Object[]
# 00007f6d323e08a8       25        40644 System.Int32[]
# 00007f6d323e0168       29        82664 System.String[]
# 00007f6d323e3440      335       952398 System.Char[]
# 000000000223b860    10092      6083604 Free
# 00007f6d3242b460   150846    204845172 System.String
# Total 161886 objects
(lldb)
```

Not surprisingly, System.String objects use most of the memory. By the way, if you sum up the total sizes of all managed objects (like I did), the resulting number comes very close to the physical memory reported by ps u: 202 MiB of managed objects vs 238 MiB of physical memory. The delta, I suppose, goes to the code itself and the executing environment.
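The summation itself is easy to script; a sketch, assuming the DumpHeap -stat output was saved to a file (heap.txt is a hypothetical name):

```shell
# Sum the TotalSize column (3rd) of `sos DumpHeap -stat` output.
# Header and footer lines are skipped automatically, because their
# 2nd/3rd fields are not purely numeric.
awk '$2 ~ /^[0-9]+$/ && $3 ~ /^[0-9]+$/ { sum += $3 }
     END { printf "%.1f MiB\n", sum / 1024 / 1024 }' heap.txt
```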

Memory details

But we can go further. We know that System.String uses most of the memory. Can we take a closer look at those strings? Sure thing:

Drill down into the memory

```shell
(lldb) sos DumpHeap -type System.String
#          Address               MT     Size
# 00007f6d0bfff3f0 00007f6d3242b460       26
# 00007f6d0bfff4c0 00007f6d3242b460       42
# ...
# 00007f6d0c099ab0 00007f6d3242b460    20056
# 00007f6d0c09e920 00007f6d3242b460    20056
# ...
# 00007f6d323e0168       29        82664 System.String[]
# 00007f6d3242b460   150846    204845172 System.String
# Total 150895 objects
```

-type works as a mask, so the output also contains System.String[] and a few Dictionaries. Also, strings vary in size, whereas I’m actually interested in the large ones, at least 1000 bytes:

Use filter

```shell
(lldb) sos DumpHeap -type System.String -min 1000
# ...
# 00007f6d0e8810f0 00007f6d3242b460    20056
# 00007f6d0e885f60 00007f6d3242b460    20056
# 00007f6d0e88add0 00007f6d3242b460    20056
# ...
```

Having the list of suspicious objects, we can drill down even more and examine the objects one by one.

DumpObj

DumpObj can look into the details of a managed object at a given memory address. We have a whole first column of addresses, and I just picked one of them:

It’s actually pretty cool. We can immediately see the type name (System.String) and what fields it is made of. I also noticed that for small strings we’d see the value right away (line 7), but not for the large ones.

I was puzzled at first about how to get the value for those. There’s an m_firstChar field, but is it like a linked list or what? Where’s the pointer to the next item? Only after checking out the source code for System.String did I realize that m_firstChar can be used as a pointer itself, and the whole string is stored somewhere as a continuous block of memory. This means I can use lldb’s native memory read command to get the whole string back!

For that I just need to take the object’s address (00007f6d0e8810f0), add m_firstChar‘s field offset (c, third column in the fields table) and then do something like this:
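The address arithmetic itself can be double-checked in any shell, since `$(( ))` understands hex literals:

```shell
# Object address + offset of m_firstChar = address of the character data
printf '%x\n' $((0x00007f6d0e8810f0 + 0xc))
# → 7f6d0e8810fc
```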

Does it look familiar? “R.a.n.d.o.m. .s.t.r.i.n.g.”. A C# char defaults to UTF-16 encoding and therefore takes two bytes, even though one of them is always zero for ASCII characters.
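That two-bytes-per-character layout is easy to reproduce outside the debugger; a sketch using iconv (assuming it’s installed, which it is on most distributions):

```shell
# Encode an ASCII string as UTF-16LE and dump the bytes:
# every character becomes <byte> 00 — the same pattern as in the core dump.
printf 'Random string' | iconv -f ASCII -t UTF-16LE | od -An -tx1
# prints: 52 00 61 00 6e 00 64 00 6f 00 6d 00 20 00 73 00 ...
```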

We can also experiment with memory read formatting, but even with the default settings we get the idea of what’s inside.

Formatting options

```shell
(lldb) memory read 00007f6d0e8810f0+0xc -fs -c13
# 0x7f6d0e8810fc: "R"
# 0x7f6d0e8810fe: "a"
# 0x7f6d0e881100: "n"
# 0x7f6d0e881102: "d"
# 0x7f6d0e881104: "o"
# 0x7f6d0e881106: "m"
# 0x7f6d0e881108: " "
# 0x7f6d0e88110a: "s"
# 0x7f6d0e88110c: "t"
# 0x7f6d0e88110e: "r"
# 0x7f6d0e881110: "i"
# 0x7f6d0e881112: "n"
# 0x7f6d0e881114: "g"
```

Conclusion

I’m just scratching the surface, but I love what I find. I’ve been a .NET programmer for quite a while, but this is the first time in years that I’ve started to think about what’s happening that deep under the hood. What’s inside a System.String? What fields does it have? How are those fields aligned in memory? The first field has an offset of 8; what’s in those eight bytes? A type id? .NET strings are interned; does that mean that m_firstChar of identical strings will point to the same block of memory? Can I check that?

I also wonder what debugging .NET code with lldb looks like. Many years ago I used to debug a C++ pet project with gdb, so I kind of know the feeling. But .NET applications are compiled just-in-time, so it’s interesting to see how the SOS plugin deals with that.

If you haven’t changed the default core dump settings, then most likely you won’t find one. Basically, you need to configure two things in order to enable automatic crash dumps:
1) set core_pattern, the file name template for future core dumps;
2) set the core dump file size limit, which at least on Ubuntu is 0 by default, meaning crash dumps are disabled.
Here’s a good place to start: https://sigquit.wordpress.com/2009/03/13/the-core-pattern/

Hey,
Sorry I couldn’t answer faster.
I won’t pretend that I know all of this, but if it still doesn’t work for you even after changing core_pattern and the core dump size limit, I’d bet on the core size limit still being zero.
Here’s what you can try. Create a small .NET app with the only instruction in its Main() method being throw new Exception("OK, now you _must_ produce a dump");, build it, and run it with the following command: echo "/tmp/core-%e-%s-%u-%g-%p-%t" | sudo tee /proc/sys/kernel/core_pattern && ulimit -c unlimited && dotnet run
It configures automatic core dumps right before running the app, so it’s hard to go wrong here. I just ran it on my clean, never previously configured Ubuntu, and it produced a core dump in the /tmp folder: /tmp/core-dotnet-6-1000-1000-1504-1532836438. I looked inside, and it was indeed the hardcoded unhandled exception:

(lldb) pe
Exception type: System.Exception
Message: OK, now you _must_ produce a core dump
InnerException:
StackTrace (generated):
SP IP Function
00007FFDB3264850 00007F7520D016FE ThrowsException.dll!Unknown+0x6e

StackTraceString:
HResult: 80131500

Once you’re able to repeat that, you can try doing the same for your main app.
Alternatively, you might try running another version of dotnet. They do make mistakes, and older or newer versions might not have the problem. Also make sure that you are using the latest SDK and runtime. Today the latest runtime is 2.1.2 and the SDK is 2.1.302. Depending on how you’re producing the build, the latest SDK might not trigger using the latest available runtime, so you might need to either update the .csproj file or do dotnet publish instead of build. The latter triggers using the latest available runtime.
Good luck.