Building a TFTP Server with Senders/Receivers
November 18, 2025
I’ve recently been exploring sender/receiver patterns in C++ for implementing non-blocking servers without libraries like ASIO. I initially began by implementing a simple asynchronous sockets library built on stdexec, but felt that a full-fledged network application would be more enlightening. After considering a few different possibilities, I decided to implement a server for the Trivial File Transfer Protocol (TFTP).
Currently, it goes by the working title of tftpd and can be downloaded from GitHub.
What is TFTP? Why choose it?
TFTP is a file transfer protocol first specified by RFC 783 in 1981 and later updated by RFC 1350 in 1992. The protocol was designed specifically with simplicity in mind, and so is often chosen to implement file transfers in resource-constrained environments. Notably, it’s used to retrieve the bootloader and kernel files when PXE booting. It’s also used in embedded systems where you may not have a full TCP/IP stack available. TFTP’s simple architecture makes it ideal for implementing a network server from scratch using senders.
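To give a sense of just how small the protocol is: RFC 1350 defines only five message types, exchanged in lock step with at most 512 bytes of data per DATA packet. Below is a rough sketch of the wire format in C++; tftpd’s own definitions live in its messages namespace and may differ in detail.
#include <cstdint>

// The five TFTP opcodes defined by RFC 1350.
enum class opcode : std::uint16_t {
    RRQ = 1,   // read request: NUL-terminated filename and transfer mode
    WRQ = 2,   // write request: same layout as RRQ
    DATA = 3,  // block number followed by up to 512 bytes of payload
    ACK = 4,   // block number of the DATA packet being acknowledged
    ERROR = 5, // error code and NUL-terminated error message
};

// DATA and ACK share this fixed header; both fields travel in network
// byte order. A DATA payload shorter than 512 bytes ends the transfer.
struct header {
    std::uint16_t opc;
    std::uint16_t block_num;
};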
Application Architecture
The core server architecture is provided by a class that specializes a generic UDP server using the CRTP pattern.
// New messages received by the udp_base are propagated up to the server via the
// `service` method. The server specialization demultiplexes TFTP sessions and
// dispatches messages to the correct application handler.
class server : public udp_base<server>
{
public:
    // Dispatches TFTP messages to the correct handler.
    auto tftp_route(...) -> void;
    // Services incoming TFTP messages.
    auto service(...) -> void;

private:
    // Stores TFTP sessions.
    sessions_t sessions_;
    // RRQ, ACK, WRQ, and DATA message handlers.
    auto rrq(...) -> void;
    auto ack(...) -> void;
    auto wrq(...) -> void;
    auto data(...) -> void;
};
The UDP service base defines a UDP socket server that provides
useful methods like submit_recv for re-initiating the asynchronous read loop.
Members of the net::service namespace are provided by my network utilities
library, cppnet.
template <typename UDPStreamHandler>
using udp_base = net::service::async_udp_service<UDPStreamHandler, BUFSIZE>;
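The CRTP upcall itself has roughly the following shape. This is a sketch, not cppnet’s actual implementation: because the base class knows the derived type at compile time, the upcall is a direct, inlinable member call rather than a virtual dispatch.
#include <cstddef>

// Sketch of the CRTP upcall; the real hook in cppnet passes the
// received message through to the derived handler's service method.
template <typename UDPStreamHandler, std::size_t BufSize>
class async_udp_service
{
protected:
    // Invoked when a datagram arrives on the socket.
    auto on_message(/* buffer, peer address, ... */) -> void
    {
        static_cast<UDPStreamHandler *>(this)->service(/* ... */);
    }
};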
Then, in main, we simply start the TFTP server:
auto main(int argc, char *argv[]) -> int
{
    // Namespace provided by async-berkeley.
    using namespace io::socket;
    auto server = tftp_server();
    // async-berkeley sockets library abstraction
    // for IPv6 socket addresses.
    auto address = socket_address<sockaddr_in6>{};
    address->sin6_family = AF_INET6;
    address->sin6_port = htons(conf->port);
    server.start(address);
    // Server transitions from PENDING to STARTED.
    server.state.wait(server.PENDING);
    // Server transitions from STARTED to STOPPED.
    server.state.wait(server.STARTED);
    return 0;
}
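I won’t reproduce tftpd’s state machinery here, but the wait calls above suggest the C++20 std::atomic wait/notify API. A minimal sketch of that pattern, assuming a simple enum-valued atomic (the names below are illustrative):
#include <atomic>
#include <thread>

enum state_t { PENDING, STARTED, STOPPED };

auto main() -> int
{
    std::atomic<state_t> state{PENDING};
    // The I/O thread publishes the transition and wakes any waiters.
    auto io = std::thread([&state] {
        state.store(STARTED);
        state.notify_all();
    });
    // C++20 atomic wait: blocks while `state` still equals PENDING.
    state.wait(PENDING);
    io.join();
    return 0;
}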
Fitting in Senders and Receivers
TFTP server responses are handled by async-berkeley, my sender/receiver socket I/O library, and are implemented using a pattern similar to how one would use ASIO callbacks:
auto server::send_data(async_context &ctx, const socket_dialog &socket,
                       iterator_t siter) -> void
{
    using namespace stdexec;
    // Alias as in send_ack below, so the snippet stands alone.
    using socket_message = io::socket::socket_message<sockaddr_in6>;
    auto &[key, session] = *siter;
    auto &buffer = session.state.buffer;
    auto span = std::span(buffer.data(),
                          std::min(buffer.size(), messages::DATAMSG_MAXLEN));
    sender auto sendmsg =
        io::sendmsg(socket, socket_message{.address = {key}, .buffers = span},
                    0) |
        then([](auto &&) {}) | upon_error([](auto &&) {});
    ctx.scope.spawn(std::move(sendmsg));
}
Note that because the TFTP server must be capable of retrying the send of DATA messages, I have associated the message write buffer with the session state. We do the same thing when sending ACK messages:
auto server::send_ack(async_context &ctx, const socket_dialog &socket,
                      iterator_t siter) -> void
{
    using enum messages::opcode_t;
    using namespace stdexec;
    using socket_message = io::socket::socket_message<sockaddr_in6>;
    auto &[key, session] = *siter;
    auto &buffer = session.state.buffer;
    auto &block_num = session.state.block_num;
    buffer.resize(sizeof(messages::ack));
    auto *ack = reinterpret_cast<messages::ack *>(buffer.data());
    ack->opc = htons(ACK);
    ack->block_num = htons(block_num);
    sender auto sendmsg =
        io::sendmsg(socket, socket_message{.address = {key}, .buffers = buffer},
                    0) |
        then([](auto &&) {}) | upon_error([](auto &&) {});
    ctx.scope.spawn(std::move(sendmsg));
}
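For reference, the per-session state these handlers manipulate might look something like the sketch below (tftpd’s actual definitions may differ). The important design point is that the write buffer lives with the session, so an unacknowledged message can be resent from it directly; sessions_t then maps the peer’s socket address (the key unpacked above) to its session, which is how the server demultiplexes concurrent transfers.
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of per-session state; not tftpd's actual definition.
struct session_state {
    std::vector<std::byte> buffer; // last message written; kept for retries
    std::uint16_t block_num{0};    // block number of the current transfer
};

struct session {
    session_state state;
    // plus whatever else a transfer needs: file handle, retry count, ...
};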
Unlike an implementation using ASIO callbacks, both of these functions take a mutable reference to an asynchronous context.
auto server::send_data(async_context &ctx, const socket_dialog &socket,
                       iterator_t siter) -> void;
auto server::send_ack(async_context &ctx, const socket_dialog &socket,
                      iterator_t siter) -> void;
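Concretely, the shape of the context can be sketched as follows; tftpd’s actual definition likely carries more members, but the scope is the part the handlers above rely on.
#include <exec/async_scope.hpp>

// Sketch only: the handlers spawn their senders on ctx.scope.
struct async_context {
    exec::async_scope scope;
};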
Among other properties, this context holds an asynchronous scope as specified in P3149. Asynchronous scopes are how the senders and receivers framework manages the lifetimes of asynchronous operations, and they have the peculiar but useful property of raising an assertion error if the lifetime of the scope is shorter than the lifetimes of the operations nested inside it. So, for instance:
#include <stdexec/execution.hpp>
#include <exec/async_scope.hpp>

using namespace stdexec;

auto main() -> int
{
    sender auto async_op = async_operation();
    {
        auto scope = exec::async_scope();
        scope.spawn(std::move(async_op));
    }
    return 0;
}
This snippet will raise an assertion error if async_op doesn’t complete immediately.
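The remedy is to join the scope before it is destroyed, for example by blocking on the sender returned by exec::async_scope’s on_empty, which completes once every operation spawned on the scope has finished. Reusing the illustrative async_operation from above:
#include <stdexec/execution.hpp>
#include <exec/async_scope.hpp>

using namespace stdexec;

auto main() -> int
{
    sender auto async_op = async_operation();
    {
        auto scope = exec::async_scope();
        scope.spawn(std::move(async_op));
        // Block until every spawned operation has completed, so the
        // scope's destructor runs on an empty scope.
        sync_wait(scope.on_empty());
    }
    return 0;
}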
Impressions
Overall, senders and receivers seem relatively pleasant to work with, and I’ve been impressed with the initial performance benchmarks compared to ASIO. It seems the templated operation approach is optimized much more effectively by the compiler than the approach taken by ASIO. The library on its own is still very bare-bones, though, and it won’t be very productive to work with until a library ecosystem springs up around it.