From Nostalgia to Code: Building a MU Online Server in Rust

18 February 2026
A nostalgia fueled deep dive into reverse engineering the MU Online binary protocol, using OpenMU as a reference to build a working game server from scratch in Rust.

Somewhere between 2005 and 2009, my younger self spent nearly every free hour in front of a computer playing MU Online, one of the best MMORPGs in history if you ask me. Cybercafes with friends, deploying private servers from guides I barely understood, staying up way too late. Even many years later, every now and then the nostalgia kicks in and I end up playing again with friends to relive the good old times.

mu-castle

Last year, out of pure curiosity, I was looking on GitHub for MU Online projects and found a very interesting repository: OpenMU. These guys created a customizable server for MU Online using C# from scratch. That is amazing, hats off to them. So I thought, why not use this repo as a reference to write a server in Rust? My intention isn't to fully rewrite their project. I just want to learn how one of the best games of my childhood works under the hood and have some fun in the process.

I started by cloning their project and analyzing the repo to understand how everything works, then trying to replicate minimal features. How much will I replicate? I don't know, we are going to discover that together I guess.

After an initial inspection, I found that the project has a docs folder with very useful information: the team describes the project, its architecture, its intentions, and other details valuable for what we are going to do.

Important

The repo has multiple references to the version/protocol of the current implementation being the ENG (English) version of Season 6 Episode 3. Being honest, I don't remember much about these versions and don't fully understand at this point how they changed. The last version I played was 0.99 (the best version if you ask me, but I'm biased haha). For now, we are going to focus on the latest one compatible with the Season 6 client.

One of the key docs is the architecture overview, which gives us a brief idea about how this implementation works. Again, we are not going to implement everything, but it's nice to have an idea about where to start.

mu-openmu-arch

The other key resource I found is the folder with the packet descriptions, which documents the messages exchanged between client and server. With this, we can get a solid idea of the protocol that MU Online implements. We are definitely going to need this folder and will spend plenty of time in it to implement some minimal features.

Note

In this context, a packet is a structured sequence of bytes that represents a single message in the MU Online protocol. Each packet has a header that identifies what kind of message it is, how long it is, and whether it's encrypted, followed by an optional payload with the actual data. Think of it as an envelope: the header is the addressing and stamp, and the payload is the letter inside.
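To make the envelope analogy concrete, here is a tiny sketch that splits a frame into header and payload. The bytes used are the ConnectionInfoRequest frame that shows up later in this article's server logs; the 4-byte split assumes a C1 header with a sub-type byte:

```rust
fn main() {
    // [frame id, total length, type, sub-type, payload...]
    let frame = [0xC1u8, 0x06, 0xF4, 0x03, 0x01, 0x00];
    // C1 header with a sub-type is 4 bytes; the rest is the "letter inside".
    let (header, payload) = frame.split_at(4);
    assert_eq!(header[1] as usize, frame.len()); // declared length covers the whole frame
    assert_eq!(payload, [0x01, 0x00]); // here: a server id, little-endian
    assert_eq!(u16::from_le_bytes([payload[0], payload[1]]), 1);
}
```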

They already documented which packets are used in the communication between server and client and vice versa, which is pretty nice to have.

One of the key discoveries here is that the protocol utilizes the first byte as a Frame Identifier. As described in the PacketTypes.md documentation, this byte defines the binary contract between the client and server, specifically governing the framing logic (how to calculate packet length) and the cryptographic state (whether XOR32 or Simple Modulus is applied).

Note

Framing is the process of determining where one message ends and the next one begins in a stream of bytes. Since TCP delivers data as a continuous byte stream with no built-in message boundaries, the protocol needs a way to delimit individual packets; that's what the frame identifier and the length fields do.

| First byte | Length of the packet                   | Encrypted Server -> Client | Encrypted Client -> Server   |
| ---------- | -------------------------------------- | -------------------------- | ---------------------------- |
| 0xC1       | Specified by the second byte           | No                         | Yes (XOR32)                  |
| 0xC2       | Specified by the second and third byte | No                         | Yes (XOR32)                  |
| 0xC3       | Specified by the second byte           | Yes (Simple modulus)       | Yes (Simple modulus + XOR32) |
| 0xC4       | Specified by the second and third byte | Yes (Simple modulus)       | Yes (Simple modulus + XOR32) |

So basically, the first byte is the frame identifier, as we mentioned. It controls two things: how you read the length, and whether the payload is encrypted. In other words, it tells the packet parser how to read the rest of the data.

C1 vs C2 is about size capacity. C1 stores the packet length in a single byte (byte index 1), so packets can be at most 255 bytes. C2 uses two bytes for length (indices 1-2), supporting up to 65,535 bytes.

C3 and C4 mirror C1/C2 in their length encoding, but add Simple Modulus encryption on top. I guess they're used for sensitive data, anything worth protecting from packet sniffing.
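The length-reading rule from the table fits in a small helper. This is only a sketch of the idea, assuming the two-byte C2/C4 length is big-endian (which matches how the response builders in this article write it):

```rust
/// Reads the declared frame length from a buffered prefix, per the table:
/// C1/C3 keep it in byte 1; C2/C4 keep it in bytes 1..3 (big-endian).
/// Returns None when not enough bytes are buffered yet, or when the
/// first byte is not a known frame identifier.
fn declared_length(buf: &[u8]) -> Option<usize> {
    match *buf.first()? {
        0xC1 | 0xC3 => Some(*buf.get(1)? as usize),
        0xC2 | 0xC4 => Some(u16::from_be_bytes([*buf.get(1)?, *buf.get(2)?]) as usize),
        _ => None,
    }
}

fn main() {
    // The 4-byte hello frame: C1 with its length in the second byte.
    assert_eq!(declared_length(&[0xC1, 0x04, 0x00, 0x01]), Some(4));
    // A C2 frame with a two-byte big-endian length of 300 (0x012C).
    assert_eq!(declared_length(&[0xC2, 0x01, 0x2C]), Some(300));
    // Not enough bytes buffered to know the length yet.
    assert_eq!(declared_length(&[0xC2, 0x01]), None);
}
```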

Encryption is Asymmetric by Direction

This is a critical detail as we can see in the table:

  • Server -> Client: C1/C2 are plaintext. C3/C4 use Simple Modulus.
  • Client -> Server: Everything gets XOR32 encrypted, and C3/C4 additionally get Simple Modulus + XOR32.

So the client always encrypts outbound traffic (at minimum XOR32), while the server only encrypts when using C3/C4. This makes sense: sniffing what the server sends is less dangerous than injecting forged client packets.

What is XOR32

XOR is a bitwise operation: compare two bits, output 1 if they differ, 0 if they match.

1 XOR 1 = 0
1 XOR 0 = 1
0 XOR 1 = 1
0 XOR 0 = 0

XOR32 means we have a 32-byte key (a fixed sequence of 32 bytes, I originally thought it was 32 bits but after reading the implementation I figured out it's the key length). To encrypt, we XOR each byte of our data with the corresponding byte of the key, cycling back to the start of the key every 32 bytes.

So it will be something like:

Data:    [A] [B] [C] [D] ...
Key:     [K0][K1][K2][K3] ...  (repeats every 32 bytes)
Result:  [A^K0] [B^K1] [C^K2] [D^K3] ...

The beauty of XOR is that applying it twice reverses it. Encrypt and decrypt are the same operation with the same key. That's why it's fast and simple. The weakness: if someone knows (or brute forces) the key, it's trivially broken. It's not real security, it's simple obfuscation.

We won't implement XOR32 in this article since the client sends the only payload packet unencrypted for this scope, but we'll tackle it in the next one when we handle login. If you're curious, you can play with a basic XOR32 implementation in the Rust playground.
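If you'd rather see it inline, a toy roundtrip fits in a few lines. The key below is made up for illustration; it is not the real client key:

```rust
/// Toy XOR32: XOR every byte of the data with a repeating 32-byte key.
/// Applying it twice with the same key restores the original bytes.
fn xor32(data: &mut [u8], key: &[u8; 32]) {
    for (i, byte) in data.iter_mut().enumerate() {
        *byte ^= key[i % key.len()];
    }
}

fn main() {
    // A placeholder key; the real client uses a fixed 32-byte table,
    // which we are NOT reproducing here.
    let key: [u8; 32] = std::array::from_fn(|i| (i as u8).wrapping_mul(7).wrapping_add(3));
    let mut payload = b"C1 hello payload".to_vec();
    xor32(&mut payload, &key); // "encrypt"
    assert_ne!(payload, b"C1 hello payload");
    xor32(&mut payload, &key); // "decrypt" is the exact same operation
    assert_eq!(payload, b"C1 hello payload");
}
```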

Simple Modulus (used by C3/C4 packets) is a custom block-based encryption algorithm that deserves its own article, so we'll skip it for now since our initial handshake scope doesn't require it. If you're curious, the OpenMU creator has two great posts about it: A closer look at the MU Online packet encryption and SimpleModulus revisited.

C1 00 01 - The Initial Hello!

Thanks to the great docs and the implementation that the OpenMU team provides, we can understand the initial flow of the communication between the client and the servers: the iconic login screen.

The Hello by server packet describes this initial interaction. The packet is sent by the server after the client connects, and the client will request the server list as soon as it receives the hello packet.

Inspecting the packet:

| Index | Length | Data Type | Value | Description                                     |
| ----- | ------ | --------- | ----- | ----------------------------------------------- |
| 0     | 1      | Byte      | 0xC1  | [Packet type]                                   |
| 1     | 1      | Byte      | 4     | Packet header - length of the packet            |
| 2     | 1      | Byte      | 0x00  | Packet header - packet type identifier          |
| 3     | 1      | Byte      | 0x01  | Packet header - sub packet type identifier      |

Dissecting it:

  • 0xC1 -> Small packet, no encryption server->client, length in byte 1
  • 0x04 -> Total packet length is 4 bytes (header only, no payload)
  • 0x00 -> Main packet type identifier
  • 0x01 -> Sub-type

After inspecting the packet definition headers and the implementation, we can infer that the first byte indicates whether the packet is small (C1/C3) or large (C2/C4). Next comes the one or two byte length field, followed by a byte that serves as a packet or group identifier (for example, the connection or handshake family). And finally, some groups include a subtype byte, such as the hello packet, while others do not. So the Header Size varies (3, 4, or 5 bytes) depending on the first byte and the type/sub-type.

The two-level identification scheme (byte 2 or 3 = group/code, byte 3 or 4 = sub type/code) gives us a clean routing architecture. 0x00 is the connection group, and within it 0x01 means "hello." Think of it like namespace::method, but as we mentioned, some packets just have a type and don't have a sub-type.

As the docs say, after the hello packet is sent to the client, the client will send the ServerListRequest (C1 F4 06) packet to the server. This packet is similar to the hello packet in definition, but the server will respond with the ServerListResponse (C2 F4 06) packet, and this one is worth examining to understand the dynamic headers we discussed earlier.

| Index | Length                    | Data Type               | Value | Description                                     |
| ----- | ------------------------- | ----------------------- | ----- | ----------------------------------------------- |
| 0     | 1                         | Byte                    | 0xC2  | [Packet type]                                   |
| 1     | 2                         | Short                   |       | Packet header - length of the packet            |
| 3     | 1                         | Byte                    | 0xF4  | Packet header - packet type identifier          |
| 4     | 1                         | Byte                    | 0x06  | Packet header - sub packet type identifier      |
| 5     | 2                         | ShortBigEndian          |       | ServerCount                                     |
| 7     | ServerLoadInfo.Length * N | [ServerLoadInfo]        |       | Servers                                         |

ServerLoadInfo Structure (Length: 4 Bytes):

| Index | Length | Data Type         | Value | Description    |
| ----- | ------ | ----------------- | ----- | -------------- |
| 0     | 2      | ShortLittleEndian |       | ServerId       |
| 2     | 1      | Byte              |       | LoadPercentage |

In this packet we can see the dynamic header in action.
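As a quick sanity check on those tables, the total frame size for N servers works out to a fixed header plus one 4-byte entry per server. A sketch of that arithmetic, with the byte counts taken from the tables above:

```rust
/// Total C2-F4-06 frame length for `n` servers:
/// C2(1) + length(2) + code(1) + sub(1) + count(2) = 7 header bytes,
/// plus one 4-byte ServerLoadInfo entry per server.
fn server_list_frame_len(n: usize) -> usize {
    7 + n * 4
}

fn main() {
    // Two servers: 7 + 2 * 4 = 15 bytes on the wire.
    assert_eq!(server_list_frame_len(2), 15);
    // With 63 servers the frame already exceeds 255 bytes, which is
    // exactly why this response uses a C2 frame with a two-byte length.
    assert!(server_list_frame_len(63) > 255);
}
```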

Defining the Scope

After reviewing the implementation, I decided the scope of this first effort: we are going to implement everything up to the mythical login screen. That covers the client-server handshake, the server list, and client server selection. I believe this will be enough for this first article.

To achieve this, we need to review the implementation and gather all the components and required pieces that we have to put together. As we recently reviewed the packets, which are the foundation of the MU Online protocol, here is the list of packets involved for this scope:

  • Hello (C1 00 01) - By Server
  • ServerListRequest (C1 F4 06) - By Client
  • ServerListResponse (C2 F4 06) - By Server
  • ConnectionInfoRequest (C1 F4 03) - By Client
  • ConnectionInfo (C1 F4 03) - By Server
  • GameServerEntered (C1 F1 00) - By Server

Note

The bytes in parentheses after each packet name reflect the packet identification using the packet type, packet group, and packet sub-group. We are omitting the length field and the rest of the payload bytes.

So the flow of this initial communication will be something like:

Architecture Overview

In the diagram above you can identify that we have 3 main components: the Client, the ConnectServer, and the GameServer. MU Online's architecture separates the Discovery/Entry phase from the Gameplay phase. This separation provides some nice benefits:

  • Load Balancing: A single entry point (ConnectServer) can distribute players across multiple GameServer instances (e.g., Server 1, Server 2, Sub-Server 1).
  • Scalability: We can add more Game Servers horizontally without changing the client configuration. The client only needs to know the IP of the Connect Server, which as we can see in the diagram is the one responsible for providing Game Server information.
  • Isolation: If a Game Server crashes or restarts, the Connect Server remains online, allowing players to see the server status and reconnect to other available servers.

ConnectServer - Discovery/Entry Phase

The ConnectServer is the Gateway to our infrastructure. It listens on port 44405 and is the first point of contact for any client.

Core Responsibilities:

  • Handshake: Handles the initial handshake with the client.
  • Server List Management: Maintains a registry of available Game Servers.
  • Client Discovery: Responds to client requests for the Server List.
  • Redirection: When a user selects a server, it provides the specific IP address and Port of that Game Server.

Caution

Something to notice: after the client selects a game server and connects to it, the connection to the ConnectServer is typically closed shortly after.

GameServer - Gameplay Phase

The GameServer is the Engine of the world. It handles all logic once the player has selected a server, and it starts listening on port 55901 by default.

Core Responsibilities:

  • Authentication: Validates User/Password (Login Packet handling).
  • World State: Manages maps, player positions, monsters, and items.
  • Interaction Logic: Handles movement, combat, trading, chat, and skills.
  • Persistence: Reads/Writes character data to the Database.
  • Session Management: Maintains the persistent TCP connection for the duration of play.

Let's Start Building

Well, with all of this context we have everything we need to start playing around and working on our server. We are going to build our simple server, but what about the client? For that, I found a couple of clients, also on GitHub, that mention they work with OpenMU. The same team has their own client implementation, but I found one that I could compile on my Mac, so I am going to use that one. You can find the client here.

Before we continue, something worth mentioning is that after the initial discovery and review of the flow that we are going to cover in this first effort, I found two important things related to the packets and flow involved:

  • Only one of the packets has a payload from client to server: the ConnectionInfoRequest packet (which should be encrypted by XOR32 according to the docs, since all client->server traffic uses XOR32 at minimum). But in this case, the client is sending the serverId unencrypted, so we don't need to implement XOR32 for this scope.
  • For the last packet, GameServerEntered, this is like the "hello packet" that the GameServer sends to the client as soon as it connects. If you inspect the definition, you'll find that this packet has a payload with some values like PlayerId, VersionString, and Version in binary. After some research, I found these values are hardcoded in OpenMU, so we are going to use the same values.

Now we start with our implementation. For that, we can create our new Rust project. We are going to use a Rust workspace. As this involves a fair amount of code, I won't show all of it here. Instead, I will highlight the most significant parts and explain from there. Some snippets are simplified for clarity, the full implementation is in the repo.

The Protocol Crate

One of the central pieces of the architecture will be a crate that we are going to name mu-protocol. Here we are going to define two central pieces for our client and server exchange: our Packet definition and Codec.

#[derive(Debug, PartialEq, Eq, Clone, Copy)]
pub enum RawPacketType {
    C1,
    C2,
    C3,
    C4,
}

impl RawPacketType {
    pub fn header_length(self) -> usize {
        match self {
            Self::C1 | Self::C3 => 2,
            Self::C2 | Self::C4 => 3,
        }
    }
}

As we mentioned before, the protocol uses the first byte as a frame identifier. The protocol names it the PacketType (see PacketTypes.md), and it defines the size of the packet and the encryption. So we create a simple enum to mirror this. header_length applies the logic from the definition for determining the header length: remember C1 and C3 are small (defined by one byte) and C2 and C4 are the larger ones (defined by two bytes).

Then, we have our RawPacket definition. It's a pretty simple struct with some helper methods to make our journey easier. The idea is straightforward: it's a way to create a valid Packet based on the minimal protocol header definition, representing the packets exchanged between client and servers.

pub struct RawPacket {
    bytes: Bytes,
    packet_type: RawPacketType,
}

impl RawPacket {
    pub fn try_new(bytes: Bytes) -> Result<Self, ProtocolError> {
        // The caller hands us a complete frame, so a length prefix we
        // cannot even read is reported as a mismatch.
        let declared_len = declared_length_from_prefix(&bytes)?
            .ok_or(ProtocolError::LengthMismatch {
                declared: 0,
                actual: bytes.len(),
            })?;
        let actual_len = bytes.len();
        if declared_len != actual_len {
            return Err(ProtocolError::LengthMismatch {
                declared: declared_len,
                actual: actual_len,
            });
        }

        let packet_type = bytes[0].try_into()?;
        Ok(Self { bytes, packet_type })
    }

    pub fn header_codes(&self) -> (Option<u8>, Option<u8>) {
        let header_len = self.packet_type.header_length();
        (
            self.bytes.get(header_len).copied(),
            self.bytes.get(header_len + 1).copied(),
        )
    }
}

The Codec

And the codec. But what is a codec in this context? A codec (coder/decoder) is a component that knows how to transform raw bytes from a stream into structured messages (decoding) and back again (encoding). Since TCP gives us a continuous stream of bytes with no message boundaries, we need something to frame individual packets, that's what our codec does. We are using tokio_util::codec, which provides the Decoder and Encoder traits that integrate directly with Tokio's async I/O. This way, we get a clean Stream of RawPacket items coming in and a Sink for sending them out, without having to manually manage byte buffers.

use bytes::BytesMut;
use tokio_util::codec::{Decoder, Encoder};

pub struct PacketCodec;

impl Decoder for PacketCodec {
    type Item = RawPacket;
    type Error = ProtocolError;

    fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
        let Some(declared_len) = declared_length_from_prefix(src)? else {
            return Ok(None);
        };

        if src.len() < declared_len {
	        // Tell tokio-util how many bytes we still need so it can size
            // the next read syscall appropriately.
            src.reserve(declared_len - src.len());
            return Ok(None);
        }

        // split_to advances the buffer past this frame; freeze yields an
        // immutable Bytes (zero-copy, Arc-backed) for RawPacket to own.
        let packet = src.split_to(declared_len).freeze();
        Ok(Some(RawPacket::try_new(packet)?))
    }
}

impl Encoder<RawPacket> for PacketCodec {
    type Error = ProtocolError;

    fn encode(&mut self, item: RawPacket, dst: &mut BytesMut) -> Result<(), Self::Error> {
        dst.extend_from_slice(item.as_slice());
        Ok(())
    }
}

The TCP Servers

As we saw before, we are going to need a ConnectServer and a GameServer. These for now will be very simple TCP listeners, and of course we are going to use the well-known Tokio for everything related to async I/O.

For this, we have a simple function on each server component that creates a TCP listener and enters a loop for accepting connections:

async fn run_server(config: Config) -> Result<()> {
    let listener = TcpListener::bind(&config.bind_addr)
        .await
        .with_context(|| format!("failed to bind to {}", &config.bind_addr))?;

    info!("Connect Server is listening");
    let config = Arc::new(config);
    // Pin the future so we can poll it repeatedly across select! iterations.
    let mut shutdown = Box::pin(tokio::signal::ctrl_c());
    loop {
        tokio::select! {
            _ = &mut shutdown => {
                info!("shutting down server");
                break;
            }
            accepted = listener.accept() => {
                let (socket, peer_addr) = accepted.context("failed to accept client connection")?;
                info!(peer = %peer_addr, "Client connected");

                let client_config = Arc::clone(&config);
                tokio::spawn(async move {
                    if let Err(e) = handle_client(socket, peer_addr, client_config).await {
                        warn!(peer = %peer_addr, error = %e, "client handling failed");
                    }
                });
            }
        }
    }

    Ok(())
}

Both ConnectServer and GameServer have an almost identical run_server function that receives some sort of Config relevant to each server. This function launches the TCP listener and keeps things simple for now.

Where ConnectServer and GameServer differ is in the handle_client function, which is responsible for handling each client connection.

Handling the Client Connection

If you remember, the ConnectServer is the entry point and where almost all the communication exchange for this effort happens. So focusing on the ConnectServer, let's look at its handle_client:


async fn handle_client(
    stream: TcpStream,
    peer_addr: SocketAddr,
    config: Arc<ConnectConfig>,
) -> Result<()> {
    let mut framed = Framed::new(stream, PacketCodec);
    let hello_packet = RawPacket::try_from_vec(vec![C1, 0x04, 0x00, 0x01])?;
    framed.send(hello_packet).await?;

    while let Some(result) = framed.next().await {
        let packet = result?;
        let action = handlers::handle_packet(&config, &packet, peer_addr);
        match action {
            handlers::PacketHandling::Reply(response) => {
                framed.send(response).await?;
            }
            handlers::PacketHandling::Ignore => {}
            handlers::PacketHandling::Disconnect => break,
        }
    }

    Ok(())
}

The key here is that as soon as the Client connects to the ConnectServer, we send the Hello packet as we explained in the communication flow. That will cause the client to request the server list, and then we handle the subsequent requests. That part is resolved by handlers::handle_packet.

If we run the ConnectServer and point a client at it, we can see the entire handshake flow playing out in the logs:

RUST_LOG=debug cargo run -p connect-server
2026-02-17T19:25:10.267046Z  INFO connect_server: starting connect server bind_addr=0.0.0.0:44405 server_count=2
2026-02-17T19:25:10.267325Z  INFO connect_server: Connect Server is listening
2026-02-17T19:25:43.970652Z  INFO connect_server: Client connected peer=127.0.0.1:53368
2026-02-17T19:25:43.971023Z DEBUG connect_server: sent hello packet peer_addr=127.0.0.1:53368
2026-02-17T19:25:43.993037Z DEBUG connect_server: received packet packet=RawPacket(len=4, bytes=[C1, 04, F4, 06])
2026-02-17T19:25:43.993071Z  INFO connect_server::handlers: Server list requested peer=127.0.0.1:53368 server_count=2
2026-02-17T19:25:46.543965Z DEBUG connect_server: received packet packet=RawPacket(len=6, bytes=[C1, 06, F4, 03, 01, 00])
2026-02-17T19:25:46.544041Z  INFO connect_server::handlers: Connection info requested peer=127.0.0.1:53368 server_id=1
2026-02-17T19:25:46.544065Z DEBUG connect_server::packet: Building connection info ip=127.0.0.1 port=55901
2026-02-17T19:25:46.598691Z  INFO connect_server: connect-server client disconnected peer=127.0.0.1:53368

You can see the exact flow we described earlier: the server sends the hello, the client requests the server list (C1 04 F4 06), picks a server (C1 06 F4 03 01 00 with server_id=1), gets the connection info back, and disconnects. The raw bytes in the logs match our packet definitions perfectly.

Parsing Packets

For parsing, we are going to use the powerful enum and pattern matching system of Rust to build a packet parser for the packets expected by the ConnectServer:

pub enum ConnectServerPacket {
    ServerListRequest,
    ConnectionInfoRequest {
        server_id: u16,
    },
    Unknown {
        code: Option<u8>,
        sub_code: Option<u8>,
    },
}

impl ConnectServerPacket {
    pub fn parse(packet: &RawPacket) -> Result<Self, ProtocolError> {
        let (code, sub_code) = packet.header_codes();
        match (code, sub_code) {
            (Some(0xF4), Some(0x06)) => Ok(Self::ServerListRequest),
            (Some(0xF4), Some(0x03)) => {
                let data = packet.as_slice();
                // Guard against short frames before reading the payload.
                match data.get(4..6) {
                    Some(&[lo, hi]) => Ok(Self::ConnectionInfoRequest {
                        server_id: u16::from_le_bytes([lo, hi]),
                    }),
                    _ => Ok(Self::Unknown { code, sub_code }),
                }
            }
            _ => Ok(Self::Unknown { code, sub_code }),
        }
    }
}

We determine the request type by inspecting the code and sub_code, the two-level identification system that we talked about before. Once we identify the request, we can easily build the response. With this, handle_packet is pretty straightforward:

pub fn handle_packet(
    config: &ConnectConfig,
    packet: &RawPacket,
    peer: SocketAddr,
) -> PacketHandling {
    let parsed = match ConnectServerPacket::parse(packet) {
        Ok(p) => p,
        Err(_) => return PacketHandling::Disconnect,
    };

    match parsed {
        ConnectServerPacket::ServerListRequest => {
            match build_server_list_response(&config.servers) {
                Ok(packet) => PacketHandling::Reply(packet),
                Err(e) => {
                    warn!(peer = %peer, error = %e, "failed to build server list response");
                    PacketHandling::Disconnect
                }
            }
        }
        ConnectServerPacket::ConnectionInfoRequest { server_id } => {
            match config.servers.iter().find(|s| s.id == server_id) {
                Some(server) => match build_connection_info(server) {
                    Ok(packet) => PacketHandling::Reply(packet),
                    Err(e) => {
                        warn!(peer = %peer, error = %e, "failed to build connection info");
                        PacketHandling::Disconnect
                    }
                },
                None => {
                    warn!(peer = %peer, server_id, "connection info requested for unknown server");
                    PacketHandling::Ignore
                }
            }
        }
        ConnectServerPacket::Unknown { code, sub_code } => {
            warn!(?code, ?sub_code, "unknown packet");
            PacketHandling::Ignore
        }
    }
}

Building Responses

And finally, after identifying the request we build the corresponding response, following each packet definition. Again, thanks to the OpenMU team for the great documentation.

// Wire layout constants for C2-F4-06 ServerListResponse.
// See: docs/OpenMU/Packets/C2-F4-06-ServerListResponse_by-server.md
const HEADER_SIZE: usize = 3 + 1 + 1 + 2; // C2 framing(3) + code(1) + sub_code(1) + server_count(2)
const ENTRY_SIZE: usize = 4; // server_id(2) + load_percentage(1) + padding(1)

/// Builds a C1-F4-03 ConnectionInfo response packet.
/// See: docs/OpenMU/Packets/C1-F4-03-ConnectionInfo_by-server.md
///
/// Wire layout (22 bytes total):
///   [0]    C1 header
///   [1]    length (22)
///   [2]    code   (0xF4)
///   [3]    sub    (0x03)
///   [4..20] IP address as a null-terminated ASCII string in a 16-byte field
///   [20..22] port as little-endian u16
pub fn build_connection_info(ip: Ipv4Addr, port: u16) -> Result<RawPacket> {
    let ip_str = ip.to_string();
    let mut buf = BytesMut::with_capacity(22);
    buf.put_u8(C1);
    buf.put_u8(22);
    buf.put_u8(0xF4);
    buf.put_u8(0x03);

    // IPv4 max string is "255.255.255.255" (15 chars), which fits in the
    // 16-byte protocol field. Remaining bytes are null-padded.
    let ip_bytes = ip_str.as_bytes();
    buf.put_slice(ip_bytes);
    for _ in ip_bytes.len()..16 {
        buf.put_u8(0x00);
    }

    buf.put_u16_le(port);
    RawPacket::try_new(buf.freeze())
}

/// Builds a C2-F4-06 ServerListResponse packet.
/// See: docs/OpenMU/Packets/C2-F4-06-ServerListResponse_by-server.md
///
/// Wire layout:
///   [0]      C2 header
///   [1..3]   total length (big-endian u16) — uses C2 because the list can exceed 255 bytes
///   [3]      code   (0xF4)
///   [4]      sub    (0x06)
///   [5..7]   server count (big-endian u16)
///   [7..]    N × 4-byte ServerLoadInfo entries
pub fn build_server_list_response(servers: &[ConfiguredGameServer]) -> anyhow::Result<RawPacket> {
    let payload_len = HEADER_SIZE + servers.len() * ENTRY_SIZE;
    let packet_len: u16 = payload_len
        .try_into()
        .map_err(|_| anyhow::anyhow!("packet length overflow"))?;

    let mut buf = BytesMut::with_capacity(payload_len);
    buf.put_u8(C2);
    buf.put_u16(packet_len);
    buf.put_u8(0xF4);
    buf.put_u8(0x06);

    let server_count: u16 = servers
        .len()
        .try_into()
        .map_err(|_| anyhow::anyhow!("too many servers for u16 count field"))?;

    buf.put_u16(server_count);

    for server in servers {
        buf.put_u16_le(server.id);
        buf.put_u8(server.load_percentage);
        buf.put_u8(0); // padding byte defined by protocol
    }

    RawPacket::try_new(buf.freeze())
}

An important note here is that I am purposely omitting a lot of improvements, features, and security measures that you may notice are missing from our TCP listener. The idea is to revisit and implement them in future articles, so the code is kept pretty simple for now. As the series advances, the implementation will evolve.

Caution

Also, since this is a work in progress, a lot of the implementation will and must change as other features and flows are added and we play around with more complex game mechanisms.

The Result

With this initial implementation, we achieve something great and very fun for me: the mythical login flow!

Our ConnectServer responding with the server list, just like the old days:

mu-select-server

After selecting a server, the client connects to our GameServer and we get the login screen:

mu-login

Seeing that login screen pop up from a server I wrote myself was honestly one of the most satisfying moments I've had in a while. What started as pure nostalgia turned into a deep dive into binary protocols, packet framing, and async networking in Rust.

A few things that surprised me during this process:

  • How well-documented the protocol actually is, thanks to the OpenMU team. Without their work, this would have been a much harder journey.
  • How clean the protocol design is for a game from 2001. The two-level packet identification, the separation between ConnectServer and GameServer: it's a simple architecture that just works.

There's a lot left to do: authentication, character selection, and eventually the actual game world. In the next article, we'll tackle the login flow and start dealing with encrypted client packets.

All the code can be found in the repository. Feel free to explore, experiment, and leave comments! Happy coding!
