Chapter 3. Operations

Table of Contents

1. Daemons and Configuration
2. HAProxy and SSL Termination
3. Service Management
3.1. Using perpd
3.2. Using systemd

This chapter covers the operational aspects of running VRTQL WebSocket servers in production environments. It addresses daemonization, configuration, SSL termination with a reverse proxy, and service management using process supervisors.

1. Daemons and Configuration

A VRTQL WebSocket server is a long-running process that listens for incoming connections on a configured host and port. In production, these servers typically run as daemons — background processes detached from any controlling terminal. There are two common approaches to daemonizing a VRTQL server: using the traditional fork()/setsid() pattern within the program itself, or delegating process management to an external supervisor (covered in Service Management). The latter approach is generally preferred: the supervisor handles restarts, logging, and shutdown uniformly, while the program itself stays simple and simply runs in the foreground.

When running a server as a daemon, the server's host, port, number of worker threads, and other parameters are typically read from a configuration file or environment variables at startup. While the VRTQL library itself does not mandate a particular configuration format, a common approach is to use a simple key-value configuration file or environment variables. The following illustrates a minimal configuration pattern:

#include <stdlib.h>
#include <vws/server.h>

int main(int argc, const char* argv[])
{
    // Read configuration from environment or defaults
    cstr host    = getenv("VWS_HOST") ? getenv("VWS_HOST") : "0.0.0.0";
    int  port    = getenv("VWS_PORT") ? atoi(getenv("VWS_PORT")) : 8181;
    int  threads = getenv("VWS_THREADS") ? atoi(getenv("VWS_THREADS")) : 10;

    // Create and configure server. my_process_handler is the
    // application-defined message handler, defined elsewhere.
    vrtql_msg_svr* server = vrtql_msg_svr_new(threads, 0, 0);
    server->process       = my_process_handler;

    // Run (blocks until server is stopped)
    vrtql_msg_svr_run(server, host, port);

    // Cleanup
    vrtql_msg_svr_free(server);
    vws_cleanup();

    return 0;
}

The server binds to the specified host and port and begins accepting connections. Binding to 0.0.0.0 makes the server accessible on all network interfaces, whereas 127.0.0.1 restricts it to local connections only. In production deployments behind a reverse proxy, binding to 127.0.0.1 is recommended so that external clients can reach the server only through the proxy.

The number of worker threads should be tuned based on the workload. For I/O-bound processing (such as database queries), more threads can be beneficial. For CPU-bound processing, the number of threads should generally match the number of available CPU cores. The connection backlog and queue size parameters can typically be left at their defaults unless the server is expected to handle a very high rate of incoming connections or messages.
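The core-matching heuristic can be expressed as a small helper. This is an illustrative function, not part of the VRTQL API: it honors the VWS_THREADS environment variable (the name used in the earlier example) when set, and otherwise falls back to the number of online CPU cores:

```c
#include <stdlib.h>
#include <unistd.h>

// Heuristic for sizing the worker pool (illustrative helper, not
// part of the VRTQL API): honor VWS_THREADS if set, otherwise
// default to the number of online CPU cores for CPU-bound work.
static int worker_threads(void)
{
    const char* env = getenv("VWS_THREADS");

    if (env != NULL && atoi(env) > 0)
    {
        return atoi(env);
    }

    long cores = sysconf(_SC_NPROCESSORS_ONLN);

    return (cores > 0) ? (int)cores : 4; // Conservative fallback
}
```

For I/O-bound workloads, the result can simply be multiplied by a factor determined through load testing.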

The server also supports an inetd mode via the vws_tcp_svr_inetd_run() function. In this mode, the server does not bind to a port itself. Instead, an external program such as tcpserver (from the ucspi-tcp package) accepts the connection and passes the already-open socket to the server process as its standard input and output. This is useful for deployments that prefer to delegate connection acceptance, access control, and concurrency limits to an external tool.
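As an illustration, a tcpserver invocation might look like the following. The server path is a placeholder, and the example assumes the binary calls vws_tcp_svr_inetd_run() at startup; the -c limit is an arbitrary example value:

```shell
# Accept connections on port 8181 on all interfaces ("0"), allow at
# most 100 concurrent clients, and run the server program once per
# connection with the socket on its standard input/output.
# /usr/local/bin/vws-server is a placeholder path.
tcpserver -v -c 100 0 8181 /usr/local/bin/vws-server
```

Note that tcpserver spawns one server process per connection, so this mode trades the thread-pool model for per-connection processes managed externally.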