Fabric Node Setup

Intro

Eluvio Fabric Nodes are systems that run two pieces of software: elvmasterd and qfab. The elvmasterd process is the blockchain component of the Fabric Node: transactions and information held on the blockchain are processed through a local copy of the chain. Through it, the qfab process can retrieve information about Content Parts stored on the Content Fabric. Both software components are expected to run on the same hardware. While not strictly necessary, it is common to proxy access to both services through a web proxy; the most common reverse proxies used in Fabric Node deployments are nginx and haproxy.

This document covers the hardware, software, and network considerations that go into deploying an Eluvio Fabric Node. Specific installation instructions differ between deployments, so to keep this document readable and digestible, installation and setup are described in general terms as they pertain to preparing a system for deployment.

Types of Node

To understand the hardware and network requirements needed to deploy a Fabric Node, it is important to review the common configurations, or types, of node that connect to the Content Fabric. Broadly, there are two common types of node; while the basic software configuration is the same for both, their hardware requirements and network access needs distinguish them from one another. The two types are:

  • Full or Serving fabric nodes need public IP access from the entire Internet on HTTP and HTTPS. These nodes service requests from all users of the Content Fabric. These nodes are called Full nodes because they can be used for the full range of use cases – from end-user traffic to content ingest (live streaming and VOD).
  • Publish Only fabric nodes are used for dedicated ingest of Live and VOD content. These nodes do not serve end-user traffic and can be configured to only make outbound connections to other fabric nodes. If network policies allow other fabric nodes to connect to a Publish Only node, more flexible deployment options become available, but this is not required.

Server Requirements

The type of node, as noted above, is the main driver of hardware requirements. Shown below are two representative configurations. The Baseline hardware configuration shows a typical “full” fabric node focused on content serving with minimal ingest. The Publish Only hardware configuration represents a common Publish Only node that can handle 16 4K streams.

Baseline

  • CPU: 32 physical CPU Cores (Intel/AMD)
  • Memory: 256GB RAM
  • Storage:
    • NVMe: 8TB
    • Disk: 100TB (usable after RAID)
  • Network:
    • To Internet: 25Gbps
  • Optional: video encode/decode offload hardware (see Video Encode/Decode/Transcode Offload below)

Publish Only

  • CPU: 64 physical CPU Cores (Intel/AMD)
  • Memory: 512GB RAM
  • Storage:
    • NVMe: 16TB
    • Disk: 32TB (usable after RAID)
  • Network:
    • To Internet: 10Gbps
    • Stream Source: 10Gbps
  • Optional: video encode/decode offload hardware (see Video Encode/Decode/Transcode Offload below)

With these resources, the fabric node can accommodate ingest, video processing, and serving high volumes of web traffic.

Next, we go in depth on the specific components and how they scale with node type and load.

CPU

For a Publish Only fabric node, the CPU requirements will scale with the number of streams being ingested. An additional factor to consider is the resolution of the feeds:

  • 4-6 cores for each 4K video feed being ingested
  • 2-3 cores for each HD video feed being ingested

A 64-core system should be able to accommodate 16x 4K feeds.
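The per-stream guidance above can be sketched as a small sizing calculation. The per-feed core counts come from this document; the function name and the idea of reporting a low/high range are illustrative only:

```python
# Per-feed core counts from the guidance above (4-6 per 4K, 2-3 per HD).
CORES_PER_FEED_LOW = {"4k": 4, "hd": 2}   # low end of each range
CORES_PER_FEED_HIGH = {"4k": 6, "hd": 3}  # high end of each range

def cores_needed(feeds: dict, table: dict) -> int:
    """Total physical cores needed for a mix of ingest feeds."""
    return sum(table[quality] * count for quality, count in feeds.items())

# 16x 4K feeds: 64 cores at the low end, 96 at the high end,
# which is why a 64-core system "should" handle 16x 4K.
low = cores_needed({"4k": 16}, CORES_PER_FEED_LOW)    # 64
high = cores_needed({"4k": 16}, CORES_PER_FEED_HIGH)  # 96
```

In practice, feeds with complex encodes or high frame rates land toward the high end of the range, so treat the low-end total as a floor rather than a target.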

AMD or Intel

Fabric nodes are built for the x86_64 instruction set. AMD and Intel chips are supported and nodes can use either. To meet the density needs of many workloads, AMD is preferred.

Memory

256GB of RAM should be considered the minimum needed to support a general serving node with minimal video processing. Calculating memory usage from workload is not as straightforward as the CPU core calculations.

512GB of memory is the current recommendation across all workload combinations, especially nodes dedicated to video ingest.

As of early 2026, memory prices and shortages may necessitate fabric nodes without the recommended 512GB. 384GB is sufficient in a memory-constrained environment, with the caveat that some workloads will not perform optimally.

OS Storage

A minimum of 50GB of reliable storage should be allocated for the Operating System. This storage should be on a separate set of devices (e.g. M.2 NVMe) dedicated to the OS only, and ideally mirrored for redundancy.

NVMe Storage

Fabric nodes have many high-I/O tasks, from internal databases to Content Parts generated during ingest, that require NVMe storage. For a node focused on serving, 8TB is more than sufficient. For Publish Only fabric nodes, the single largest factor in determining the amount of NVMe space needed is the quantity and source quality of live streams being ingested. Many factors go into the calculation and are determined on a case-by-case basis, but a general recommendation of 16TB usable NVMe works in most workflows and use cases. This storage can be as simple as 2x 8TB devices, or 4x 4TB, etc.

Non-NVMe Storage

Non-NVMe storage, such as SAS/SATA disks, is used for long-term Part storage. The speed of this storage is not as critical as the NVMe storage, so the type, composition, and usable capacity can be tailored to the node type and to the market prices and availability at procurement time. This storage should be configured in a reliable RAID5 or ZFS RAIDZ pool.

Serving fabric nodes should have at least 100TB of usable storage.

Publish Only fabric nodes do not have the same long-term storage needs, so they require far less non-NVMe storage. A good rule is to provision 2x the NVMe storage: a system with 16TB of NVMe should have 32TB of usable non-NVMe storage.
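The sizing rules above reduce to simple arithmetic. The 100TB serving floor and the 2x-NVMe rule of thumb come from this document; the function itself is an illustrative sketch:

```python
# Non-NVMe capacity targets by node type (values in TB).
def non_nvme_target_tb(node_type: str, nvme_tb: float) -> float:
    if node_type == "full":
        return 100.0            # serving nodes: at least 100TB usable
    if node_type == "publish_only":
        return 2 * nvme_tb      # rule of thumb: 2x the NVMe capacity
    raise ValueError(f"unknown node type: {node_type}")

# A Publish Only node with 16TB NVMe needs 32TB of usable disk.
target = non_nvme_target_tb("publish_only", 16)  # 32
```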

NIC / Network Interfaces

As with almost all aspects of a node’s hardware specifications, the quantity and speed of network interfaces will depend on the node type.

Full/Serving fabric nodes are expected to handle high loads of traffic as they serve content to Content Fabric users. A single 25Gbps interface is the bare minimum for this type of node. Dual 25Gbps NICs in a bonded configuration are recommended to ensure redundancy.

Publish Only fabric nodes may be constrained by the internal network configuration the node is placed in. Ideally, a publishing node will have two (2) NICs: one NIC that will connect or route to the internet for publishing to the fabric, and one NIC for receiving ingest sources (typically live streams). The speed of these NICs is determined by the amount of ingest being done. The minimum NIC speeds should be 10Gbps.

For Publish Only nodes, the NIC speed minimum is simplified to 10Gbps because NICs come in discrete speed tiers (e.g. 1Gbps, 2.5Gbps, 10Gbps, 25Gbps). To better plan what is appropriate, note the following:

  • The “internal” NIC needs to accommodate the source streams. The number of streams, the quality of the streams, the frame rates, and the method of delivery all determine the amount of data that the “internal” NIC needs to receive. If a total of 1.5Gbps across all streams is to be received, a 1Gbps NIC cannot accommodate it.
  • The “publishing” NIC will publish to multiple public nodes at once. This serves many purposes, including redundancy. Publish Only nodes are limited to 2 nodes per publish to minimize bandwidth consumption at the publishing/source location. This translates to 2x the ingest volume in publishing bandwidth. For the prior example of 1.5Gbps ingested across all streams, the publishing NIC will need to accommodate 3Gbps of egress data transfer; a 2.5Gbps NIC would not be able to accommodate that.
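The NIC planning arithmetic above can be sketched as follows. The tier list reflects common NIC speeds, and the fanout of 2 matches the 2-node publish limit described in this document; the function name is illustrative:

```python
# Common discrete NIC speed tiers, in Gbps.
NIC_TIERS_GBPS = [1, 2.5, 10, 25, 40, 100]
PUBLISH_FANOUT = 2  # Publish Only nodes publish each Part to 2 peers

def smallest_nic(required_gbps: float) -> float:
    """Smallest standard NIC tier that covers the required bandwidth."""
    return next(tier for tier in NIC_TIERS_GBPS if tier >= required_gbps)

ingest_gbps = 1.5                                          # total across all streams
internal_nic = smallest_nic(ingest_gbps)                   # 2.5
publish_nic = smallest_nic(ingest_gbps * PUBLISH_FANOUT)   # 10, since 3Gbps exceeds 2.5
```

Note that the document's 10Gbps minimum still applies; the calculation simply shows why the lower tiers fall short.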

Video Encode/Decode/Transcode Offload

Eluvio Fabric Nodes regularly transcode video, either during VOD and Live ingest, or when a different quality ladder is requested upon serving content. Much of this is done using the CPU. To free up the CPU, or to handle certain workloads and capabilities, fabric nodes can offload encode/decode/transcode operations to specialized hardware. The benefits of these offload systems can be significant and are often use-case dependent (e.g. enabling HDR10).

Due to the number of different hardware makes, models, and form factors on the market, it is best to discuss this with Eluvio Pre-Sales Engineering. The following technologies are currently supported.

NVIDIA

Eluvio Fabric Nodes can leverage the CUDA cores and NVENC/NVDEC encode/decode engines on NVIDIA GPUs. Each NVIDIA GPU supports a different number of NVENC and NVDEC offload engines, and the capabilities of NVENC and NVDEC also differ between hardware generations. For example, AV1 is not supported on 6th-gen (Turing-based) GPUs, but is fully supported with 8th-gen (Ada Lovelace) GPUs.

NVIDIA GPUs are all packaged as PCIe add-in cards (AICs) with different form factors (single and double slot), PCIe generation requirements, and power loads. Adding a GPU or two will increase the TDP of the system significantly. For example, a single RTX 4000 Ada adds 130W to the node’s TDP and uses a single full-height PCIe Gen4 x16 slot in the chassis.

NVIDIA GPUs represent the most common deployment option among Content Fabric Nodes. If opting for an NVIDIA-based offload solution, the recommended minimum is a single RTX 4000 Ada (AD104).

NetInt

NetInt provides an ASIC-based “VPU” (Video Processing Unit) that specializes in video operations. The “Quadra” line of Codensity G5 solutions comes in multiple form factors, from M.2 and U.2 to PCIe AIC. For example, a U.2 form factor T1U can be installed in an NVMe drive bay instead of using a PCIe slot in the chassis. NetInt solutions also have much lower power consumption requirements than most GPU-based options.

NetInt-based offload is newer than the NVIDIA options, and a best general-purpose recommendation is not yet well established. For systems that are internally space constrained but have NVMe U.2 drive bays available, the recommendation is a T1U. If a PCIe Gen4 x16 AIC slot is available, the T2A is another recommendation.

Software Architecture

Operating Environment

Eluvio currently targets an Ubuntu/Debian Linux build and server environment. The Core Applications are validated on Ubuntu 22.04 and 24.04. Support for other distributions, such as those based on Arch Linux, is planned.

Core Applications

Fabric nodes are fundamentally two daemons plus a reverse proxy that unifies and manages the externally facing elements of the node. The core components are:

qfab

qfab is the main process that provides the public APIs, content serving, Part distribution, and content transformation (e.g. transcoding) operations that underpin the Content Fabric.

elvmasterd

Content Fabric state is recorded in a decentralized and distributed blockchain ledger. The process that implements this EVM blockchain is elvmasterd. All Fabric Nodes use elvmasterd to maintain a copy of the chain. Validators on the chain are elvmasterd instances that mint blocks and process transactions.

nginx/haproxy

An HTTP/HTTPS reverse proxy is used to manage SSL/TLS certificates and route requests to qfab and elvmasterd. The majority of Fabric Nodes use nginx.
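As an illustrative sketch only, a minimal nginx configuration following this pattern might look like the block below. The hostname, certificate paths, and upstream port are assumptions for illustration, not Eluvio defaults; actual values are set at deployment time.

```nginx
# Illustrative sketch; hostname, cert paths, and upstream port are assumed.
server {
    listen 80;
    server_name fabric.example.com;
    return 301 https://$host$request_uri;   # HTTP exists only to redirect
}

server {
    listen 443 ssl;
    server_name fabric.example.com;

    ssl_certificate     /etc/ssl/fabric/fullchain.pem;
    ssl_certificate_key /etc/ssl/fabric/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8008;   # qfab API (assumed local port)
    }
}
```

A deployment would typically add a second location or server block routing blockchain RPC traffic to elvmasterd in the same way.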

Hardening and Security Considerations

Full server hardening is not in the scope of this document since each organization has its own processes, tools, and baseline configurations. The following is meant to illustrate the best practices used in a typical Fabric Node deployment, providing information that can be layered on top of participant-specific hardening.

Fabric Nodes should run a server-side firewall, set to allow incoming traffic on the common web ports: 80/TCP (HTTP) and 443/TCP (HTTPS). The HTTP port is supported for compatibility purposes, but the reverse proxy is always configured to issue an HTTP 301 redirect to the HTTPS port.

Full fabric nodes also need to allow Ethereum wire protocol access. The port is configurable at install time, but it is typically 40304 on both TCP and UDP.

Remote access is up to the organization setting up the Fabric Node. Remote access over SSH may be needed from time to time. At a minimum, it is recommended that password authentication for SSH be disabled.

Network Configuration

As evidenced by the different NIC requirements, Full nodes and Publish Only nodes are deployed differently on the network. Full fabric nodes are public systems that participate in all aspects of Content Fabric operation: they accept API requests and serve content over HTTP/HTTPS, other blockchain participants peer with them when syncing and participating on the chain, and they make Parts available to peer nodes that may be missing them. Publish Only fabric nodes need to connect to peers on the blockchain as well as to peer nodes over HTTPS to publish Parts generated locally. Publish Only nodes do not need to allow wide-open inbound connections to function, but some connectivity to the HTTPS API of a Publish Only node is needed for basic management. This can be limited via constrained firewall rules or an administrative tunnel.

Eluvio Managed Nodes

Eluvio can manage a node to ensure timely updates, optimized tuning, advanced troubleshooting, and application of deployment best practices. As noted in the Hardening section, this management is typically done over SSH. This SSH connection can also function as the HTTPS management tunnel used to access Fabric APIs. Eluvio can support SSH access on any port, direct or with ProxyJump, and with any customer requirements needed to establish authentication (e.g. two-factor or key-based auth).

A common pattern used by customers is to allow SSH and tunnels to be set up to and from Eluvio Fabric Node IPs already permitted by the allow lists set up for Content Fabric protocol traffic.

Connectivity

The following diagrams show the network connectivity for each node type.

Full / Serving

Earlier versions of this diagram showed a “Core” network; it has been removed for clarity. The “Core” network is an optimization Eluvio employs that customers are unlikely to implement.

Publish Only

The “Ingest Network” denotes a customer network where VOD or Live content is made available. This may be a unicast or multicast network.

Full fabric nodes exist on a network with connectivity to the Internet. They may be directly connected or in a DMZ with rules allowing inbound access to the Content Fabric ports.

Publish Only fabric nodes need to egress to the Internet to connect to other Content Fabric nodes for parts distribution. Access to the Internet also streamlines management and updates for critical software components. Inbound access is not needed from the Internet unless a Management SSH connection or tunnel is allowed.

Additionally, Publish Only fabric nodes access content from systems that are not Internet routable. These systems are usually reached on a separate network, denoted here as the “Ingest Network”, of which the Publish Only fabric node is made a member. Content can be delivered unicast or multicast via protocols like HTTPS (VOD), RTMP, SRT, MPEGTS, etc.

Firewall Rules

Given the previous notes on hardening and the ports used, the following firewall rules apply to all fabric nodes of the listed type:

Full / Serving

Ingress

  • 80/TCP from the entire Internet
  • 443/TCP from the entire Internet
  • 40304/TCP from the entire Internet
  • 40304/UDP from the entire Internet
  • Management Access (usually SSH) as determined by customer

Egress

  • 80/TCP to the entire Internet
  • 443/TCP to the entire Internet
  • 40304/TCP to the entire Internet
  • 40304/UDP to the entire Internet
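Assuming a Linux node using nftables, the Full/Serving ingress rules above could be sketched as follows. This is a minimal illustration, not a complete ruleset: the management source range is a documentation-only placeholder, and the rules should be adapted to local policy.

```
# Illustrative nftables sketch for Full/Serving ingress rules.
table inet fabric_node {
    chain input {
        type filter hook input priority filter; policy drop;

        ct state established,related accept
        iif "lo" accept

        tcp dport { 80, 443 } accept   # public HTTP/HTTPS
        tcp dport 40304 accept         # Ethereum wire protocol (typical port)
        udp dport 40304 accept

        # Management SSH; 203.0.113.0/24 is a placeholder range
        tcp dport 22 ip saddr 203.0.113.0/24 accept
    }
}
```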

Publish Only

Ingress

  • 80/TCP via IP allow list or admin tunnel [1]
  • 443/TCP via IP allow list or admin tunnel [1]
  • Management Access (usually SSH) as determined by customer

Egress

  • 80/TCP to the entire Internet [2]
  • 443/TCP to the entire Internet [2]
  • 40304/TCP to IP allow list
  • 40304/UDP to IP allow list
  • Management tunnel if no direct SSH

  1. HTTP/S ingress via a tunnel would require protocol-specific ingress/egress ports not listed here.

  2. HTTP/S egress can be constrained, but the allow list is large and situational.

Publish Only Example

This diagram shows a Publish Only node configuration with 2+ publishing nodes. In this example, Eluvio is provided SSH access to a single Content Fabric node in the customer network boundary (e.g. DMZ). From there, Eluvio will set up management tunnels for API access and ProxyJump SSH to the other Eluvio nodes in the customer network.

When Content Fabric Nodes publish parts to peers, all active peers in a given storage partition will get the parts from the part creator. To minimize network traffic on the customer network, each Publish Only node is configured to publish parts to only 2 nodes. As noted in the NIC / Network Interfaces section above, this limits the egress traffic to 2x the ingest volume.

Eluvio requests all nodes are

Additional Network Considerations

NTP

All Eluvio Fabric Nodes, regardless of type, need access to reliable Network Time Protocol (NTP) servers. If access to public NTP servers is blocked, a preferred alternative is needed.
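As a minimal sketch, assuming chrony as the node's NTP client, a configuration like the following would satisfy this requirement; swap the public pool for internal time servers if outbound NTP is blocked:

```
# /etc/chrony/chrony.conf (minimal sketch; pool name is the public default)
pool pool.ntp.org iburst   # replace with internal NTP servers if required
makestep 1.0 3             # step the clock on large offsets at startup
rtcsync                    # keep the hardware clock in sync
```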

DNS

The Domain Name System (DNS) is used by nodes to locate other nodes. Slow or filtered DNS resolution can impact node performance.