Frequently Asked Questions



What is UCX?

UCX is a framework (a collection of libraries and interfaces) that provides an efficient and relatively easy way to construct widely used HPC protocols: MPI tag matching, RMA operations, rendezvous protocols, stream, fragmentation, remote atomic operations, etc.

What is UCP, UCT, UCS?

  • UCT is a transport layer that abstracts the differences across various hardware architectures and provides a low-level API that enables the implementation of communication protocols. The primary goal of the layer is to provide direct and efficient access to hardware network resources with minimal software overhead. For this purpose, UCT relies on low-level drivers such as uGNI, Verbs, shared memory, ROCM, CUDA. In addition, the layer provides constructs for communication context management (thread-based and application level), and allocation and management of device-specific memories including those found in accelerators. In terms of communication APIs, UCT defines interfaces for immediate (short), buffered copy-and-send (bcopy), and zero-copy (zcopy) communication operations. The short operations are optimized for small messages that can be posted and completed in place. The bcopy operations are optimized for medium size messages that are typically sent through a so-called bouncing-buffer. Finally, the zcopy operations expose zero-copy memory-to-memory communication semantics.

  • UCP implements higher-level protocols that are typically used by message passing (MPI) and PGAS programming models by using lower-level capabilities exposed through the UCT layer. UCP is responsible for the following functionality: initialization of the library, selection of transports for communication, message fragmentation, and multi-rail communication. Currently, the API has the following classes of interfaces: Initialization, Remote Memory Access (RMA) communication, Atomic Memory Operations (AMO), Active Message, Tag-Matching, and Collectives.

  • UCS is a service layer that provides the necessary functionality for implementing portable and efficient utilities.

How can I contribute?

  1. Fork

  2. Fix bug or implement a new feature

  3. Open Pull Request

How do I get in touch with UCX developers?

Please join our mailing list or submit issues on GitHub.

UCX mission

What are the key features of UCX?

  • Open source framework supported by vendors
    The UCX framework is maintained and supported by hardware vendors, in addition to the open source community. Every pull request is tested on multiple hardware platforms supported by the vendor community.

  • Performance, performance, performance! The framework architecture, data structures, and components are designed to provide optimized access to the network hardware.

  • High-level API for a broad range of HPC programming models.
    UCX provides a high-level, performance-portable network API. The API targets a variety of programming models, ranging from high-performance MPI implementations to Apache Spark. The UCP API abstracts the differences and fills in the gaps across the interconnects implemented in the UCT layer. As a result, implementations of programming models and libraries (MPI, OpenSHMEM, Apache Spark, RAPIDS, etc.) are simplified, while efficient support for multiple interconnects (uGNI, Verbs, TCP, shared memory, ROCM, CUDA, etc.) is provided.

  • Support for interaction between multiple transports (or providers) to deliver messages.
    For example, UCX has the logic (in UCP) to make GPUDirect, IB, and shared memory work together efficiently to deliver the data where it is needed, without the user having to deal with this.

  • Cross-transport multi-rail capabilities. The UCX protocol layer can utilize multiple transports, even on different types of hardware, to deliver messages faster, without the need for any special tuning.

  • Utilizing hardware offloads for optimized performance, such as RDMA, hardware tag-matching, hardware atomic operations, etc.

What protocols are supported by UCX?

UCP implements RMA put/get, send/receive with tag matching, active messages, and atomic operations. In the near future we plan to add support for commonly used collective operations.

Is UCX replacement for GASNET?

No. GASNET exposes a high-level API for PGAS programming that provides symmetric memory management capabilities and built-in runtime environments. These capabilities are out of scope for the UCX project. Instead, GASNET can leverage the UCX framework for a fast and efficient implementation on the network technologies supported by UCX.

What is the relation between UCX and network drivers?

The UCX framework does not provide drivers; instead, it relies on the drivers provided by vendors. Currently we use: OFA Verbs, Cray's uGNI, NVIDIA CUDA.

What is the relation between UCX and OFA Verbs or Libfabric?

UCX is a middleware communication framework that relies on device drivers, e.g. RDMA, CUDA, ROCM. RDMA and OS-bypass network devices typically implement device drivers using the RDMA-core Linux subsystem that is supported by UCX. Support for other network abstractions can be added based on requests and contributions from the community.

Is UCX a user-level driver?

UCX is not a user-level driver. Typically, drivers aim to expose fine-grained access to network architecture-specific features. UCX abstracts the differences across various drivers and fills in the gaps using software protocols for architectures that don't provide hardware-level support for all operations.


What stuff should I have on my machine to use UCX?

UCX detects the existing libraries on the build machine and enables/disables support for various features accordingly. If some of the modules UCX was built with are not found at runtime, they will be silently disabled.

  • Basic shared memory and TCP support - always enabled.

  • Optimized shared memory - requires knem or xpmem drivers. On modern kernels, CMA (cross-memory-attach) will also be used if available.

  • RDMA support - requires rdma-core or libibverbs library. UCX >= 1.12.0 requires rdma-core >= 28.0 or MLNX_OFED >= 5.0.

  • NVIDIA GPU support - requires CUDA >= 6.0. UCX >= 1.8 requires CUDA with nv_peer_mem support.

  • AMD GPU support - requires ROCm version >= 4.0.

Does UCX depend on an external runtime environment?

UCX does not depend on an external runtime environment.

ucx_perftest (a UCX based application/benchmark) can be linked with an external runtime environment that can be used for remote ucx_perftest launch, but this is an optional configuration which is only used for environments that do not provide direct access to compute nodes. By default this option is disabled.

Configuration and tuning

How can I specify special configuration and tunings for UCX?

UCX takes parameters from specific environment variables, which start with the prefix UCX_.

IMPORTANT NOTE: Setting UCX environment variables to non-default values may lead to undefined behavior. The environment variables are mostly intended for advanced users, or for specific tunings or workarounds recommended by the UCX community.
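For instance, a variable can be applied to a single run or exported for a whole session (./my_app below is a hypothetical application binary, shown only for illustration):

```shell
# Apply a UCX tuning to a single run only
UCX_LOG_LEVEL=info ./my_app

# Or export the variable so it affects every subsequent run in this shell
export UCX_TLS=rc,cuda
./my_app
```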

Where can I see all UCX environment variables?

  • Running ucx_info -c prints all environment variables and their default values.

  • Running ucx_info -cf prints the documentation for all environment variables.

UCX configuration file

UCX looks for a configuration file in {prefix}/etc/ucx/ucx.conf, where {prefix} is the installation prefix configured during compilation. It allows customization of the various parameters. An environment variable has precedence over the value defined in ucx.conf. The file can be created using ucx_info -Cf.
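A possible workflow for creating the file, sketched under the assumption that UCX was installed with the default /usr/local prefix (substitute your actual prefix):

```shell
# Generate a ucx.conf populated with documented default values
ucx_info -Cf > /usr/local/etc/ucx/ucx.conf

# After editing the file, verify the effective configuration
ucx_info -c | grep UCX_TLS
```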

Build user application with UCX

In order to build the application with UCX development libraries, UCX supports a metainformation subsystem based on the pkg-config tool. For example, this is how pkg-config can be incorporated in a Makefile-based build:

program: program.c
        $(CC) program.c $(shell pkg-config --cflags --libs ucx)

When linking with static UCX libraries, the user must list all required transport modules explicitly. For example, in order to support only the cma and knem transports, the user has to use:

program: program.c
        $(CC) -static program.c $(shell pkg-config --cflags --libs --static ucx-cma ucx-knem ucx)

Currently, the following transport modules can be used with pkg-config:

  Package name   Provided transport service
  ucx-cma        Shared memory using Linux Cross-Memory Attach
  ucx-knem       Shared memory using KNEM (High-Performance Intra-Node MPI Communication)
  ucx-xpmem      Shared memory using XPMEM
  ucx-ib         InfiniBand-based network transport
  ucx-rdmacm     Connection manager based on RDMACM

TCP, basic shared memory, and self transports are built into UCT and don’t need additional compilation actions.
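
As a quick sanity check, pkg-config can list which UCX modules are installed on the build machine (the output depends on which transports were built and installed):

```shell
# Show the installed UCX pkg-config packages and the UCX version
pkg-config --list-all | grep ucx
pkg-config --modversion ucx

# Print the flags that would be added to a static link line
pkg-config --cflags --libs --static ucx-cma ucx-knem ucx
```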


The ucx-ib package requires static libraries for libnl and numactl, as dependencies of rdma-core. Most Linux distributions do not provide these static libraries by default, so they need to be built and installed manually from source:

  • libnl (version 3.2.25)

  • numactl (version 2.0.14)

Network capabilities

Selecting networks and transports

Which network devices does UCX use?

By default, UCX tries to use all available devices on the machine, and selects the best ones based on performance characteristics (bandwidth, latency, NUMA locality, etc.). Setting UCX_NET_DEVICES=<dev1>,<dev2>,... would restrict UCX to using only the specified devices.
For example:

  • UCX_NET_DEVICES=eth2 - Use the Ethernet device eth2 for TCP sockets transport.

  • UCX_NET_DEVICES=mlx5_2:1 - Use the RDMA device mlx5_2, port 1

Running ucx_info -d would show all available devices on the system that UCX can utilize.

Which transports does UCX use?

By default, UCX tries to use all available transports, and selects the best ones according to their performance capabilities and scale (passed as an estimated number of endpoints to the ucp_init() API).
For example:

  • On machines with Ethernet devices only, shared memory will be used for intra-node communication and TCP sockets for inter-node communication.

  • On machines with RDMA devices, RC transport will be used for small scale, and DC transport (available with Connect-IB devices and above) will be used for large scale. If DC is not available, UD will be used for large scale.

  • If GPUs are present on the machine, GPU transports will be enabled for detecting memory pointer type and copying to/from GPU memory.

It’s possible to restrict the transports in use by setting UCX_TLS=<tl1>,<tl2>,.... A ^ at the beginning turns the list into a deny list. The list of all transports supported by UCX on the current machine can be generated by the ucx_info -d command.

IMPORTANT NOTE In some cases restricting the transports can lead to unexpected and undefined behavior:

  • Using rc_verbs or rc_mlx5 also requires ud_verbs or ud_mlx5 transport for bootstrap.

  • Applications using GPU memory must also specify GPU transports for detecting and handling non-host memory.

In addition to the built-in transports it’s possible to use aliases which specify multiple transports.

List of main transports and aliases:

  all      Use all the available transports.
  sm, shm  All shared memory transports.
  ugni     ugni_rdma and ugni_udt.
  rc       RC (reliable connection); "accelerated" transports are used if possible.
  ud       UD (unreliable datagram); "accelerated" is used if possible.
  dc       DC, the Mellanox scalable offloaded dynamic connection transport.
  rc_x     Same as "rc", but using accelerated transports only.
  rc_v     Same as "rc", but using Verbs-based transports only.
  ud_x     Same as "ud", but using accelerated transports only.
  ud_v     Same as "ud", but using Verbs-based transports only.
  cuda     CUDA (NVIDIA GPU) memory support: cuda_copy, cuda_ipc, gdr_copy.
  rocm     ROCm (AMD GPU) memory support: rocm_copy, rocm_ipc, rocm_gdr.
  tcp      TCP over SOCK_STREAM sockets.
  self     Loopback transport to communicate within the same process.

For example:

  • UCX_TLS=rc will select RC, UD for bootstrap, and prefer accelerated transports

  • UCX_TLS=rc,cuda will select RC along with Cuda memory transports

  • UCX_TLS=^rc will select all available transports, except RC

IMPORTANT NOTE UCX_TLS=^ud will select all available transports, except UD. However, UD will still be available for bootstrap. Only UCX_TLS=^ud,ud:aux will disable UD completely.


Does UCX support multi-rail?

Yes.

What is the default behavior in a multi-rail environment?

By default, UCX would pick the two best network devices, and split large messages between the rails. For example, for a 100MB message, the first 50MB would be sent on the first device and the second 50MB on the second device. If the network device speeds are not the same, the split will be proportional to their speed ratio.

The devices to use are selected according to best network speed, PCI bandwidth, and NUMA locality.

Is it possible to use more than 2 rails?

Yes, by setting UCX_MAX_RNDV_RAILS=<num-rails>. Currently up to 4 are supported.

Is it possible that each process would just use the closest device?

Yes. With UCX_MAX_RNDV_RAILS=1, each process would use a single network device according to NUMA locality.

Can I disable multi-rail?

Yes, by setting UCX_NET_DEVICES=<dev> to the single device that should be used.
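
The multi-rail knobs above can be combined as follows (./my_app and the device name mlx5_0:1 are placeholders; use ucx_info -d to find your devices):

```shell
# Default: up to 2 rails, large messages split across the 2 best devices
./my_app

# Use up to 4 rails for rendezvous transfers
UCX_MAX_RNDV_RAILS=4 ./my_app

# Single rail, chosen per process by NUMA locality
UCX_MAX_RNDV_RAILS=1 ./my_app

# Disable multi-rail entirely by pinning a single device
UCX_NET_DEVICES=mlx5_0:1 ./my_app
```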

Adaptive routing

Does UCX support adaptive routing fabrics?

Yes.

What do I need to do to run UCX with adaptive routing?

When adaptive routing is configured on an InfiniBand fabric, it is enabled per SL (IB Service Layer).
Setting UCX_IB_SL=<sl-num> will make UCX run on the given service level and utilize adaptive routing.


How to specify service level with UCX?

Setting UCX_IB_SL=<sl-num> will make UCX run on the given service level.

How to specify DSCP priority?

Setting UCX_IB_TRAFFIC_CLASS=<num>.

How to specify which address to use?

Setting UCX_IB_GID_INDEX=<num> would make UCX use the specified GID index on the RoCE port. The system command show_gids would print all available addresses and their indexes.
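
For example, one might inspect the available GIDs and then pick an index explicitly (the index 3 below is only an illustration; check the show_gids output on your system):

```shell
# List GID indexes, addresses, and RoCE versions for each port
show_gids

# Run with an explicitly chosen GID index (e.g. index 3)
UCX_IB_GID_INDEX=3 ./my_app
```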

Working with GPU

GPU support

How does UCX support GPUs?

UCX protocol operations can work with GPU memory pointers the same way as with host memory pointers. For example, the ‘buffer’ argument passed to ucp_tag_send_nb() can be either host or GPU memory.

Which GPUs are supported?

Currently UCX supports NVIDIA GPUs through the CUDA library, and AMD GPUs through the ROCm library.

Which UCX APIs support GPU memory?

Currently only UCX tagged APIs, stream APIs, and active message APIs fully support GPU memory. Remote memory access APIs, including atomic operations, have incomplete support for GPU memory; full support is planned for future releases.

  API                                                    GPU memory support level
  Tag (ucp_tag_send_XX/ucp_tag_recv_XX)                  Full support
  Stream (ucp_stream_send/ucp_stream_recv_XX)            Full support
  Active messages (ucp_am_send_XX/ucp_am_recv_data_XX)   Full support
  Remote memory access (ucp_put_XX/ucp_get_XX)           Partial support
  Atomic operations (ucp_atomic_XX)                      Partial support

How to run UCX with GPU support?

In order to run UCX with GPU support, you will need an application that allocates GPU memory (for example, the MPI OSU benchmarks with CUDA support) and UCX compiled with GPU support. Then you can run the application as usual (for example, with MPI), and whenever GPU memory is passed to UCX, it will either use GPU-direct for zero-copy operations or copy the data to/from host memory.

NOTE When specifying UCX_TLS explicitly, you must also specify cuda/rocm for GPU memory support; otherwise the GPU memory will not be recognized. For example: UCX_TLS=rc,cuda or UCX_TLS=dc,rocm

I’m running UCX with GPU memory and getting a segfault, why?

Most likely UCX does not detect that the pointer refers to GPU memory and tries to access it from the CPU. This can happen if UCX is not compiled with GPU support, or if it fails to load the CUDA or ROCm modules due to missing library paths or a version mismatch. Please run ucx_info -d | grep cuda or ucx_info -d | grep rocm to check for UCX GPU support.

In some cases, the internal memory type cache can misdetect GPU memory as host memory, also leading to invalid memory access. This cache can be disabled by setting UCX_MEMTYPE_CACHE=n.
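
A possible debugging sequence for such segfaults, combining the checks above (./my_app stands for the failing application):

```shell
# 1. Verify that UCX was built with GPU support and the modules load
ucx_info -d | grep -e cuda -e rocm

# 2. Re-run with the memory type cache disabled to rule out misdetection
UCX_MEMTYPE_CACHE=n ./my_app

# 3. Increase logging to see which transports handle the GPU pointer
UCX_LOG_LEVEL=debug UCX_MEMTYPE_CACHE=n ./my_app
```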

Why am I getting the error “provided PTX was compiled with an unsupported toolchain”?

The application is loading a CUDA binary that was compiled for a newer version of CUDA than the one installed, and the failure is detected asynchronously by a CUDA API call from UCX. To fix the issue, install a newer CUDA version, or compile the CUDA binaries with the appropriate -arch option to nvcc (refer to the nvcc documentation for the available -arch values).

Performance considerations

Does UCX support zero-copy for GPU memory over RDMA?

Yes. For large messages, UCX can transfer GPU memory using zero-copy RDMA with the rendezvous protocol. This requires a peer memory driver for the relevant GPU type to be loaded, or (starting with UCX v1.14.0) dmabuf support on the system.

NOTE: In some cases if the RDMA network device and the GPU are not on the same NUMA node, such zero-copy transfer is inefficient.

What is needed for dmabuf support?

  • UCX v1.14.0 or later.

  • Linux kernel >= 5.12 (for example, Ubuntu 22.04).

  • CUDA 11.7 or later, installed with the “-m=kernel-open” flag.

NOTE: Currently UCX code assumes that dmabuf support is uniform across all available GPU devices.


Protocol selection

How can I tell which protocols and transports are being used for communication?

  • Set UCX_LOG_LEVEL=info to print basic information about transports and devices:

     $ mpirun -x UCX_LOG_LEVEL=info -np 2 --map-by node osu_bw D D
     [1645203303.393917] [host1:42:0]     ucp_context.c:1782 UCX  INFO  UCP version is 1.13 (release 0)
     [1645203303.485011] [host2:43:0]     ucp_context.c:1782 UCX  INFO  UCP version is 1.13 (release 0)
     [1645203303.701062] [host1:42:0]          parser.c:1918 UCX  INFO  UCX_* env variable: UCX_LOG_LEVEL=info
     [1645203303.758427] [host2:43:0]          parser.c:1918 UCX  INFO  UCX_* env variable: UCX_LOG_LEVEL=info
     [1645203303.759862] [host2:43:0]      ucp_worker.c:1877 UCX  INFO  ep_cfg[2]: tag(self/memory0 knem/memory cuda_copy/cuda rc_mlx5/mlx5_0:1)
     [1645203303.760167] [host1:42:0]      ucp_worker.c:1877 UCX  INFO  ep_cfg[2]: tag(self/memory0 knem/memory cuda_copy/cuda rc_mlx5/mlx5_0:1)
     # MPI_Init() took 500.788 msec
     # OSU MPI-CUDA Bandwidth Test v5.6.2
     # Send Buffer on DEVICE (D) and Receive Buffer on DEVICE (D)
     # Size    Bandwidth (MB/s)
     [1645203303.805848] [host2:43:0]      ucp_worker.c:1877 UCX  INFO  ep_cfg[3]: tag(rc_mlx5/mlx5_0:1)
     [1645203303.873362] [host1:42:a]      ucp_worker.c:1877 UCX  INFO  ep_cfg[3]: tag(rc_mlx5/mlx5_0:1)
  • When using protocols v2, set UCX_PROTO_INFO=y for detailed information:

    $ mpirun -x UCX_PROTO_ENABLE=y -x UCX_PROTO_INFO=y -np 2 --map-by node osu_bw D D
    [1645027038.617078] [host1:42:0]   +---------------+---------------------------------------------------------------------------------------------------+
    [1645027038.617101] [host1:42:0]   | mpi ep_cfg[2] | tagged message by ucp_tag_send*() from host memory                                                |
    [1645027038.617104] [host1:42:0]   +---------------+--------------------------------------------------+------------------------------------------------+
    [1645027038.617107] [host1:42:0]   |       0..8184 | eager short                                      | self/memory0                                   |
    [1645027038.617110] [host1:42:0]   |    8185..9806 | eager copy-in copy-out                           | self/memory0                                   |
    [1645027038.617112] [host1:42:0]   |     9807..inf | (?) rendezvous zero-copy flushed write to remote | 55% on knem/memory and 45% on rc_mlx5/mlx5_0:1 |
    [1645027038.617115] [host1:42:0]   +---------------+--------------------------------------------------+------------------------------------------------+
    [1645027038.617307] [host2:43:0]   +---------------+---------------------------------------------------------------------------------------------------+
    [1645027038.617337] [host2:43:0]   | mpi ep_cfg[2] | tagged message by ucp_tag_send*() from host memory                                                |
    [1645027038.617341] [host2:43:0]   +---------------+--------------------------------------------------+------------------------------------------------+
    [1645027038.617344] [host2:43:0]   |       0..8184 | eager short                                      | self/memory0                                   |
    [1645027038.617348] [host2:43:0]   |    8185..9806 | eager copy-in copy-out                           | self/memory0                                   |
    [1645027038.617351] [host2:43:0]   |     9807..inf | (?) rendezvous zero-copy flushed write to remote | 55% on knem/memory and 45% on rc_mlx5/mlx5_0:1 |
    [1645027038.617354] [host2:43:0]   +---------------+--------------------------------------------------+------------------------------------------------+
    # MPI_Init() took 1479.255 msec
    # OSU MPI-CUDA Bandwidth Test v5.6.2
    # Size    Bandwidth (MB/s)
    [1645027038.674035] [host2:43:0]   +---------------+--------------------------------------------------------------+
    [1645027038.674043] [host2:43:0]   | mpi ep_cfg[3] | tagged message by ucp_tag_send*() from host memory           |
    [1645027038.674047] [host2:43:0]   +---------------+-------------------------------------------+------------------+
    [1645027038.674049] [host2:43:0]   |       0..2007 | eager short                               | rc_mlx5/mlx5_0:1 |
    [1645027038.674052] [host2:43:0]   |    2008..8246 | eager zero-copy copy-out                  | rc_mlx5/mlx5_0:1 |
    [1645027038.674055] [host2:43:0]   |   8247..17297 | eager zero-copy copy-out                  | rc_mlx5/mlx5_0:1 |
    [1645027038.674058] [host2:43:0]   |    17298..inf | (?) rendezvous zero-copy read from remote | rc_mlx5/mlx5_0:1 |
    [1645027038.674060] [host2:43:0]   +---------------+-------------------------------------------+------------------+
    [1645027038.680982] [host2:43:0]   +---------------+------------------------------------------------------------------------------------+
    [1645027038.680993] [host2:43:0]   | mpi ep_cfg[3] | tagged message by ucp_tag_send*() from cuda/GPU0                                   |
    [1645027038.680996] [host2:43:0]   +---------------+-----------------------------------------------------------------+------------------+
    [1645027038.680999] [host2:43:0]   |       0..8246 | eager zero-copy copy-out                                        | rc_mlx5/mlx5_0:1 |
    [1645027038.681001] [host2:43:0]   |  8247..811555 | eager zero-copy copy-out                                        | rc_mlx5/mlx5_0:1 |
    [1645027038.681004] [host2:43:0]   |   811556..inf | (?) rendezvous pipeline cuda_copy, fenced write to remote, cuda | rc_mlx5/mlx5_0:1 |
    [1645027038.681007] [host2:43:0]   +---------------+-----------------------------------------------------------------+------------------+
    [1645027038.693843] [host1:42:a]   +---------------+--------------------------------------------------------------+
    [1645027038.693856] [host1:42:a]   | mpi ep_cfg[3] | tagged message by ucp_tag_send*() from host memory           |
    [1645027038.693858] [host1:42:a]   +---------------+-------------------------------------------+------------------+
    [1645027038.693861] [host1:42:a]   |       0..2007 | eager short                               | rc_mlx5/mlx5_0:1 |
    [1645027038.693863] [host1:42:a]   |    2008..8246 | eager zero-copy copy-out                  | rc_mlx5/mlx5_0:1 |
    [1645027038.693865] [host1:42:a]   |   8247..17297 | eager zero-copy copy-out                  | rc_mlx5/mlx5_0:1 |
    [1645027038.693867] [host1:42:a]   |    17298..inf | (?) rendezvous zero-copy read from remote | rc_mlx5/mlx5_0:1 |
    [1645027038.693869] [host1:42:a]   +---------------+-------------------------------------------+------------------+