ML Frameworks Interoperability Cheat Sheet

Introduction

This notebook is an appendix to the Machine Learning Frameworks Interoperability blog series. It is meant as a lookup table for converting data between the following ML frameworks: pandas, NumPy, RAPIDS cuDF, CuPy, JAX, Numba, TensorFlow, PyTorch, and MXNet.

To make it easier to get all of these libraries up and running, we used the RAPIDS 0.18 container on Ubuntu 18.04 as the base image and then added the few missing libraries via pip install.

We encourage you to run this notebook on the latest RAPIDS container. Alternatively, you can set up a conda virtual environment. In both cases, please visit the RAPIDS release selector for installation details.

Finally, the details of the container we used when creating this notebook are listed below. For reproducibility, please use the following commands:

$ docker pull nvcr.io/nvidia/rapidsai/rapidsai:0.18-cuda11.0-runtime-ubuntu18.04
$ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 \
                  -v ~:/rapids/notebooks/host \
                  nvcr.io/nvidia/rapidsai/rapidsai:0.18-cuda11.0-runtime-ubuntu18.04

Install missing dependencies

In [1]:
# JAX install
print("Installing JAX")
!pip -q install --upgrade jax==0.2.10 jaxlib==0.1.60+cuda110 -f https://storage.googleapis.com/jax-releases/jax_releases.html
    
# PyTorch install
print("Installing PyTorch")
!pip -q install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
    
# MXNet install
print("Installing MXNet")
!pip -q install mxnet-cu110

# TensorFlow install
print("Installing TensorFlow")
!pip -q install tensorflow==2.4.1
Installing JAX
Installing PyTorch
Installing MXNet
Installing TensorFlow
In [2]:
import cudf
import cupy as cp
import jax
import jax.dlpack
import jax.numpy as jnp
import mxnet as mx
import numba as nb
import numpy as np
import pandas as pd
import tensorflow as tf
import torch
import torch.utils.dlpack

Index↑↑↑

(rows: source format; columns: target format)

            Pandas  Numpy  cuDF  CuPy  JAX   Numba  TensorFlow  PyTorch  MXNet
Pandas         n/a   code  code  code  code   code        code     code   code
Numpy         code    n/a  code  code  code   code        code     code   code
cuDF          code   code   n/a  code  code   code        code     code   code
CuPy          code   code  code   n/a  code   code        code     code   code
JAX           code   code  code  code   n/a   code        code     code   code
Numba         code   code  code  code  code    n/a        code     code   code
TensorFlow    code   code  code  code  code   code         n/a     code   code
PyTorch       code   code  code  code  code   code        code      n/a   code
MXNet         code   code  code  code  code   code        code     code    n/a

From Pandas to Numpy↑↑↑

In [3]:
# Option 1: Convert a Pandas DataFrame to a Numpy ndarray
src = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = src.to_numpy()

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 3]
 [2 4]]
In [4]:
# Option 2: Convert a Pandas DataFrame to a Numpy ndarray
src = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = src.values # "to_numpy()" is preferred to "values".

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 3]
 [2 4]]
In [5]:
# Option 3: Convert a Pandas DataFrame to a Numpy recarray
src = pd.DataFrame({'x': [1, 2], 'y': [3, 4]}, index=['a', 'b'])
dst = src.to_records()

print(type(dst), "\n", dst)
<class 'numpy.recarray'> 
 [('a', 1, 3) ('b', 2, 4)]

From Pandas to cuDF↑↑↑

In [6]:
# Option 1: Convert a Pandas DataFrame to a cuDF DataFrame
src = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = cudf.DataFrame(src)

print(type(dst), "\n", dst)
<class 'cudf.core.dataframe.DataFrame'> 
    x  y
0  1  3
1  2  4
In [7]:
# Option 2: Convert a Pandas DataFrame to a cuDF DataFrame
src = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = cudf.from_pandas(src)

print(type(dst), "\n", dst)
<class 'cudf.core.dataframe.DataFrame'> 
    x  y
0  1  3
1  2  4

From Pandas to CuPy↑↑↑

In [8]:
# Option 1: Pandas DataFrame to a CuPy ndarray
src = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = cp.asarray(src)

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 3]
 [2 4]]
In [9]:
# Option 2: Pandas DataFrame to a CuPy ndarray
src = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = cp.array(src)

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 3]
 [2 4]]

From Pandas to JAX↑↑↑

JAX does not natively support Pandas DataFrames. Nevertheless, it supports TensorFlow TensorSliceDatasets, which can be generated from Pandas DataFrames.

See also: Pandas → Numpy and Numpy → JAX.

In [10]:
# Convert a Pandas DataFrame to a TensorFlow TensorSliceDataset
src = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = tf.data.Dataset.from_tensor_slices(src)
    
print(type(dst), "\n", dst)
<class 'tensorflow.python.data.ops.dataset_ops.TensorSliceDataset'> 
 <TensorSliceDataset shapes: (2,), types: tf.int64>
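
Alternatively, a minimal sketch of the two-step route through Numpy (using only APIs already shown in this notebook):

# Convert a Pandas DataFrame to a JAX DeviceArray via a Numpy ndarray
src = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = jnp.asarray(src.to_numpy())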

From Pandas to Numba↑↑↑

Numba does not natively support Pandas DataFrames. Nevertheless, a Pandas DataFrame can be converted to other Numba-supported formats, such as Numpy ndarrays. See: Pandas → Numpy and Numpy → Numba.
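
For example, a minimal sketch of that two-step route:

# Convert a Pandas DataFrame to a Numba DeviceNDArray via a Numpy ndarray
src = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = nb.cuda.to_device(src.to_numpy())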

From Pandas to TensorFlow↑↑↑

In [11]:
# Convert a Pandas DataFrame to a TensorFlow TensorSliceDataset
src = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = tf.data.Dataset.from_tensor_slices(src)

print(type(dst), "\n", dst)
<class 'tensorflow.python.data.ops.dataset_ops.TensorSliceDataset'> 
 <TensorSliceDataset shapes: (2,), types: tf.int64>

From Pandas to PyTorch↑↑↑

PyTorch does not natively support Pandas DataFrames. Nevertheless, it supports Numpy ndarrays, which can be generated from Pandas DataFrames.

See: Pandas → Numpy and Numpy → PyTorch.
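
For example, a minimal sketch of that two-step route:

# Convert a Pandas DataFrame to a PyTorch Tensor via a Numpy ndarray
src = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = torch.from_numpy(src.to_numpy())

Note that torch.from_numpy shares memory with the intermediate ndarray; use torch.tensor(src.to_numpy()) to force a copy.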

From Pandas to MXNet↑↑↑

In [12]:
# Convert a Pandas DataFrame to an MXNet NDArray
src = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = mx.nd.array(src)

print(type(dst), "\n", dst)
<class 'mxnet.ndarray.ndarray.NDArray'> 
 
[[1. 3.]
 [2. 4.]]
<NDArray 2x2 @cpu(0)>

From Numpy to Pandas↑↑↑

In [13]:
# Convert a Numpy ndarray to a Pandas DataFrame
src = np.array([[1, 2], [3, 4]])
dst = pd.DataFrame(src)

print(type(dst), "\n", dst)
<class 'pandas.core.frame.DataFrame'> 
    0  1
0  1  2
1  3  4

From Numpy to cuDF↑↑↑

In [14]:
# Option 1: Convert a Numpy ndarray to a cuDF DataFrame
src = np.array([[1, 2], [3, 4]])
dst = cudf.DataFrame(src)

print(type(dst), "\n", dst)
<class 'cudf.core.dataframe.DataFrame'> 
    0  1
0  1  2
1  3  4
In [15]:
# Option 2: Convert a Numpy recarray to a cuDF DataFrame
src = np.rec.array([(1, 2), (3, 4)], names=['a', 'b'])
dst = cudf.DataFrame.from_records(src)

print(type(dst), "\n", dst)
<class 'cudf.core.dataframe.DataFrame'> 
    a  b
0  1  2
1  3  4

From Numpy to CuPy↑↑↑

In [16]:
# Option 1: Numpy ndarray to a CuPy ndarray
src = np.array([[1, 2], [3, 4]])
dst = cp.asarray(src)

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 2]
 [3 4]]
In [17]:
# Option 2: Numpy ndarray to a CuPy ndarray
src = np.array([[1, 2], [3, 4]])
dst = cp.array(src)

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 2]
 [3 4]]

From Numpy to JAX↑↑↑

In [18]:
# Option 1: Numpy ndarray to a JAX DeviceArray
src = np.array([[1, 2], [3, 4]])
dst = jnp.array(src)

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1 2]
 [3 4]]
In [19]:
# Option 2: Numpy ndarray to a JAX DeviceArray
src = np.array([[1, 2], [3, 4]])
dst = jnp.asarray(src)

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1 2]
 [3 4]]
In [20]:
# Option 3: Numpy ndarray to a JAX DeviceArray
src = np.array([[1, 2], [3, 4]])
dst = jax.device_put(src)

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1 2]
 [3 4]]

From Numpy to Numba↑↑↑

Numba natively supports Numpy ndarrays. Alternatively, a Numba DeviceNDArray can be created from a Numpy ndarray.

In [21]:
# Convert a Numpy ndarray to a Numba DeviceNDArray
src = np.array([[1, 2], [3, 4]])
dst = nb.cuda.to_device(src)

print(type(dst), "\n", dst)
<class 'numba.cuda.cudadrv.devicearray.DeviceNDArray'> 
 <numba.cuda.cudadrv.devicearray.DeviceNDArray object at 0x7fd064752250>

From Numpy to TensorFlow↑↑↑

In [22]:
# Convert a Numpy ndarray to a TensorFlow TensorSliceDataset
src = np.array([[1, 2], [3, 4]])
dst = tf.data.Dataset.from_tensor_slices(src)

print(type(dst), "\n", dst)
<class 'tensorflow.python.data.ops.dataset_ops.TensorSliceDataset'> 
 <TensorSliceDataset shapes: (2,), types: tf.int64>

From Numpy to PyTorch↑↑↑

In [23]:
# Convert a Numpy ndarray to a PyTorch Tensor
src = np.array([[1, 2], [3, 4]])
dst = torch.tensor(src)

print(type(dst), "\n", dst)
<class 'torch.Tensor'> 
 tensor([[1, 2],
        [3, 4]])

From Numpy to MXNet↑↑↑

In [24]:
# Convert a Numpy ndarray to an MXNet NDArray
src = np.array([[1, 2], [3, 4]])
dst = mx.nd.array(src)

print(type(dst), "\n", dst)
<class 'mxnet.ndarray.ndarray.NDArray'> 
 
[[1. 2.]
 [3. 4.]]
<NDArray 2x2 @cpu(0)>

From cuDF to Pandas↑↑↑

In [25]:
# Convert a cuDF DataFrame to a Pandas DataFrame
src = cudf.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = src.to_pandas()

print(type(dst), "\n", dst)
<class 'pandas.core.frame.DataFrame'> 
    x  y
0  1  3
1  2  4

From cuDF to Numpy↑↑↑

In [26]:
# Option 1: Convert a cuDF DataFrame to a Numpy ndarray
src = cudf.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = src.as_matrix()

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 3]
 [2 4]]
In [27]:
# Option 2: Convert a cuDF DataFrame to a Numpy recarray
src = cudf.DataFrame({'x': [1, 2], 'y': [3, 4]}, index=['a', 'b'])
dst = src.to_records()

print(type(dst), "\n", dst)
<class 'numpy.recarray'> 
 [('a', 1, 3) ('b', 2, 4)]

From cuDF to CuPy↑↑↑

In [28]:
# Option 1: Convert a cuDF DataFrame to a CuPy ndarray
src = cudf.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = cp.asarray(src.as_gpu_matrix())

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 3]
 [2 4]]
In [29]:
# Option 2: Convert a cuDF DataFrame to a CuPy ndarray
src = cudf.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = cp.fromDlpack(src.to_dlpack())

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 3]
 [2 4]]

From cuDF to JAX↑↑↑

In [30]:
# Convert a cuDF DataFrame to a JAX DeviceArray
src = cudf.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = jax.dlpack.from_dlpack(src.to_dlpack())

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1 3]
 [2 4]]

From cuDF to Numba↑↑↑

In [31]:
# Convert a cuDF DataFrame to a Numba DeviceNDArray
src = cudf.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = src.as_gpu_matrix()

print(type(dst), "\n", dst)
<class 'numba.cuda.cudadrv.devicearray.DeviceNDArray'> 
 <numba.cuda.cudadrv.devicearray.DeviceNDArray object at 0x7fd064772fd0>

From cuDF to TensorFlow↑↑↑

In [32]:
# Option 1: Convert a cuDF DataFrame to a TensorFlow EagerTensor
src = cudf.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = tf.experimental.dlpack.from_dlpack(cp.fromDlpack(src.to_dlpack()).T.toDlpack())

print(type(dst), "\n", dst)
<class 'tensorflow.python.framework.ops.EagerTensor'> 
 tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int64)
In [33]:
# Option 2: Convert a cuDF DataFrame to a TensorFlow EagerTensor
src = cudf.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = tf.experimental.dlpack.from_dlpack(cp.asarray(src.as_gpu_matrix()).T.toDlpack()) 

print(type(dst), "\n", dst)
<class 'tensorflow.python.framework.ops.EagerTensor'> 
 tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int64)

From cuDF to PyTorch↑↑↑

In [34]:
# Convert a cuDF DataFrame to a PyTorch Tensor
src = cudf.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = torch.utils.dlpack.from_dlpack(src.to_dlpack())

print(type(dst), "\n", dst)
<class 'torch.Tensor'> 
 tensor([[1, 3],
        [2, 4]], device='cuda:0')

From cuDF to MXNet↑↑↑

In [35]:
# Convert a cuDF DataFrame to an MXNet NDArray
src = cudf.DataFrame({'x': [1, 2], 'y': [3, 4]})
dst = mx.nd.from_dlpack(cp.fromDlpack(src.to_dlpack()).T.toDlpack())

print(type(dst), "\n", dst)
<class 'mxnet.ndarray.ndarray.NDArray'> 
 
[[1 2]
 [3 4]]
<NDArray 2x2 @gpu(0)>

From CuPy to Pandas↑↑↑

In [36]:
# Option 1: Convert a CuPy ndarray to a Pandas DataFrame 
src = cp.array([[1, 2], [3, 4]])
dst = pd.DataFrame(src)

print(type(dst), "\n", dst)
<class 'pandas.core.frame.DataFrame'> 
    0  1
0  1  2
1  3  4
In [37]:
# Option 2: Convert a CuPy ndarray to a Pandas DataFrame 
src = cp.array([[1, 2], [3, 4]])
dst = pd.DataFrame(cp.asnumpy(src))

print(type(dst), "\n", dst)
<class 'pandas.core.frame.DataFrame'> 
    0  1
0  1  2
1  3  4
In [38]:
# Option 3: Convert a CuPy ndarray to a Pandas DataFrame 
src = cp.array([[1, 2], [3, 4]])
dst = pd.DataFrame(cp.ndarray.get(src))

print(type(dst), "\n", dst)
<class 'pandas.core.frame.DataFrame'> 
    0  1
0  1  2
1  3  4

From CuPy to Numpy↑↑↑

In [39]:
# Option 1: Convert a CuPy ndarray to a Numpy ndarray 
src = cp.array([[1, 2], [3, 4]])
dst = cp.asnumpy(src)

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 2]
 [3 4]]
In [40]:
# Option 2: Convert a CuPy ndarray to a Numpy ndarray 
src = cp.array([[1, 2], [3, 4]])
dst = cp.ndarray.get(src)

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 2]
 [3 4]]

From CuPy to cuDF↑↑↑

In [41]:
# Option 1: Convert a CuPy ndarray to a cuDF DataFrame 
src = cp.array([[1, 2], [3, 4]])
dst = cudf.DataFrame(src)

print(type(dst), "\n", dst)
<class 'cudf.core.dataframe.DataFrame'> 
    0  1
0  1  2
1  3  4
In [42]:
# Option 2: Convert a CuPy ndarray to a cuDF DataFrame 
src = cp.array([[1, 2], [3, 4]])
dst = cudf.from_dlpack(src.toDlpack())

print(type(dst), "\n", dst)
<class 'cudf.core.dataframe.DataFrame'> 
    0  1
0  1  2
1  2  3

From CuPy to JAX↑↑↑

In [43]:
# Convert a CuPy ndarray to a JAX DeviceArray
src = cp.array([[1, 2], [3, 4]])
dst = jax.dlpack.from_dlpack(src.toDlpack())

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1 2]
 [3 4]]

From CuPy to Numba↑↑↑

In [44]:
# Option 1: Convert a CuPy ndarray to a Numba DeviceNDArray
src = cp.array([[1, 2], [3, 4]])
dst = nb.cuda.as_cuda_array(src)

print(type(dst), "\n", dst)
<class 'numba.cuda.cudadrv.devicearray.DeviceNDArray'> 
 <numba.cuda.cudadrv.devicearray.DeviceNDArray object at 0x7fd01bfed5d0>
In [45]:
# Option 2: Convert a CuPy ndarray to a Numba DeviceNDArray
src = cp.array([[1, 2], [3, 4]])
dst = nb.cuda.to_device(src)

print(type(dst), "\n", dst)
<class 'numba.cuda.cudadrv.devicearray.DeviceNDArray'> 
 <numba.cuda.cudadrv.devicearray.DeviceNDArray object at 0x7fd01bfed110>

From CuPy to TensorFlow↑↑↑

In [46]:
# Convert a CuPy ndarray to a TensorFlow EagerTensor
src = cp.array([[1, 2], [3, 4]])
dst = tf.experimental.dlpack.from_dlpack(src.toDlpack())

print(type(dst), "\n", dst)
<class 'tensorflow.python.framework.ops.EagerTensor'> 
 tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int64)

From CuPy to PyTorch↑↑↑

In [47]:
# Convert a CuPy ndarray to a PyTorch Tensor
src = cp.array([[1, 2], [3, 4]])
dst = torch.utils.dlpack.from_dlpack(src.toDlpack())

print(type(dst), "\n", dst)
<class 'torch.Tensor'> 
 tensor([[1, 2],
        [3, 4]], device='cuda:0')

From CuPy to MXNet↑↑↑

In [48]:
# Convert a CuPy ndarray to an MXNet NDArray
src = cp.array([[1, 2], [3, 4]])
dst = mx.nd.from_dlpack(src.toDlpack())

print(type(dst), "\n", dst)
<class 'mxnet.ndarray.ndarray.NDArray'> 
 
[[1 2]
 [3 4]]
<NDArray 2x2 @gpu(0)>

From JAX to Pandas↑↑↑

In [49]:
# Convert a JAX DeviceArray to a Pandas DataFrame
src = jnp.array([[1, 2], [3, 4]])
dst = pd.DataFrame(src)

print(type(dst), "\n", dst)
<class 'pandas.core.frame.DataFrame'> 
    0  1
0  1  2
1  3  4

From JAX to Numpy↑↑↑

In [50]:
# Option 1: Convert a JAX DeviceArray to a Numpy ndarray
src = jnp.array([[1, 2], [3, 4]])
dst = np.asarray(src)

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 2]
 [3 4]]
In [51]:
# Option 2: Convert a JAX DeviceArray to a Numpy ndarray
src = jnp.array([[1, 2], [3, 4]])
dst = np.array(src)

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 2]
 [3 4]]

From JAX to cuDF↑↑↑

In [52]:
# Convert a JAX DeviceArray to a cuDF DataFrame
src = jnp.array([[1, 2], [3, 4]])
dst = cudf.from_dlpack(jax.dlpack.to_dlpack(src))

print(type(dst), "\n", dst)
<class 'cudf.core.dataframe.DataFrame'> 
    0  1
0  1  2
1  2  3

From JAX to CuPy↑↑↑

In [53]:
# Convert a JAX DeviceArray to a CuPy ndarray
src = jnp.array([[1, 2], [3, 4]])
dst = cp.fromDlpack(jax.dlpack.to_dlpack(src))

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 2]
 [3 4]]

From JAX to Numba↑↑↑

In [54]:
# Option 1: Convert a JAX DeviceArray to a Numba DeviceNDArray
src = jnp.array([[1, 2], [3, 4]])
dst = nb.cuda.as_cuda_array(src)

print(type(dst), "\n", dst)
<class 'numba.cuda.cudadrv.devicearray.DeviceNDArray'> 
 <numba.cuda.cudadrv.devicearray.DeviceNDArray object at 0x7fd102e11710>
In [55]:
# Option 2: Convert a JAX DeviceArray to a Numba DeviceNDArray
src = jnp.array([[1, 2], [3, 4]])
dst = nb.cuda.to_device(src)

print(type(dst), "\n", dst)
<class 'numba.cuda.cudadrv.devicearray.DeviceNDArray'> 
 <numba.cuda.cudadrv.devicearray.DeviceNDArray object at 0x7fd01bff2350>

From JAX to TensorFlow↑↑↑

In [56]:
# Convert a JAX DeviceArray to a TensorFlow EagerTensor
src = jnp.array([[1, 2], [3, 4]])
dst = tf.experimental.dlpack.from_dlpack(jax.dlpack.to_dlpack(src))

print(type(dst), "\n", dst)
<class 'tensorflow.python.framework.ops.EagerTensor'> 
 tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int32)

From JAX to PyTorch↑↑↑

In [57]:
# Convert a JAX DeviceArray to a PyTorch Tensor
src = jnp.array([[1, 2], [3, 4]])
dst = torch.utils.dlpack.from_dlpack(jax.dlpack.to_dlpack(src))

print(type(dst), "\n", dst)
<class 'torch.Tensor'> 
 tensor([[1, 2],
        [3, 4]], device='cuda:0', dtype=torch.int32)

From JAX to MXNet↑↑↑

In [58]:
# Convert a JAX DeviceArray to an MXNet NDArray
src = jnp.array([[1, 2], [3, 4]])
dst = mx.nd.from_dlpack(jax.dlpack.to_dlpack(src))

print(type(dst), "\n", dst)
<class 'mxnet.ndarray.ndarray.NDArray'> 
 
[[1 2]
 [3 4]]
<NDArray 2x2 @gpu(0)>

From Numba to Pandas↑↑↑

Pandas does not natively support Numba DeviceNDArrays. Nevertheless, it supports Numpy ndarrays, which can be generated from Numba DeviceNDArrays. See: Numba → Numpy and Numpy → Pandas.
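
For example, a minimal sketch of that two-step route:

# Convert a GPU-based Numba DeviceNDArray to a Pandas DataFrame via a Numpy ndarray
src = nb.cuda.to_device([[1, 2], [3, 4]])
dst = pd.DataFrame(src.copy_to_host())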

From Numba to Numpy↑↑↑

In [59]:
# Convert a GPU-based Numba DeviceNDArray to a Numpy ndarray
src = nb.cuda.to_device([[1, 2], [3, 4]])
dst = src.copy_to_host()

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 2]
 [3 4]]

From Numba to cuDF↑↑↑

In [60]:
# Convert a GPU-based Numba DeviceNDArray to a cuDF DataFrame
src = nb.cuda.to_device([[1, 2], [3, 4]])
dst = cudf.DataFrame(src)

print(type(dst), "\n", dst)
<class 'cudf.core.dataframe.DataFrame'> 
    0  1
0  1  2
1  3  4

From Numba to CuPy↑↑↑

In [62]:
# Option 1: Convert a GPU-based Numba DeviceNDArray to a CuPy ndarray
src = nb.cuda.to_device([[1, 2], [3, 4]])
dst = cp.asarray(src)

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 2]
 [3 4]]
In [63]:
# Option 2: Convert a GPU-based Numba DeviceNDArray to a CuPy ndarray
src = nb.cuda.to_device([[1, 2], [3, 4]])
dst = cp.array(src)

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 2]
 [3 4]]

From Numba to JAX↑↑↑

In [64]:
# Option 1: Convert a GPU-based Numba DeviceNDArray to a JAX DeviceArray
src = nb.cuda.to_device([[1, 2], [3, 4]])
dst = jnp.asarray(src)

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1. 2.]
 [3. 4.]]
In [65]:
# Option 2: Convert a GPU-based Numba DeviceNDArray to a JAX DeviceArray
src = nb.cuda.to_device([[1, 2], [3, 4]])
dst = jnp.array(src)

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1. 2.]
 [3. 4.]]

From Numba to TensorFlow↑↑↑

In [66]:
# Convert a GPU-based Numba DeviceNDArray to a TensorFlow TensorSliceDataset
src = nb.cuda.to_device([[1, 2], [3, 4]])
dst = tf.data.Dataset.from_tensor_slices(src)

print(type(dst), "\n", dst)
<class 'tensorflow.python.data.ops.dataset_ops.TensorSliceDataset'> 
 <TensorSliceDataset shapes: (2,), types: tf.float64>

From Numba to PyTorch↑↑↑

In [67]:
# Convert a GPU-based Numba DeviceNDArray to a PyTorch Tensor
src = nb.cuda.to_device([[1, 2], [3, 4]])
dst = torch.tensor(src)

print(type(dst), "\n", dst)
<class 'torch.Tensor'> 
 tensor([[1, 2],
        [3, 4]])

From Numba to MXNet↑↑↑

In [68]:
# Convert a GPU-based Numba DeviceNDArray to an MXNet NDArray
src = nb.cuda.to_device([[1, 2], [3, 4]])
dst = mx.nd.array(src, ctx=mx.gpu())

print(type(dst), "\n", dst)
<class 'mxnet.ndarray.ndarray.NDArray'> 
 
[[1. 2.]
 [3. 4.]]
<NDArray 2x2 @gpu(0)>

From TensorFlow to Pandas↑↑↑

Pandas does not natively support TensorFlow EagerTensors. Nevertheless, it supports Numpy ndarrays, which can be generated from TensorFlow EagerTensors. See: TensorFlow → Numpy and Numpy → Pandas.
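
For example, a minimal sketch of that two-step route:

# Convert a TensorFlow EagerTensor to a Pandas DataFrame via a Numpy ndarray
src = tf.convert_to_tensor([[1, 2], [3, 4]])
dst = pd.DataFrame(src.numpy())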

From TensorFlow to Numpy↑↑↑

In [69]:
# Option 1: Convert a TensorFlow EagerTensor to a Numpy ndarray
src = tf.convert_to_tensor([[1, 2], [3, 4]])
dst = np.asarray(src)

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 2]
 [3 4]]
In [70]:
# Option 2: Convert a TensorFlow EagerTensor to a Numpy ndarray
src = tf.convert_to_tensor([[1, 2], [3, 4]])
dst = np.array(src)

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 2]
 [3 4]]
In [71]:
# Option 3: Convert a TensorFlow EagerTensor to a Numpy ndarray
src = tf.convert_to_tensor([[1, 2], [3, 4]])
dst = src.numpy()

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 2]
 [3 4]]

From TensorFlow to cuDF↑↑↑

In [72]:
# Convert a TensorFlow EagerTensor to a cuDF DataFrame
src = tf.convert_to_tensor([[1, 2], [3, 4]])
dst = cudf.from_dlpack(tf.experimental.dlpack.to_dlpack(src))

print(type(dst), "\n", dst)
<class 'cudf.core.dataframe.DataFrame'> 
    0  1
0  1  2
1  2  3

From TensorFlow to CuPy↑↑↑

In [73]:
# Convert a TensorFlow EagerTensor to a CuPy ndarray
src = tf.math.add(tf.zeros([2, 2]), [[1, 2], [3, 4]])
dst = cp.fromDlpack(tf.experimental.dlpack.to_dlpack(src))

print(src.backing_device)
print(type(dst), "\n", dst)
/job:localhost/replica:0/task:0/device:GPU:0
<class 'cupy.core.core.ndarray'> 
 [[1. 2.]
 [3. 4.]]

From TensorFlow to JAX↑↑↑

In [75]:
# Option 1: Convert a GPU-based TensorFlow EagerTensor to a GPU-based JAX DeviceArray
src = tf.math.add(tf.zeros([2, 2]), [[1, 2], [3, 4]])
dst = jax.dlpack.from_dlpack(tf.experimental.dlpack.to_dlpack(src))

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1. 2.]
 [3. 4.]]
In [76]:
# Option 2: Convert a CPU or GPU-based TensorFlow EagerTensor to a CPU-based JAX DeviceArray
src = tf.convert_to_tensor([[1, 2], [3, 4]])
dst = jnp.asarray(src)

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1 2]
 [3 4]]
In [77]:
# Option 3: Convert a CPU or GPU-based TensorFlow EagerTensor to a CPU-based JAX DeviceArray
src = tf.convert_to_tensor([[1, 2], [3, 4]])
dst = jnp.array(src)

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1 2]
 [3 4]]

From TensorFlow to Numba↑↑↑

In [78]:
# Convert a TensorFlow EagerTensor to a Numba DeviceNDArray
src = tf.convert_to_tensor([[1, 2], [3, 4]])
dst = nb.cuda.to_device(src)

print(type(dst), "\n", dst)
<class 'numba.cuda.cudadrv.devicearray.DeviceNDArray'> 
 <numba.cuda.cudadrv.devicearray.DeviceNDArray object at 0x7fd01bf89290>

From TensorFlow to PyTorch↑↑↑

In [79]:
# Convert a TensorFlow EagerTensor to a PyTorch Tensor
src = tf.math.add(tf.zeros([2, 2]), [[1, 2], [3, 4]])
dst = torch.utils.dlpack.from_dlpack(tf.experimental.dlpack.to_dlpack(src))

print(type(dst), "\n", dst)
<class 'torch.Tensor'> 
 tensor([[1., 2.],
        [3., 4.]], device='cuda:0')

From TensorFlow to MXNet↑↑↑

In [80]:
# Convert a TensorFlow EagerTensor to an MXNet NDArray
src = tf.math.add(tf.zeros([2, 2]), [[1, 2], [3, 4]])
dst = mx.nd.from_dlpack(tf.experimental.dlpack.to_dlpack(src))

print(type(dst), "\n", dst)
<class 'mxnet.ndarray.ndarray.NDArray'> 
 
[[1. 2.]
 [3. 4.]]
<NDArray 2x2 @gpu(0)>

From PyTorch to Pandas↑↑↑

In [81]:
# Convert a PyTorch Tensor to a Pandas DataFrame
src = torch.tensor([[1, 2], [3, 4]])
dst = pd.DataFrame(src).astype("int64")

print(type(dst), "\n", dst)
<class 'pandas.core.frame.DataFrame'> 
    0  1
0  1  2
1  3  4

From PyTorch to Numpy↑↑↑

In [82]:
# Option 1: Convert a CPU-based PyTorch Tensor to a Numpy ndarray
src = torch.tensor([[1, 2], [3, 4]])
dst = src.numpy()

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 2]
 [3 4]]
In [83]:
# Option 2: Convert a GPU-based PyTorch Tensor to a Numpy ndarray
src = torch.cuda.IntTensor([[1, 2], [3, 4]])
dst = src.cpu().numpy()

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 2]
 [3 4]]

From PyTorch to cuDF↑↑↑

In [84]:
# Convert a PyTorch Tensor to a cuDF DataFrame
src = torch.cuda.IntTensor([[1, 2], [3, 4]])
dst = cudf.DataFrame(src)

print(type(dst), "\n", dst)
<class 'cudf.core.dataframe.DataFrame'> 
    0  1
0  1  2
1  3  4

From PyTorch to CuPy↑↑↑

In [85]:
# Option 1: Convert a CPU or GPU-based PyTorch Tensor to a CuPy ndarray
src = torch.tensor([[1, 2], [3, 4]], dtype=torch.int32)
dst = cp.asarray(src)

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 2]
 [3 4]]
In [86]:
# Option 2: Convert a CPU or GPU-based PyTorch Tensor to a CuPy ndarray
src = torch.cuda.IntTensor([[1, 2], [3, 4]])
dst = cp.array(src)

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 2]
 [3 4]]
In [87]:
# Option 3: Convert a GPU-based PyTorch Tensor to a CuPy ndarray
src = torch.cuda.IntTensor([[1, 2], [3, 4]])
dst = cp.fromDlpack(torch.utils.dlpack.to_dlpack(src))

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 2]
 [3 4]]

From PyTorch to JAX↑↑↑

In [88]:
# Option 1: Convert a CPU-based PyTorch Tensor to a JAX DeviceArray
src = torch.tensor([[1, 2], [3, 4]], dtype=torch.int32)
dst = jnp.asarray(src)

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1 2]
 [3 4]]
In [89]:
# Option 2: Convert a CPU-based PyTorch Tensor to a JAX DeviceArray
src = torch.tensor([[1, 2], [3, 4]], dtype=torch.int32)
dst = jnp.array(src)

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1 2]
 [3 4]]
In [90]:
# Option 3: Convert a GPU-based PyTorch Tensor to a JAX DeviceArray
src = torch.cuda.IntTensor([[1, 2], [3, 4]])
dst = jax.dlpack.from_dlpack(torch.utils.dlpack.to_dlpack(src))

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1 2]
 [3 4]]

From PyTorch to Numba↑↑↑

In [91]:
# Option 1: Convert a CPU or GPU-based PyTorch Tensor to a Numba DeviceNDArray
src = torch.tensor([[1, 2], [3, 4]], dtype=torch.int32)
dst = nb.cuda.to_device(src)

print(type(dst), "\n", dst)
<class 'numba.cuda.cudadrv.devicearray.DeviceNDArray'> 
 <numba.cuda.cudadrv.devicearray.DeviceNDArray object at 0x7fd01bf91510>
In [92]:
# Option 2: Convert a GPU-based PyTorch Tensor to a Numba DeviceNDArray
src = torch.cuda.IntTensor([[1, 2], [3, 4]])
dst = nb.cuda.as_cuda_array(src)

print(type(dst), "\n", dst)
<class 'numba.cuda.cudadrv.devicearray.DeviceNDArray'> 
 <numba.cuda.cudadrv.devicearray.DeviceNDArray object at 0x7fd06478e310>

From PyTorch to TensorFlow↑↑↑

In [93]:
# Option 1: Convert a CPU-based PyTorch Tensor to a TensorFlow EagerTensor
src = torch.tensor([[1, 2], [3, 4]], dtype=torch.int32)
dst = tf.convert_to_tensor(src)

print(type(dst), "\n", dst)
<class 'tensorflow.python.framework.ops.EagerTensor'> 
 tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int32)
In [94]:
# Option 2: Convert a CPU or GPU-based PyTorch Tensor to a TensorFlow EagerTensor
src = torch.cuda.IntTensor([[1, 2], [3, 4]])
dst = tf.experimental.dlpack.from_dlpack(torch.utils.dlpack.to_dlpack(src))

print(type(dst), "\n", dst)
<class 'tensorflow.python.framework.ops.EagerTensor'> 
 tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int32)

From PyTorch to MXNet↑↑↑

In [95]:
# Option 1: Convert a CPU-based PyTorch Tensor to an MXNet NDArray
src = torch.tensor([[1, 2], [3, 4]], dtype=torch.int32)
dst = mx.nd.array(src)

print(type(dst), "\n", dst)
<class 'mxnet.ndarray.ndarray.NDArray'> 
 
[[1. 2.]
 [3. 4.]]
<NDArray 2x2 @cpu(0)>
In [96]:
# Option 2: Convert a CPU or GPU-based PyTorch Tensor to an MXNet NDArray
src = torch.cuda.IntTensor([[1, 2], [3, 4]])
dst = mx.nd.from_dlpack(torch.utils.dlpack.to_dlpack(src))

print(type(dst), "\n", dst)
<class 'mxnet.ndarray.ndarray.NDArray'> 
 
[[1 2]
 [3 4]]
<NDArray 2x2 @gpu(0)>

From MXNet to Pandas↑↑↑

Pandas does not natively support MXNet NDArrays. Nevertheless, it supports Numpy ndarrays, which can be generated from MXNet NDArrays.

See: MXNet → Numpy and Numpy → Pandas.
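
For example, a minimal sketch of that two-step route:

# Convert an MXNet NDArray to a Pandas DataFrame via a Numpy ndarray
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32')
dst = pd.DataFrame(src.asnumpy())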

From MXNet to Numpy↑↑↑

In [97]:
# Convert a CPU or GPU-based MXNet NDArray to a Numpy ndarray
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32', ctx=mx.gpu())
dst = src.asnumpy()

print(type(dst), "\n", dst)
<class 'numpy.ndarray'> 
 [[1 2]
 [3 4]]

From MXNet to cuDF↑↑↑

In [98]:
# Option 1: Convert a CPU or GPU-based MXNet NDArray to a cuDF DataFrame
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32', ctx=mx.gpu())
dst = cudf.from_dlpack(src.to_dlpack_for_write())

print(type(dst), "\n", dst)
<class 'cudf.core.dataframe.DataFrame'> 
    0  1
0  1  3
1  2  4
In [99]:
# Option 2: Convert a CPU or GPU-based MXNet NDArray to a cuDF DataFrame
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32', ctx=mx.gpu())
dst = cudf.from_dlpack(src.to_dlpack_for_read())

print(type(dst), "\n", dst)
<class 'cudf.core.dataframe.DataFrame'> 
    0  1
0  1  3
1  2  4

From MXNet to CuPy↑↑↑

CuPy does not natively support CPU-based MXNet NDArrays. Nevertheless, it supports Numpy ndarrays, which can be generated from MXNet NDArrays.

See: MXNet → Numpy and Numpy → CuPy.
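
For the CPU case, a minimal sketch of that two-step route:

# Convert a CPU-based MXNet NDArray to a CuPy ndarray via a Numpy ndarray
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32')  # CPU context by default
dst = cp.asarray(src.asnumpy())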

In [100]:
# Option 1: Convert a GPU-based MXNet NDArray to a CuPy ndarray
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32', ctx=mx.gpu())
dst = cp.fromDlpack(src.to_dlpack_for_write())

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 2]
 [3 4]]
In [101]:
# Option 2: Convert a GPU-based MXNet NDArray to a CuPy ndarray
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32', ctx=mx.gpu())
dst = cp.fromDlpack(src.to_dlpack_for_read())

print(type(dst), "\n", dst)
<class 'cupy.core.core.ndarray'> 
 [[1 2]
 [3 4]]

From MXNet to JAX↑↑↑

JAX does not natively support CPU-based MXNet NDArrays. Nevertheless, it supports Numpy ndarrays, which can be generated from MXNet NDArrays.

See: MXNet → Numpy and Numpy → JAX.
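
For the CPU case, a minimal sketch of that two-step route:

# Convert a CPU-based MXNet NDArray to a JAX DeviceArray via a Numpy ndarray
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32')  # CPU context by default
dst = jnp.asarray(src.asnumpy())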

In [102]:
# Option 1: Convert a GPU-based MXNet NDArray to a JAX DeviceArray
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32', ctx=mx.gpu())
dst = jax.dlpack.from_dlpack(src.to_dlpack_for_write())

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1 2]
 [3 4]]
In [103]:
# Option 2: Convert a GPU-based MXNet NDArray to a JAX DeviceArray
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32', ctx=mx.gpu())
dst = jax.dlpack.from_dlpack(src.to_dlpack_for_read())

print(type(dst), "\n", dst)
<class 'jax.interpreters.xla._DeviceArray'> 
 [[1 2]
 [3 4]]

From MXNet to Numba↑↑↑

In [104]:
# Convert a CPU or GPU-based MXNet NDArray to a Numba DeviceNDArray
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32', ctx=mx.gpu())
dst = nb.cuda.to_device(src)

print(type(dst), "\n", dst)
<class 'numba.cuda.cudadrv.devicearray.DeviceNDArray'> 
 <numba.cuda.cudadrv.devicearray.DeviceNDArray object at 0x7fd0647ca850>

From MXNet to TensorFlow↑↑↑

In [105]:
# Option 1: Convert a CPU or GPU-based MXNet NDArray to a TensorFlow EagerTensor
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32', ctx=mx.gpu())
dst = tf.experimental.dlpack.from_dlpack(src.to_dlpack_for_write())

print(type(dst), "\n", dst)
<class 'tensorflow.python.framework.ops.EagerTensor'> 
 tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int32)
In [106]:
# Option 2: Convert a CPU or GPU-based MXNet NDArray to a TensorFlow EagerTensor
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32', ctx=mx.gpu())
dst = tf.experimental.dlpack.from_dlpack(src.to_dlpack_for_read())

print(type(dst), "\n", dst)
<class 'tensorflow.python.framework.ops.EagerTensor'> 
 tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int32)

From MXNet to PyTorch↑↑↑

In [107]:
# Convert a CPU or GPU-based MXNet NDArray to a PyTorch Tensor
src = mx.nd.array([[1, 2], [3, 4]], dtype='int32', ctx=mx.gpu())
dst = torch.utils.dlpack.from_dlpack(src.to_dlpack_for_write())

print(type(dst), "\n", dst)
<class 'torch.Tensor'> 
 tensor([[1, 2],
        [3, 4]], device='cuda:0', dtype=torch.int32)