
Conversation

@BarclayII
Collaborator

No description provided.

@zzhang-cn zzhang-cn merged commit 572b289 into dmlc:tensor May 8, 2018
VoVAllen added a commit that referenced this pull request Oct 24, 2018
hetong007 added a commit that referenced this pull request Aug 6, 2020
* PPIDataset

* Revert "PPIDataset"

This reverts commit 264bd0c.

* update data rst

* update data doc and docstring

* API doc rst for dataset

* docstring

* update api doc

* add url format

* update docstring

* update citation graph

* update knowledge graph

* update gc datasets

* fix index

* Rst fix (#3)

* Fix syntax

* syntax

* update docstring

* update doc (#4)

* final update

* fix rdflib

* fix rdf

Co-authored-by: HuXiangkun <[email protected]>
Co-authored-by: Ubuntu <[email protected]>
Co-authored-by: xiang song(charlie.song) <[email protected]>
VoVAllen referenced this pull request in VoVAllen/dgl Nov 13, 2020
[KVstore] Implement async_pull API
Qksidmx referenced this pull request in Qksidmx/dgl Apr 25, 2022
GMNGeoffrey added a commit to GMNGeoffrey/dgl that referenced this pull request Jan 29, 2025
There are definitely still runtime issues, but this handles all of the compilation failures (it requires clang-19 and bleeding-edge ROCm). The changes are mostly straightforward; the ones that aren't:

- The AtomicFPOp was adapted from [PyTorch](https://github.com/pytorch/pytorch/blob/bef103934a25d848838a7642a8d6a2f523e7dfff/aten/src/ATen/cuda/Atomic.cuh#L39); a sketch of the underlying pattern follows this list.
- The handling of legacy cuSPARSE is something I originally got wrong. The various CUDA version checks need to stay in sync because the legacy API produces a transposed output that then has to be flipped back, so I factored the check out into a shared macro (also sketched below).
- There is something weird where HIP rejects the logging function, claiming a device function is calling a host function. I was never able to get it to work, but it doesn't seem hugely important, so I think it's OK to punt for now.
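
For context, the pattern behind `AtomicFPOp`-style helpers is a compare-and-swap retry loop over the value's bit representation. Here is a minimal sketch of that pattern for `double`; the names `atomic_fp_op` and `MaxOp` are mine for illustration, not the PyTorch or DGL code:

```cpp
#include <hip/hip_runtime.h>  // or <cuda_runtime.h> when building with nvcc

// Sketch of the CAS-retry pattern: reinterpret the float as an integer,
// apply `func`, and retry the atomicCAS until no other thread has
// modified the location in between.
template <typename Func>
__device__ double atomic_fp_op(double* address, double val, Func func) {
  unsigned long long* addr = reinterpret_cast<unsigned long long*>(address);
  unsigned long long old = *addr;
  unsigned long long assumed;
  do {
    assumed = old;
    double updated = func(__longlong_as_double(assumed), val);
    old = atomicCAS(addr, assumed,
                    static_cast<unsigned long long>(__double_as_longlong(updated)));
  } while (assumed != old);  // another thread won the race; retry
  return __longlong_as_double(old);
}

// Example functor: atomic max, which CUDA/HIP do not provide for double.
struct MaxOp {
  __device__ double operator()(double a, double b) const { return a > b ? a : b; }
};
// Usage inside a kernel: atomic_fp_op(ptr, v, MaxOp{});
```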
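
And a hedged sketch of the shared-macro idea for the cuSPARSE checks. `CUSPARSE_LEGACY_SPGEMM`, the version cutoff, and the function body are hypothetical; the point is just that both `#if` sites test one condition:

```cpp
// Hypothetical sketch: a single macro so the two version checks cannot
// drift apart. The 11000 cutoff is illustrative, not DGL's actual value.
#if defined(CUDART_VERSION) && (CUDART_VERSION < 11000)
#define CUSPARSE_LEGACY_SPGEMM 1
#else
#define CUSPARSE_LEGACY_SPGEMM 0
#endif

void SpGemm(/* ... */) {
#if CUSPARSE_LEGACY_SPGEMM
  // Legacy path: the csrgemm result comes out transposed.
#endif
  // ... compute ...
#if CUSPARSE_LEGACY_SPGEMM
  // Flip the transposed result back. Guarding this with the same macro
  // guarantees it runs exactly when the legacy path produced the output.
#endif
}
```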

A couple of these changes might not be strictly necessary to make the build work (like adding `__HIP_DEVICE_COMPILE__` to some of the `__CUDA_ARCH__` checks), because I pulled changes over from my fully working draft file by file and only reverted the obviously more complicated ones; it didn't seem worth reverting the uncomplicated ones.
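
To illustrate the `__HIP_DEVICE_COMPILE__` point: `__CUDA_ARCH__` is only defined during nvcc's device-side pass, and `__HIP_DEVICE_COMPILE__` is HIP's equivalent, so device-only branches need to test both. A minimal sketch (the function is hypothetical):

```cpp
// __CUDA_ARCH__ is defined only during nvcc's device-side compilation;
// __HIP_DEVICE_COMPILE__ is the HIP counterpart, so a check that gates
// device-only code must look for either one.
#if defined(__CUDA_ARCH__) || defined(__HIP_DEVICE_COMPILE__)
__device__ inline int CurrentLane() { /* device-side implementation */ return 0; }
#else
inline int CurrentLane() { /* host-side stub */ return 0; }
#endif
```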

Specifically, I'm using this build configuration (via a CMake preset, but dumped as a command line):

```shell
ROCM_PATH="${HOME}/rocm/opt/rocm" \
CC=/usr/bin/clang-19 \
CXX=/usr/bin/clang++-19 \
/usr/local/bin/cmake \
  -DUSE_ROCM=ON \
  -DCMAKE_PREFIX_PATH=/home/gcmn/rocm/opt/rocm \
  -DCMAKE_POSITION_INDEPENDENT_CODE=ON \
  -DROCM_WARN_TOOLCHAIN_VAR=OFF \
  -DCMAKE_EXPORT_COMPILE_COMMANDS=ON \
  -DCMAKE_INSTALL_PREFIX=/home/gcmn/src/dgl/out/install/rocm \
  -DCMAKE_C_COMPILER=/usr/bin/clang-19 \
  -DCMAKE_CXX_COMPILER=/usr/bin/clang++-19 \
  -DCMAKE_C_COMPILER_LAUNCHER=ccache \
  -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
  -DCMAKE_BUILD_TYPE=Debug \
  -DBUILD_TYPE=dev \
  -DBUILD_CPP_TEST=ON \
  -DBUILD_GRAPHBOLT=OFF \
  -DBUILD_SPARSE=ON \
  -DUSE_LIBXSMM=OFF \
  -DUSE_OPENMP=ON \
  -DPython3_FIND_VIRTUALENV=ONLY \
  -S/home/gcmn/src/dgl \
  -B/home/gcmn/src/dgl/out/build/rocm \
  -G Ninja
```
bhatturu referenced this pull request in ROCm/dgl Jul 7, 2025