Fix RuntimeError when running strided gemm on CUDA devices #1567

Merged
oleksandr-pavlyk merged 1 commit into master from fix-gemm-wg-size-computation on Feb 28, 2024

Conversation

@oleksandr-pavlyk (Contributor) commented Feb 28, 2024

Use the kernel's device-specific descriptor to determine the maximal work-group size supported for this kernel on the target device.

This resolves:

```
RuntimeError: Exceeded the number of registers available on the hardware.
        The number registers per work-group cannot exceed 65536 for this kernel on this device.
        The kernel uses 108 registers per work-item for a total of 1024 work-items per work-group.
 -54 (PI_ERROR_INVALID_WORK_GROUP_SIZE)
```

when running the example:

```python
import dpctl.tensor as dpt

m1 = dpt.ones((1000, 1000), dtype="i4", device="cuda")
m2 = dpt.ones((1000, 1003), dtype="i4", device="cuda")
r = dpt.matmul(m1[:, :900], m2[:900, :])
```
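The arithmetic behind the failure can be illustrated with a small sketch (the helper below is hypothetical, not dpctl code; the numbers come from the error message above). With a budget of 65536 registers per work-group and a kernel using 108 registers per work-item, any work-group larger than 65536 // 108 = 606 work-items cannot launch, so a fixed choice of 1024 fails; clamping to the kernel-specific limit (rounded down to a power of two, a common choice for gemm tiling) yields a valid size.

```python
def max_valid_wg_size(regs_per_wg: int, regs_per_item: int) -> int:
    """Largest power-of-two work-group size whose total register
    use fits within the per-work-group register budget.

    Hypothetical illustration of the sizing constraint; the actual
    fix queries the kernel's device-specific descriptor instead of
    assuming a fixed work-group size.
    """
    limit = regs_per_wg // regs_per_item  # 65536 // 108 == 606
    size = 1
    while size * 2 <= limit:
        size *= 2
    return size

# A 1024-item work-group would need 108 * 1024 = 110592 registers,
# far above the 65536 budget; the largest viable power of two is 512.
print(max_valid_wg_size(65536, 108))  # 512
```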
  • Have you provided a meaningful PR description?
  • Have you added a test, reproducer or referred to an issue with a reproducer?
  • Have you tested your changes locally for CPU and GPU devices?
  • Have you made sure that new changes do not introduce compiler warnings?
  • Have you checked performance impact of proposed changes?
  • If this PR is a work in progress, are you opening the PR as a draft?

@github-actions bot commented Feb 28, 2024

Deleted rendered PR docs from intelpython.github.com/dpctl, latest should be updated shortly. 🤞

@coveralls (Collaborator) commented Feb 28, 2024

Coverage Status: coverage remained the same at 91.099% when pulling 9373733 on fix-gemm-wg-size-computation into be4a01c on master.

@github-actions bot commented
Array API standard conformance tests for dpctl=0.17.0dev0=py310h15de555_33 ran successfully.
Passed: 904
Failed: 2
Skipped: 94

@ndgrigorian (Collaborator) left a comment

Tests all pass on Nvidia hardware now, LGTM!

@oleksandr-pavlyk (Contributor, Author) commented

Verified that performance on GPU Max has not deteriorated as a result of this change. Merging now.

@oleksandr-pavlyk oleksandr-pavlyk merged commit c10cad8 into master Feb 28, 2024
@oleksandr-pavlyk oleksandr-pavlyk deleted the fix-gemm-wg-size-computation branch February 28, 2024 19:50
oleksandr-pavlyk added a commit that referenced this pull request Mar 27, 2024
