You do NOT need to compile anything if you just want to use the aimbot!
Precompiled .exe builds are provided for both CUDA (NVIDIA only) and DirectML (all GPUs).
Download
- Pre-built binaries can be downloaded from the Discord server in the pre-releases channel.
DML build (DirectML) works on:
- Any modern GPU (NVIDIA, AMD, Intel, including integrated graphics)
- Windows 10/11 (x64)
- No need for CUDA or special drivers
Recommended for:
- GTX 10xx/9xx/7xx series (old NVIDIA)
- Any AMD Radeon or Intel Iris/Xe GPU
- Laptops and office PCs with integrated graphics
CUDA build (TensorRT) works on:
- NVIDIA GPUs GTX 1660, RTX 2000/3000/4000/5000
- Requires: CUDA 13.1, TensorRT-10.14.1.48 (included in build)
- Windows 10/11 (x64)
Not supported: GTX 10xx/Pascal and older (TensorRT 10 limitation)
The CUDA build includes both CUDA+TensorRT and DML support (switchable in settings).
Both versions are ready to use: just download, unpack, run `ai.exe` and follow the instructions in the overlay.
- Download and unpack your chosen version (see links above).
- For the CUDA build, install CUDA 13.1 if it is not already installed.
- For the DML build, no extra software is needed.
- Run `ai.exe`. On first launch, the model will be exported (this may take up to 5 minutes).
- Place your `.onnx` model in the `models` folder and select it in the overlay (HOME key).
- All settings are available in the overlay. Use the HOME key to open and close it.
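If it helps, a typical first run from a command prompt looks roughly like this (the unpacked folder name and model file name are examples, not fixed names):

```
:: unpack the downloaded build, then run from its folder
cd sunone_aimbot_cpp_cuda
copy %USERPROFILE%\Downloads\sunxds_0.5.6.onnx models\
ai.exe
```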
- Right Mouse Button: Aim at the detected target
- F2: Exit
- F3: Pause aiming
- F4: Reload config
- Home: Open/close overlay and settings
If you want to compile the project yourself or modify code, follow these instructions.
- Visual Studio 2022 Community (Download)
- Windows 10 or 11 (x64)
- Windows SDK 10.0.26100.0 or newer
- CMake (Download)
- OpenCV 4.13.0
  - [For CUDA version] Build OpenCV with CUDA support yourself (see the build guide below).
  - [For DML version] You can use pre-built OpenCV DLLs (just copy `opencv_world4130.dll` to your exe folder).
- Other dependencies: see the dependency setup below.

Build configurations:
- DML (DirectML): select `Release | x64 | DML` (works on any modern GPU).
- CUDA (TensorRT): select `Release | x64 | CUDA` (requires a supported NVIDIA GPU, see above).
Before building the project, download and place all third-party dependencies in the following directories inside your project structure:
Required folders inside your repository:
```
sunone_aimbot_cpp/
└── sunone_aimbot_cpp/
    └── modules/
```
Place each dependency as follows:
| Library | Path |
|---|---|
| SimpleIni | sunone_aimbot_cpp/sunone_aimbot_cpp/modules/SimpleIni.h |
| serial | sunone_aimbot_cpp/sunone_aimbot_cpp/modules/serial/ |
| TensorRT | sunone_aimbot_cpp/sunone_aimbot_cpp/modules/TensorRT-10.14.1.48/ |
| GLFW | sunone_aimbot_cpp/sunone_aimbot_cpp/modules/glfw-3.4.bin.WIN64/ |
| OpenCV | sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/ |
- SimpleIni: Download `SimpleIni.h`. Place it in `modules/`.
- serial: Download the `serial` library (the whole folder). To build it, open `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/serial/visual_studio/visual_studio.sln`, then:
  - Set C/C++ > Code Generation > Runtime Library to Multi-threaded (/MT)
  - Build in Release x64
  - Use the built DLL/LIB with your project.
- TensorRT: Download TensorRT-10.14.1.48. Place the folder as shown above.
- GLFW: Download the GLFW Windows binaries. Place the folder as shown above.
- OpenCV: Use your custom build or the official DLLs (see the CUDA/DML notes below). Place the DLLs either next to your exe or in `modules/opencv/`.
Example structure after setup:
```
sunone_aimbot_cpp/
└── sunone_aimbot_cpp/
    └── modules/
        ├── SimpleIni.h
        ├── serial/
        ├── TensorRT-10.14.1.48/
        ├── glfw-3.4.bin.WIN64/
        └── opencv/
```
This section is only required if you want to use the CUDA (TensorRT) version and need OpenCV with CUDA support. For DML build, skip this step — you can use the pre-built OpenCV DLL.
Step-by-step instructions:
1. Download Sources
   - Download the `opencv-4.13.0` and `opencv_contrib-4.13.0` source archives.
2. Prepare Directories
   - Create: `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/` and `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build`
   - Extract `opencv-4.13.0` into `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv-4.13.0`
   - Extract `opencv_contrib-4.13.0` into `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv_contrib-4.13.0`
   - Install cuDNN (default install path: `C:/Program Files/NVIDIA/CUDNN/v9.17`)
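If you prefer the command line, the directory preparation can be sketched like this (the archive names are assumptions; Windows 10+ ships a `tar` that can extract zip archives):

```
:: run from the repository root; adjust archive names to what you downloaded
mkdir sunone_aimbot_cpp\modules\opencv\build
tar -xf opencv-4.13.0.zip -C sunone_aimbot_cpp\modules\opencv
tar -xf opencv_contrib-4.13.0.zip -C sunone_aimbot_cpp\modules\opencv
```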
3. Configure with CMake
   - Open the CMake GUI
   - Source code: `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv-4.13.0`
   - Build directory: `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build`
   - Click Configure (choose "Visual Studio 17 2022", x64)
4. Enable CUDA Options
   - After the first configure, set the following:
     - `WITH_CUDA` = ON
     - `WITH_CUBLAS` = ON
     - `ENABLE_FAST_MATH` = ON
     - `CUDA_FAST_MATH` = ON
     - `WITH_CUDNN` = ON
     - `CUDNN_LIBRARY` = `.../sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn/lib/x64/cudnn.lib`
     - `CUDNN_INCLUDE_DIR` = `.../sunone_aimbot_cpp/sunone_aimbot_cpp/modules/cudnn/include`
     - `CUDA_ARCH_BIN` = your GPU's compute capability (see the CUDA Wikipedia page; example for an RTX 3080 Ti: `8.6`)
     - `OPENCV_DNN_CUDA` = ON
     - `OPENCV_EXTRA_MODULES_PATH` = `.../sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/opencv_contrib-4.13.0/modules`
     - `BUILD_opencv_world` = ON
   - Uncheck: `WITH_NVCUVENC`, `WITH_NVCUVID`
   - Click Configure again (make sure nothing is reset)
   - Click Generate
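For reference, the same configuration can be reproduced without the GUI. This is a sketch only; the cuDNN paths and the `CUDA_ARCH_BIN` value are assumptions you must adapt to your setup:

```
:: run from sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv
cmake -G "Visual Studio 17 2022" -A x64 ^
  -D WITH_CUDA=ON -D WITH_CUBLAS=ON ^
  -D ENABLE_FAST_MATH=ON -D CUDA_FAST_MATH=ON ^
  -D WITH_CUDNN=ON ^
  -D CUDNN_LIBRARY="%CD%\..\cudnn\lib\x64\cudnn.lib" ^
  -D CUDNN_INCLUDE_DIR="%CD%\..\cudnn\include" ^
  -D CUDA_ARCH_BIN=8.6 ^
  -D OPENCV_DNN_CUDA=ON ^
  -D OPENCV_EXTRA_MODULES_PATH="%CD%\opencv_contrib-4.13.0\modules" ^
  -D BUILD_opencv_world=ON ^
  -D WITH_NVCUVENC=OFF -D WITH_NVCUVID=OFF ^
  -S opencv-4.13.0 -B build
```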
5. Build in Visual Studio
   - Open `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build/OpenCV.sln` or click "Open Project" in CMake
   - Set the build config: x64 | Release
   - Build the `ALL_BUILD` target (this can take up to 2 hours)
   - Then build the `INSTALL` target
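Both targets can also be built from a command prompt via CMake, assuming the `build` directory created above (run from `modules/opencv`):

```
cmake --build build --config Release --target ALL_BUILD
cmake --build build --config Release --target INSTALL
```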
6. Copy Resulting DLLs
   - DLLs: `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build/install/x64/vc17/bin/`
   - LIBs: `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build/install/x64/vc17/lib/`
   - Includes: `sunone_aimbot_cpp/sunone_aimbot_cpp/modules/opencv/build/install/include/opencv2`
   - Copy the needed DLLs (`opencv_world4130.dll`, etc.) next to your project's executable.
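As a concrete example of that copy step (the destination folder is hypothetical; use your actual exe output directory, run from the repository root):

```
copy sunone_aimbot_cpp\modules\opencv\build\install\x64\vc17\bin\opencv_world4130.dll x64\Release\
```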
- For the CUDA build (TensorRT backend):
  - You must build OpenCV with CUDA support (see the guide above).
  - Place all built DLLs (e.g., `opencv_world4130.dll`) next to your executable or in the `modules` folder.
- For the DML build (DirectML backend):
  - You can use the official pre-built OpenCV DLLs if you only plan to use DirectML.
  - If you want to use both CUDA and DML modes in the same executable, always use your custom OpenCV build with CUDA enabled (it works for both modes).

Note: If you run the CUDA backend with non-CUDA OpenCV DLLs, the program will not work and may crash due to missing symbols.
- Open the solution in Visual Studio 2022.
- Choose your configuration (`Release | x64 | DML` or `Release | x64 | CUDA`).
- Build the solution.
- Run `ai.exe` from the output folder.
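For scripted builds, something along these lines may work; the solution file name and configuration name here are assumptions, so match them to what you see in Visual Studio's configuration selector:

```
:: a sketch, not a verified command line for this repository
msbuild sunone_aimbot_cpp.sln /m /p:Configuration=CUDA /p:Platform=x64
```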
- Convert PyTorch `.pt` models to ONNX:

```
pip install ultralytics -U

# TensorRT
yolo export model=sunxds_0.5.6.pt format=onnx dynamic=true simplify=true

# DML
yolo export model=sunxds_0.5.6.pt format=onnx simplify=true
```

- To convert `.onnx` to `.engine` for TensorRT, use the export tab in the overlay (open the overlay with HOME).
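After export, the resulting `.onnx` file still has to end up in the `models` folder (see the quick start above), for example:

```
move sunxds_0.5.6.onnx models\
```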
This mode receives an MJPEG byte stream over UDP and decodes JPEG frames on the receiver PC.
Receiver (this app):
- Open the overlay and set `capture_method = udp_capture`.
- Set `udp_ip` to the sender PC's IP (used as a filter) and `udp_port` to the listening port (default `1234`).
Sender (other PC): send MJPEG over UDP to the receiver. Example using FFmpeg on Windows:

```
ffmpeg -f gdigrab -framerate 60 -i desktop -vf scale=320:320 -vcodec mjpeg -f mjpeg udp://RECEIVER_IP:1234
```

Notes:
- Use the receiver IP in the command above.
- It is best to match the stream size to your detection resolution (160/320/640).
- Make sure the UDP port is allowed by your firewall on the receiver PC.
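For the firewall point above, an inbound rule can be added on the receiver PC like this (run in an elevated prompt; the rule name is arbitrary, and the port must match your `udp_port`):

```
netsh advfirewall firewall add rule name="sunone udp capture" dir=in action=allow protocol=UDP localport=1234
```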
- See all configuration options and documentation here: config_cpp.md
- TensorRT Documentation
- OpenCV Documentation
- ImGui
- CppWinRT
- GLFW
- WindMouse
- KMBOX
- MAKCU
- depth-anything-tensorrt
- Python AI Version
- License: Apache License 2.0
- License: MIT License
This project is actively developed thanks to the people who support it on Boosty and Patreon.
By supporting the project, you get access to improved and better-trained AI models!
Need help or want to contribute? Join our Discord server or open an issue on GitHub!