
Add minimal capi lifecycle support#7648

Open
bschoenmaeckers wants to merge 9 commits into RustPython:main from bschoenmaeckers:capi-lifecycle

Conversation

@bschoenmaeckers
Contributor

@bschoenmaeckers bschoenmaeckers commented Apr 21, 2026

This adds the most basic C API I can think of. This way we can iterate on how we want to handle the interpreter initialization APIs. After this, the road is clear to implement most of the ABI3 APIs one-by-one.

This also includes my basic PyO3 test harness that exposes the C APIs in a safe and battle-tested way. To run the tests, use cargo test -p rustpython-capi.

References:

xref #7562

Summary by CodeRabbit

  • New Features

    • Adds a C-API compatibility layer to build RustPython as a Python extension, exposing initialization, GIL/thread-state handling, and refcount operations.
  • Chores

    • Adds build/configuration to support extension-module builds and a new optional "capi" feature to enable the extension.
  • Behavioral Changes

    • Fork/after-fork reinitialization is now gated by an additional "host_env" feature; when extension support is enabled, runtime execution may use a separate VM thread.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 21, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Adds a new optional rustpython-capi crate exposing a minimal CPython C-API surface (lifecycle, GIL/thread state, refcount), configures Pyo3 build env, wires a workspace capi feature, and tightens multiple fork/thread-related #[cfg(...)] guards to require host_env.

Changes

  • CAPI build & manifest (crates/capi/.cargo/config.toml, crates/capi/Cargo.toml, crates/capi/pyo3-rustpython.config): New crate manifest and PyO3 config; sets PYO3_CONFIG_FILE and PYO3_NO_PYTHON, configures the abi3/shared CPython extension target.
  • CAPI sources (crates/capi/src/lib.rs, crates/capi/src/pylifecycle.rs, crates/capi/src/pystate.rs, crates/capi/src/refcount.rs): New crate root re-exports and modules implementing the C ABI: init/finalize (Py_Initialize*, Py_IsInitialized, Py_Finalize*), thread/GIL state (PyGILState_Ensure/Release, PyEval_SaveThread, PyThreadState), refcount (_Py_IncRef, _Py_DecRef), the MAIN_INTERP singleton, and thread-local VM attachment.
  • Workspace wiring (Cargo.toml): Adds an optional workspace rustpython-capi dependency and a capi feature that enables threading.
  • Top-level runtime integration (build.rs, src/lib.rs): build.rs emits platform linker/export flags only when capi is enabled; the run path under capi stores the main interpreter into the C API slot and runs via a thread VM, then finalizes via the main interpreter slot.
  • Fork/thread cfg tightening (crates/vm/src/codecs.rs, crates/vm/src/intern.rs, crates/vm/src/object/core.rs, crates/vm/src/stdlib/_imp.rs, crates/vm/src/stdlib/_io.rs, crates/vm/src/stdlib/_thread.rs): Multiple after-fork and lock-reinit symbols had cfg guards tightened to require feature = "host_env" in addition to the existing unix and/or threading.
  • Minor edits & tests (crates/vm/src/vm/thread.rs, .cspell.dict/cpython.txt): Small binding rename in thread cleanup; added pystate to cspell; unit test for cross-thread VM attachment in pystate.rs.

Sequence Diagram(s)

sequenceDiagram
    participant C as C client
    participant T as Thread (caller)
    participant M as MAIN_INTERP
    participant I as Interpreter
    participant V as ThreadedVirtualMachine

    C->>T: Py_Initialize()
    T->>M: store Interpreter if empty
    M->>I: construct/interact with Interpreter
    T->>M: request_vm_from_interpreter()
    M->>I: call new_thread() / obtain ThreadedVirtualMachine
    I-->>V: return ThreadedVirtualMachine
    T->>T: attach_vm_to_thread() (thread-local)
    C->>T: PyGILState_Ensure / refcount ops use V

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • youknowone
  • coolreader18

Poem

🐇 I built a tiny C-API gate,
Locked the main interp to keep it straight.
Threads hop in, each finds a VM,
Inc and Dec, then let it stem—
Hooray, the rabbit shipped a crate! 🥕

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title 'Add minimal capi lifecycle support' accurately and specifically describes the main change: introducing a minimal C API implementation with lifecycle functionality (Py_Initialize, Py_Finalize, etc.) in the new crates/capi crate.
  • Docstring Coverage — ✅ Passed: docstring coverage is 100.00%, above the required 80.00% threshold.
  • Linked Issues Check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check — ✅ Passed: check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/capi/src/pylifecycle.rs`:
- Around line 31-35: The current Py_InitializeEx function panics when
INITIALIZED.is_completed() is true, but per the CPython C API it must be a no-op
on repeated calls; modify Py_InitializeEx (and any Py_Initialize wrapper) to
return immediately when INITIALIZED.is_completed() is true instead of panicking
so repeated FFI calls do not abort the process, preserving existing
initialization semantics and side-effects only on the first successful
initialization.
- Around line 63-65: Py_FinalizeEx currently returns success but doesn't update
initialization state because the code uses a non-resettable Once, so
Py_IsInitialized() stays true; replace the Once-based mechanism with a
resettable atomic state (e.g., an AtomicBool named INITIALIZED) and update all
lifecycle functions: set INITIALIZED.store(true, Ordering::SeqCst) in
Py_InitializeEx (and any init paths), set INITIALIZED.store(false,
Ordering::SeqCst) in Py_FinalizeEx before returning, and have Py_IsInitialized()
read INITIALIZED.load(Ordering::SeqCst); ensure thread-safe Ordering and update
or remove the Once symbol usage so no code still relies on the old Once
primitive.

In `@crates/capi/src/pystate.rs`:
- Around line 36-39: The exported PyEval_SaveThread currently returns null which
breaks the C API contract and will corrupt callers using
Py_BEGIN_ALLOW_THREADS/PyEval_RestoreThread; either implement proper
SaveThread/RestoreThread semantics or stop exporting it. To fix: in
PyEval_SaveThread implement logic that captures the current thread's
PyThreadState (returning a non-null pointer token representing the saved state)
and ensure PyEval_RestoreThread accepts that token to restore thread state,
using the existing PyThreadState type and thread-local VM utilities in
pystate.rs; if you cannot implement correct semantics now, remove the
#[no_mangle]/pub extern "C" export for PyEval_SaveThread (and any paired
PyEval_RestoreThread export) so the unstable API is not exposed. Ensure the
chosen change keeps the stable ABI consistent with callers expecting a valid
non-null PyThreadState pointer.
- Around line 26-34: PyGILState_Ensure currently always returns 0 and
PyGILState_Release is a no-op; implement proper per-thread recursion/state
tracking so each PyGILState_Ensure returns a handle encoding the prior state (or
push the prior state on a per-thread stack in the same thread-local VM storage)
and PyGILState_Release(_state) pops/reads that handle to restore the exact
previous state (including detaching the VM if the matching Ensure attached it);
update the thread-local VM storage to track recursion depth/stack and use those
symbols (PyGILState_Ensure, PyGILState_Release, the existing thread-local VM
storage) when implementing the push/encode-on-Ensure and restore/pop-on-Release
behavior.

In `@crates/capi/src/refcount.rs`:
- Around line 5-17: The exported refcount functions currently accept raw
pointers but are declared callable from safe Rust; change both exports to be
unsafe extern "C" functions and restore the normal no_mangle attribute (replace
#[unsafe(no_mangle)] with #[no_mangle]) so the non-null/valid-pointer contract
is encoded in the signature; update _Py_DecRef and _Py_IncRef signatures to pub
unsafe extern "C" fn _Py_DecRef(op: *mut PyObject) and pub unsafe extern "C" fn
_Py_IncRef(op: *mut PyObject), remove the clippy allow on
not_unsafe_ptr_arg_deref, and adjust the bodies to use NonNull/new or unwrap
patterns consistent with other FFI functions so dereferencing/reconstructing
ownership is only allowed in unsafe code paths.
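The save/restore contract the PyEval_SaveThread comment describes can be sketched in plain Rust. This is an illustrative stand-in, not the crate's actual code: `ThreadState`, `save_thread`, and `restore_thread` are placeholder names for the `extern "C"` exports, and the re-attachment logic is omitted.

```rust
// Hypothetical stand-in for the opaque PyThreadState the C API hands out.
struct ThreadState {
    // In the real crate this would hold the detached ThreadedVirtualMachine.
    thread_id: u64,
}

// PyEval_SaveThread analogue: detach the current thread's state and return
// an owned, non-null token the caller must later pass back.
fn save_thread() -> *mut ThreadState {
    Box::into_raw(Box::new(ThreadState { thread_id: 42 }))
}

// PyEval_RestoreThread analogue: reclaim ownership of the token and
// re-attach (the re-attachment itself is omitted here).
unsafe fn restore_thread(ts: *mut ThreadState) -> u64 {
    assert!(!ts.is_null(), "callers must pass the token from save_thread");
    let state = unsafe { Box::from_raw(ts) };
    state.thread_id
}

fn main() {
    let token = save_thread();
    assert!(!token.is_null()); // never null, unlike a stub returning null
    let id = unsafe { restore_thread(token) };
    assert_eq!(id, 42);
}
```

The key property is that the token is always non-null and round-trips ownership, so Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS pairs in C callers remain balanced.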
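Similarly, the per-thread recursion tracking suggested for PyGILState_Ensure/Release can be sketched with a thread-local depth counter. Again a sketch with placeholder snake_case names; the attach/detach of the thread-local VM is left as comments.

```rust
use std::cell::Cell;

thread_local! {
    // Nesting depth of Ensure calls on this thread.
    static GIL_DEPTH: Cell<usize> = Cell::new(0);
}

// Returns a handle recording whether this call performed the attach
// (depth was 0), in the spirit of PyGILState_UNLOCKED vs PyGILState_LOCKED.
fn gil_state_ensure() -> i32 {
    let prior = GIL_DEPTH.with(|d| {
        let v = d.get();
        d.set(v + 1);
        v
    });
    if prior == 0 {
        // First Ensure on this thread: attach the thread-local VM here.
    }
    (prior == 0) as i32
}

// Pops one level; only the Ensure that attached (handle == 1) detaches again.
fn gil_state_release(handle: i32) {
    let depth = GIL_DEPTH.with(|d| {
        let v = d.get().checked_sub(1).expect("unbalanced Release");
        d.set(v);
        v
    });
    if depth == 0 && handle == 1 {
        // The matching Ensure attached the VM, so detach it here.
    }
}

fn main() {
    let outer = gil_state_ensure(); // attaches: handle 1
    let inner = gil_state_ensure(); // nested: handle 0
    assert_eq!((outer, inner), (1, 0));
    gil_state_release(inner); // still attached
    gil_state_release(outer); // depth back to 0, detaches
    assert_eq!(GIL_DEPTH.with(|d| d.get()), 0);
}
```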

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

Run ID: c1193732-a759-4d95-962e-bf28e2d74df2

📥 Commits

Reviewing files that changed from the base of the PR and between d5921d1 and 0a01cb4.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (7)
  • crates/capi/.cargo/config.toml
  • crates/capi/Cargo.toml
  • crates/capi/pyo3-rustpython.config
  • crates/capi/src/lib.rs
  • crates/capi/src/pylifecycle.rs
  • crates/capi/src/pystate.rs
  • crates/capi/src/refcount.rs

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
crates/capi/src/refcount.rs (1)

5-17: ⚠️ Potential issue | 🟠 Major

Encode the raw-pointer contract in the function signatures.

These exports still dereference/reconstruct ownership from op while being callable from safe Rust. Make both functions pub unsafe extern "C" fn, add # Safety docs, and remove the clippy::not_unsafe_ptr_arg_deref suppressions; keep the C ABI symbol unchanged.

Proposed fix
 #[unsafe(no_mangle)]
-#[allow(clippy::not_unsafe_ptr_arg_deref)]
-pub extern "C" fn _Py_DecRef(op: *mut PyObject) {
+/// # Safety
+///
+/// `op` must be a valid, non-null pointer to a live `PyObject` with an owned reference.
+pub unsafe extern "C" fn _Py_DecRef(op: *mut PyObject) {
     // By dropping PyObjectRef, we will decrement the reference count.
     unsafe { PyObjectRef::from_raw(NonNull::new_unchecked(op)) };
 }
 
 #[unsafe(no_mangle)]
-#[allow(clippy::not_unsafe_ptr_arg_deref)]
-pub extern "C" fn _Py_IncRef(op: *mut PyObject) {
+/// # Safety
+///
+/// `op` must be a valid, non-null pointer to a live `PyObject`.
+pub unsafe extern "C" fn _Py_IncRef(op: *mut PyObject) {
     // Don't drop the owned value, as we just want to increment the refcount.
     core::mem::forget(unsafe { (*op).to_owned() });
 }

Run this read-only check to verify the signatures and lint suppression are removed:

#!/bin/bash
# Description: Verify C-ABI refcount exports encode unsafe raw-pointer contracts.

rg -n -C3 '(_Py_DecRef|_Py_IncRef|clippy::not_unsafe_ptr_arg_deref|pub\s+extern\s+"C"\s+fn\s+_Py_)' crates/capi/src/refcount.rs

As per coding guidelines, **/*.rs should follow Rust best practices for error handling and memory management.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/capi/src/refcount.rs` around lines 5 - 17, Change both exports to
encode the raw-pointer contract by making the signatures pub unsafe extern "C"
fn _Py_DecRef(op: *mut PyObject) and pub unsafe extern "C" fn _Py_IncRef(op:
*mut PyObject), remove the #[allow(clippy::not_unsafe_ptr_arg_deref)]
attributes, and keep the no-mangle C ABI symbol. Add concise # Safety doc
comments to each function stating the caller must pass a non-null, properly
aligned, valid PyObject pointer and that ownership/refcount semantics apply to
operations using PyObjectRef::from_raw and (*op).to_owned(). Ensure the body
logic (using PyObjectRef::from_raw in _Py_DecRef and
core::mem::forget((*op).to_owned()) in _Py_IncRef) remains the same but is now
behind an unsafe function signature.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/capi/Cargo.toml`:
- Line 16: Update the dependency declaration for rustpython-stdlib to be
optional and feature-gated: change the rustpython-stdlib entry so it includes
optional = true and preserves features = ["threading"] (i.e., make
rustpython-stdlib optional), and verify the existing stdlib feature still lists
"rustpython-stdlib" so enabling the stdlib feature pulls in the optional
dependency; adjust the dependency line accordingly and keep the stdlib feature
target unchanged.

---

Duplicate comments:
In `@crates/capi/src/refcount.rs`:
- Around line 5-17: Change both exports to encode the raw-pointer contract by
making the signatures pub unsafe extern "C" fn _Py_DecRef(op: *mut PyObject) and
pub unsafe extern "C" fn _Py_IncRef(op: *mut PyObject), remove the
#[allow(clippy::not_unsafe_ptr_arg_deref)] attributes, and keep the no-mangle C
ABI symbol. Add concise # Safety doc comments to each function stating the
caller must pass a non-null, properly aligned, valid PyObject pointer and that
ownership/refcount semantics apply to operations using PyObjectRef::from_raw and
(*op).to_owned(). Ensure the body logic (using PyObjectRef::from_raw in
_Py_DecRef and core::mem::forget((*op).to_owned()) in _Py_IncRef) remains the
same but is now behind an unsafe function signature.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

Run ID: 5fa614e4-9121-4b67-abb1-14a0faa61559

📥 Commits

Reviewing files that changed from the base of the PR and between 0a01cb4 and 2b4b20f.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (8)
  • .cspell.dict/cpython.txt
  • crates/capi/.cargo/config.toml
  • crates/capi/Cargo.toml
  • crates/capi/pyo3-rustpython.config
  • crates/capi/src/lib.rs
  • crates/capi/src/pylifecycle.rs
  • crates/capi/src/pystate.rs
  • crates/capi/src/refcount.rs
✅ Files skipped from review due to trivial changes (4)
  • .cspell.dict/cpython.txt
  • crates/capi/pyo3-rustpython.config
  • crates/capi/.cargo/config.toml
  • crates/capi/src/lib.rs
🚧 Files skipped from review as they are similar to previous changes (2)
  • crates/capi/src/pystate.rs
  • crates/capi/src/pylifecycle.rs

@bschoenmaeckers
Contributor Author

Please feel free to push to this branch.

@bschoenmaeckers bschoenmaeckers marked this pull request as draft April 21, 2026 13:13
Member

@youknowone youknowone left a comment


good idea to split into small patches 👍

implementation=CPython
version=3.14
shared=true
abi3=true
Member


is there any free-threading marker? we only target free-threading, which represents abi3t

Contributor Author

@bschoenmaeckers bschoenmaeckers Apr 21, 2026


abi3t support is not yet implemented in PyO3, but for now I make sure not to use incompatible APIs. For progress see PyO3/pyo3#5786

@bschoenmaeckers
Contributor Author

@youknowone Can you help me fix clippy? The rustpython-capi crate needs threading support to be enabled on rustpython-stdlib, but cargo-shear does not want unused dependencies.

@bschoenmaeckers bschoenmaeckers marked this pull request as ready for review April 21, 2026 15:21
@bschoenmaeckers
Contributor Author

@youknowone Can you help me fix clippy? The rustpython-capi crate needs threading support to be enabled on rustpython-stdlib, but cargo-shear does not want unused dependencies.

Got it to work.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
crates/vm/src/stdlib/_thread.rs (1)

2-9: ⚠️ Potential issue | 🟠 Major

Feature gate mismatch: strictest re-export and definition gates are incompatible with call-site gates.

Several fork-reinit functions are defined with strict #[cfg(all(unix, feature = "threading", feature = "host_env"))] gates but called under only #[cfg(feature = "threading")]:

  • after_fork_child (re-exported at line 2, called at posix.rs:798)
  • reset_weakref_locks_after_fork (defined at object/core.rs:497, called at posix.rs:793)
  • string_pool.reinit_after_fork (defined at intern.rs:40, called at posix.rs:842)
  • codec_registry.reinit_after_fork (defined at codecs.rs:162, called at posix.rs:845)

A build with threading enabled but host_env disabled on Unix will fail compilation because these call sites will resolve to functions that don't exist. Either (a) add feature = "host_env" to each call-site #[cfg] in posix.rs, or (b) relax the definition gates to require only all(unix, feature = "threading") (matching the callers).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm/src/stdlib/_thread.rs` around lines 2 - 9, The build fails because
some fork-reinit functions are defined with #[cfg(all(unix, feature =
"threading", feature = "host_env"))] but are called under only #[cfg(feature =
"threading")]; update the cfgs so callers and definitions match: relax the
definition gates for after_fork_child, reset_weakref_locks_after_fork, the
string_pool.reinit_after_fork method, and codec_registry.reinit_after_fork to
use #[cfg(all(unix, feature = "threading"))] (i.e., remove the feature =
"host_env" requirement) so the symbols exist when threading is enabled, or
alternatively add feature = "host_env" to the callers' cfgs in posix.rs—pick one
consistent approach and apply it to all listed symbols.
♻️ Duplicate comments (1)
crates/capi/src/pylifecycle.rs (1)

7-7: ⚠️ Potential issue | 🟠 Major

Track logical finalization separately from the OnceLock.

Py_FinalizeEx() reports success but leaves MAIN_INTERP populated, so Py_IsInitialized() remains true forever after the first Py_InitializeEx(). If full teardown is deferred, at least keep a resettable lifecycle flag so callers can observe finalize → uninitialized → initialize correctly.

Minimal state-tracking direction
 use crate::pystate::attach_vm_to_thread;
 use core::ffi::c_int;
 use rustpython_vm::Interpreter;
 use rustpython_vm::vm::thread::ThreadedVirtualMachine;
-use std::sync::{Mutex, OnceLock};
+use std::sync::{
+    atomic::{AtomicBool, Ordering},
+    Mutex, OnceLock,
+};

 static MAIN_INTERP: OnceLock<Mutex<Interpreter>> = OnceLock::new();
+static INITIALIZED: AtomicBool = AtomicBool::new(false);
@@
 pub extern "C" fn Py_IsInitialized() -> c_int {
-    MAIN_INTERP.get().is_some() as c_int
+    INITIALIZED.load(Ordering::SeqCst) as c_int
 }
@@
 pub extern "C" fn Py_InitializeEx(_initsigs: c_int) {
     MAIN_INTERP.get_or_init(|| Interpreter::with_init(Default::default(), |_vm| {}).into());
+    INITIALIZED.store(true, Ordering::SeqCst);

     attach_vm_to_thread();
 }
@@
 pub extern "C" fn Py_FinalizeEx() -> c_int {
+    INITIALIZED.store(false, Ordering::SeqCst);
     0
 }

Also applies to: 20-21, 42-43

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/capi/src/pylifecycle.rs` at line 7, MAIN_INTERP (a
OnceLock<Mutex<Interpreter>>) is never cleared so Py_IsInitialized() stays true
after Py_FinalizeEx(); introduce a separate resettable lifecycle flag (e.g., an
AtomicBool like PY_INITIALIZED) and update it to true in Py_InitializeEx and
false in Py_FinalizeEx, then have Py_IsInitialized read that flag instead of the
OnceLock; keep MAIN_INTERP as-is (do not rely on dropping it) so callers see a
correct initialized/finalized lifecycle without changing the OnceLock usage.
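Pulling the two lifecycle findings together (no-op on repeated init, observable finalize), the suggested state machine can be sketched as self-contained plain Rust. The snake_case names are placeholders for the `extern "C"` exports, and the first-time initialization side effects are elided.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Resettable lifecycle flag, kept separate from the non-resettable
// OnceLock that owns the interpreter itself.
static INITIALIZED: AtomicBool = AtomicBool::new(false);

fn py_initialize_ex() {
    // No-op on repeated calls, matching the CPython contract.
    if INITIALIZED.swap(true, Ordering::SeqCst) {
        return;
    }
    // First-time initialization side effects would run here.
}

fn py_finalize_ex() -> i32 {
    INITIALIZED.store(false, Ordering::SeqCst);
    0
}

fn py_is_initialized() -> i32 {
    INITIALIZED.load(Ordering::SeqCst) as i32
}

fn main() {
    assert_eq!(py_is_initialized(), 0);
    py_initialize_ex();
    py_initialize_ex(); // repeated init is harmless, never panics
    assert_eq!(py_is_initialized(), 1);
    assert_eq!(py_finalize_ex(), 0);
    assert_eq!(py_is_initialized(), 0); // finalize is now observable
}
```

With this split, the OnceLock can keep the interpreter alive across finalize/initialize cycles while callers still see the correct initialized/uninitialized transitions.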
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/vm/src/stdlib/_imp.rs`:
- Around line 79-96: Update the call sites that invoke the fork helper functions
so their cfg gating matches the exports: wrap the calls to
acquire_imp_lock_for_fork(), after_fork_child_imp_lock_release(),
reinit_imp_lock_after_fork(), and release_imp_lock_after_fork_parent() with
cfg(all(unix, feature = "threading", feature = "host_env")) so they are only
compiled when those functions are exported; locate the call sites where those
four symbols are used and replace the existing cfg checks (e.g., feature =
"threading" or all(unix, feature = "threading")) with the new all(unix, feature
= "threading", feature = "host_env") gate.

In `@crates/vm/src/stdlib/_io.rs`:
- Around line 5-6: The call-site for reinit_std_streams_after_fork in posix.rs
is only guarded by #[cfg(feature = "threading")] but the symbol is exported in
_io.rs under #[cfg(all(unix, feature = "threading", feature = "host_env"))];
change the cfg on the posix.rs invocation to #[cfg(all(unix, feature =
"threading", feature = "host_env"))] so the call is only compiled when the
function is available (search for the reinit_std_streams_after_fork call in
posix.rs to update its cfg).

---

Outside diff comments:
In `@crates/vm/src/stdlib/_thread.rs`:
- Around line 2-9: The build fails because some fork-reinit functions are
defined with #[cfg(all(unix, feature = "threading", feature = "host_env"))] but
are called under only #[cfg(feature = "threading")]; update the cfgs so callers
and definitions match: relax the definition gates for after_fork_child,
reset_weakref_locks_after_fork, the string_pool.reinit_after_fork method, and
codec_registry.reinit_after_fork to use #[cfg(all(unix, feature = "threading"))]
(i.e., remove the feature = "host_env" requirement) so the symbols exist when
threading is enabled, or alternatively add feature = "host_env" to the callers'
cfgs in posix.rs—pick one consistent approach and apply it to all listed
symbols.

---

Duplicate comments:
In `@crates/capi/src/pylifecycle.rs`:
- Line 7: MAIN_INTERP (a OnceLock<Mutex<Interpreter>>) is never cleared so
Py_IsInitialized() stays true after Py_FinalizeEx(); introduce a separate
resettable lifecycle flag (e.g., an AtomicBool like PY_INITIALIZED) and update
it to true in Py_InitializeEx and false in Py_FinalizeEx, then have
Py_IsInitialized read that flag instead of the OnceLock; keep MAIN_INTERP as-is
(do not rely on dropping it) so callers see a correct initialized/finalized
lifecycle without changing the OnceLock usage.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

Run ID: c5ea4489-fdf4-4945-bd15-c2b5f021848c

📥 Commits

Reviewing files that changed from the base of the PR and between 2b4b20f and ce47be6.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (15)
  • .cspell.dict/cpython.txt
  • crates/capi/.cargo/config.toml
  • crates/capi/Cargo.toml
  • crates/capi/pyo3-rustpython.config
  • crates/capi/src/lib.rs
  • crates/capi/src/pylifecycle.rs
  • crates/capi/src/pystate.rs
  • crates/capi/src/refcount.rs
  • crates/vm/src/codecs.rs
  • crates/vm/src/intern.rs
  • crates/vm/src/object/core.rs
  • crates/vm/src/stdlib/_imp.rs
  • crates/vm/src/stdlib/_io.rs
  • crates/vm/src/stdlib/_thread.rs
  • crates/vm/src/vm/thread.rs
✅ Files skipped from review due to trivial changes (6)
  • crates/vm/src/vm/thread.rs
  • crates/capi/.cargo/config.toml
  • crates/capi/pyo3-rustpython.config
  • crates/capi/src/lib.rs
  • crates/capi/src/refcount.rs
  • crates/capi/src/pystate.rs
🚧 Files skipped from review as they are similar to previous changes (2)
  • .cspell.dict/cpython.txt
  • crates/capi/Cargo.toml

Comment on lines +79 to 96

+#[cfg(all(unix, feature = "threading", feature = "host_env"))]
 pub(crate) fn acquire_imp_lock_for_fork() {
     lock::acquire_lock_for_fork();
 }

-#[cfg(feature = "threading")]
+#[cfg(all(unix, feature = "threading", feature = "host_env"))]
 pub(crate) fn release_imp_lock_after_fork_parent() {
     lock::release_lock_after_fork_parent();
 }

-#[cfg(all(unix, feature = "threading"))]
+#[cfg(all(unix, feature = "threading", feature = "host_env"))]
 pub(crate) unsafe fn reinit_imp_lock_after_fork() {
     unsafe { lock::reinit_after_fork() }
 }

-#[cfg(all(unix, feature = "threading"))]
+#[cfg(all(unix, feature = "threading", feature = "host_env"))]
 pub(crate) unsafe fn after_fork_child_imp_lock_release() {
     unsafe { lock::after_fork_child_reinit_and_release() }
Contributor

@coderabbitai coderabbitai Bot Apr 21, 2026


⚠️ Potential issue | 🔴 Critical


Align call sites with the new export cfg gates to prevent linker errors.

The four fork helpers in _imp.rs (lines 79–96) are now gated with all(unix, feature = "threading", feature = "host_env"), but their call sites in posix.rs only check feature = "threading" or all(unix, feature = "threading"). When host_env is disabled, these functions will not be exported, causing linker failures.

Update the four call sites in posix.rs to require all(unix, feature = "threading", feature = "host_env"):

  • Line 763: acquire_imp_lock_for_fork()
  • Line 804: after_fork_child_imp_lock_release()
  • Line 851: reinit_imp_lock_after_fork() (within reinit_locks_after_fork function)
  • Line 860: release_imp_lock_after_fork_parent()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm/src/stdlib/_imp.rs` around lines 79 - 96, Update the call sites
that invoke the fork helper functions so their cfg gating matches the exports:
wrap the calls to acquire_imp_lock_for_fork(),
after_fork_child_imp_lock_release(), reinit_imp_lock_after_fork(), and
release_imp_lock_after_fork_parent() with cfg(all(unix, feature = "threading",
feature = "host_env")) so they are only compiled when those functions are
exported; locate the call sites where those four symbols are used and replace
the existing cfg checks (e.g., feature = "threading" or all(unix, feature =
"threading")) with the new all(unix, feature = "threading", feature =
"host_env") gate.

Contributor Author
posix.rs already requires unix, so that clause is redundant, right?

Contributor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check the top-level cfg gate on posix.rs or any module-level attribute
head -20 crates/vm/src/stdlib/posix.rs

# Check the cfg guards on the exact call sites
sed -n '758,770p' crates/vm/src/stdlib/posix.rs | cat -n
sed -n '800,812p' crates/vm/src/stdlib/posix.rs | cat -n
sed -n '848,865p' crates/vm/src/stdlib/posix.rs | cat -n

Length of output: 2403


@bschoenmaeckers — you're right, within posix.rs the unix clause is redundant since the module is inherently Unix-only. My apologies for the noise there.

That said, the core concern still stands: the four call sites currently only gate on feature = "threading", but the exported functions in _imp.rs now additionally require feature = "host_env". When threading is enabled but host_env is not, the call sites will reference symbols that don't exist, causing a compile error.

The fix is straightforward — just add feature = "host_env" to the existing guards in posix.rs (no need to add unix):

// Line ~763
#[cfg(all(feature = "threading", feature = "host_env"))]
crate::stdlib::_imp::acquire_imp_lock_for_fork();

// Line ~804
#[cfg(all(feature = "threading", feature = "host_env"))]
unsafe {
    crate::stdlib::_imp::after_fork_child_imp_lock_release()
};

// Line ~851 (inside the reinit_locks_after_fork unsafe block)
#[cfg(all(feature = "threading", feature = "host_env"))]
crate::stdlib::_imp::reinit_imp_lock_after_fork();

// Line ~860
#[cfg(all(feature = "threading", feature = "host_env"))]
crate::stdlib::_imp::release_imp_lock_after_fork_parent();

🧠 Learnings used
Learnt from: joshuamegnauth54
Repo: RustPython/RustPython PR: 7646
File: crates/vm/src/stdlib/_thread.rs:0-0
Timestamp: 2026-04-21T18:46:19.577Z
Learning: In RustPython's `crates/vm/src/stdlib/_thread.rs`, the `thread_to_id` function on Unix casts `handle.as_pthread_t()` (which returns `RawPthread`, a platform-dependent alias) using `as _` rather than `as u64`. This is intentional: on Linux `RawPthread` is already `u64`, so `as u64` triggers Clippy's `trivial_numeric_casts` lint; `as _` lets the compiler infer the target type from the `u64` return type, satisfying portability needs on macOS/BSD without suppressing Clippy.

Learnt from: arihant2math
Repo: RustPython/RustPython PR: 5790
File: build.rs:2-2
Timestamp: 2025-06-28T16:31:03.991Z
Learning: In Cargo build scripts (build.rs), the environment variable CARGO_CFG_TARGET_OS is guaranteed to exist and is automatically set by Cargo during the build process, making unwrap() safe to use when accessing this variable.

Learnt from: youknowone
Repo: RustPython/RustPython PR: 7333
File: crates/vm/src/gc_state.rs:833-842
Timestamp: 2026-03-04T11:07:39.411Z
Learning: In RustPython's `crates/vm/src/gc_state.rs`, the `gc_state()` function intentionally uses `rustpython_common::static_cell!` which becomes thread-local when the "threading" feature is disabled. This is correct by design: in non-threading mode, `PyRwLock`/`PyMutex` use `Cell`-based locks that are `!Sync` (preventing a process-wide global `GcState` from compiling), and reference counting uses `Rc` instead of `Arc`, so Python objects cannot cross OS thread boundaries. Each thread therefore correctly owns and GC-tracks only its own objects. Do not flag this as a bug.

Learnt from: bschoenmaeckers
Repo: RustPython/RustPython PR: 7648
File: crates/capi/src/pystate.rs:36-39
Timestamp: 2026-04-21T13:48:35.041Z
Learning: In RustPython's `crates/capi/src/pystate.rs`, `PyEval_SaveThread` intentionally returns `null` because RustPython has no GIL; `Py_BEGIN_ALLOW_THREADS`/`Py_END_ALLOW_THREADS` are no-ops by design. Do not flag the null return as a contract violation. Note: `PyEval_RestoreThread` still needs a corresponding no-op stub export to satisfy callers using the full macro pair.

Learnt from: moreal
Repo: RustPython/RustPython PR: 5847
File: vm/src/stdlib/stat.rs:547-567
Timestamp: 2025-06-27T14:47:28.810Z
Learning: In RustPython's stat module implementation, platform-specific constants like SF_SUPPORTED and SF_SYNTHETIC should be conditionally declared only for the platforms where they're available (e.g., macOS), following CPython's approach of optional declaration using `#ifdef` checks rather than providing fallback values for other platforms.

Learnt from: ShaharNaveh
Repo: RustPython/RustPython PR: 7181
File: crates/stdlib/src/csv.rs:20-20
Timestamp: 2026-02-17T07:54:36.200Z
Learning: In RustPython, ensure that modules using Python sequence-like types bring in extension methods by importing the trait (for example, 'use rustpython_vm::sliceable::SliceableSequenceOp;'). Even if the trait name isn't directly referenced in code, its methods (such as len()) become in-scope on types like PyStr. Add this import in modules that rely on those extension methods to avoid compilation errors.

Comment on lines +5 to 6
#[cfg(all(unix, feature = "threading", feature = "host_env"))]
pub(crate) use _io::reinit_std_streams_after_fork;
Contributor

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Inspect after-fork stdio reinit call sites and their cfg guards.
rg -n -C4 'reinit_std_streams_after_fork' --type rust

Repository: RustPython/RustPython

Length of output: 2102


Sync the call-site cfg in posix.rs with the tightened export cfg in _io.rs.

reinit_std_streams_after_fork is only compiled for all(unix, feature = "threading", feature = "host_env"), but the call at crates/vm/src/stdlib/posix.rs:784 is guarded by only #[cfg(feature = "threading")]. Builds with threading enabled and host_env disabled will fail to resolve this symbol.

Update the cfg on line 782 to match the function's availability:

Fix in crates/vm/src/stdlib/posix.rs
-    #[cfg(feature = "threading")]
+    #[cfg(all(unix, feature = "threading", feature = "host_env"))]
     unsafe {
         crate::stdlib::_io::reinit_std_streams_after_fork(vm)
     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm/src/stdlib/_io.rs` around lines 5 - 6, The call-site for
reinit_std_streams_after_fork in posix.rs is only guarded by #[cfg(feature =
"threading")] but the symbol is exported in _io.rs under #[cfg(all(unix, feature
= "threading", feature = "host_env"))]; change the cfg on the posix.rs
invocation to #[cfg(all(unix, feature = "threading", feature = "host_env"))] so
the call is only compiled when the function is available (search for the
reinit_std_streams_after_fork call in posix.rs to update its cfg).

Contributor

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
crates/capi/src/pylifecycle.rs (1)

43-44: ⚠️ Potential issue | 🟠 Major

Finalize should clear lifecycle state.

Py_FinalizeEx() still returns success without clearing MAIN_INTERP, so Py_IsInitialized() cannot become false after finalization and reinitialization cannot work correctly. This matches the previously flagged lifecycle-state issue and should be fixed together with the state tracking change. The Python lifecycle docs specify that Py_IsInitialized() returns false after Py_FinalizeEx(): https://docs.python.org/3/c-api/interp-lifecycle.html#c.Py_FinalizeEx

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/capi/src/pylifecycle.rs` around lines 43 - 44, Py_FinalizeEx currently
returns success without clearing the global interpreter state; update
Py_FinalizeEx() to clear the lifecycle state by resetting MAIN_INTERP (the
global tracking the main interpreter) to a null/empty state so that
Py_IsInitialized() returns false after finalization, and ensure any associated
cleanup (e.g., freeing interpreter resources or flags referenced by MAIN_INTERP)
runs before returning; locate the function Py_FinalizeEx() and the MAIN_INTERP
symbol to implement the state reset and cleanup so reinitialization works
correctly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/capi/src/pylifecycle.rs`:
- Around line 29-34: Py_InitializeEx currently holds the MAIN_INTERP mutex
across attach_vm_to_thread(), which can re-lock MAIN_INTERP via
request_vm_from_interpreter() and deadlock; instead, construct the new
Interpreter (using Interpreter::with_init(...)), store it into MAIN_INTERP and
drop the MAIN_INTERP lock before calling attach_vm_to_thread(); specifically, in
Py_InitializeEx use get_main_interpreter()/MAIN_INTERP only to check/set the
interpreter and ensure the mutex guard is released prior to invoking
attach_vm_to_thread()/request_vm_from_interpreter().
- Around line 19-21: The Py_IsInitialized implementation currently returns the
inverse of the Python C-API contract; update the function Py_IsInitialized to
return get_main_interpreter().is_some() as c_int (instead of is_none()) so it
returns non-zero when an interpreter exists; ensure the change is made in the
Py_IsInitialized function that calls get_main_interpreter().

---

Duplicate comments:
In `@crates/capi/src/pylifecycle.rs`:
- Around line 43-44: Py_FinalizeEx currently returns success without clearing
the global interpreter state; update Py_FinalizeEx() to clear the lifecycle
state by resetting MAIN_INTERP (the global tracking the main interpreter) to a
null/empty state so that Py_IsInitialized() returns false after finalization,
and ensure any associated cleanup (e.g., freeing interpreter resources or flags
referenced by MAIN_INTERP) runs before returning; locate the function
Py_FinalizeEx() and the MAIN_INTERP symbol to implement the state reset and
cleanup so reinitialization works correctly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

Run ID: b3bd9f59-e5ed-4610-9f85-69c32dfcab49

📥 Commits

Reviewing files that changed from the base of the PR and between ce47be6 and 9d0000c.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • Cargo.toml
  • build.rs
  • crates/capi/Cargo.toml
  • crates/capi/src/lib.rs
  • crates/capi/src/pylifecycle.rs
  • src/lib.rs
✅ Files skipped from review due to trivial changes (1)
  • crates/capi/Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (1)
  • crates/capi/src/lib.rs

Contributor

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
crates/capi/src/lib.rs (1)

1-1: Narrow the missing_safety_doc suppression or document unsafe C API exports.

The crate-wide #![allow(clippy::missing_safety_doc)] at line 1 suppresses safety documentation requirements for all current and future FFI exports in this module. Multiple public extern "C" functions exist without documented safety contracts. Either add # Safety sections to these functions or narrow the allow to specific stubs that intentionally omit documentation. This aligns with Rust best practices for memory safety and error handling at FFI boundaries.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/capi/src/lib.rs` at line 1, The crate-wide attribute
#![allow(clippy::missing_safety_doc)] at the top suppresses safety docs for all
FFI exports; remove or narrow it and either add /// # Safety sections to each
public extern "C" function (any pub extern "C" fn in this module) describing
caller requirements and UB conditions, or replace the crate-level allow with
targeted #[allow(clippy::missing_safety_doc)] on specific stub items that
intentionally omit safety docs; update each FFI export (the pub extern "C"
functions) with concrete safety contracts or restrict the allow to those exact
symbols so future exports require documentation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/capi/src/lib.rs`:
- Line 14: Fix the doc-comment typo in crates/capi/src/lib.rs: change the word
"of" to "if" in the comment for the main interpreter (the comment starting "Get
main interpreter of this process.") so it reads "Get main interpreter of this
process. Will be None if it has not been initialized yet." and ensure the
corrected sentence is the docstring associated with the relevant function or
item that exposes the main interpreter.

---

Nitpick comments:
In `@crates/capi/src/lib.rs`:
- Line 1: The crate-wide attribute #![allow(clippy::missing_safety_doc)] at the
top suppresses safety docs for all FFI exports; remove or narrow it and either
add /// # Safety sections to each public extern "C" function (any pub extern "C"
fn in this module) describing caller requirements and UB conditions, or replace
the crate-level allow with targeted #[allow(clippy::missing_safety_doc)] on
specific stub items that intentionally omit safety docs; update each FFI export
(the pub extern "C" functions) with concrete safety contracts or restrict the
allow to those exact symbols so future exports require documentation.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

Run ID: 0f942005-1823-4294-a497-442218c274bb

📥 Commits

Reviewing files that changed from the base of the PR and between 9d0000c and 9346fd0.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • Cargo.toml
  • build.rs
  • crates/capi/Cargo.toml
  • crates/capi/src/lib.rs
  • crates/capi/src/pylifecycle.rs
  • src/lib.rs
✅ Files skipped from review due to trivial changes (1)
  • crates/capi/Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (3)
  • src/lib.rs
  • build.rs
  • crates/capi/src/pylifecycle.rs

Comment thread crates/capi/src/lib.rs Outdated
@bschoenmaeckers
Contributor Author

@youknowone I think I have the right setup now for the interpreter. Please let me know what you think.

use rustpython_vm::vm::thread::ThreadedVirtualMachine;

thread_local! {
static VM: RefCell<Option<ThreadedVirtualMachine>> = const { RefCell::new(None) };
Member

my last concern is here. This value is a duplication of RustPython's internal thread VM.
What if a thread is created inside RustPython? Will this still correctly point to it? Or does it not matter, because the user cannot directly access that thread via the C API?

Contributor Author

In this situation it will create two VMs for the same thread. I'm not sure if that is a problem, but I can imagine it will be. So maybe we have to find some other way?

Contributor Author

@bschoenmaeckers bschoenmaeckers Apr 23, 2026

I see the problem now. I might have a solution to this; I will think it over. If you have any insights, please don't hesitate to share!

Contributor Author

My last commit addresses this issue. It will now use VM_CURRENT when one is already active, but creates a new VM when a module spawned its own thread.

See 4c82636

Contributor Author

I've added a test case as well, to make sure we do not create a new VM when there is already one active.

See 650f588

Contributor Author

@youknowone Could you please take another look? 🙏

Comment thread src/lib.rs
let interp = builder.interpreter();
let exitcode = interp.run(move |vm| run_rustpython(vm, run_mode));
let exitcode = cfg_select! {
feature = "capi" => {{
Member

I have a question to help determine whether the Python C API needs to be supported in the Python executable itself.

Is this C API intended to be used both for embedding Python as a library and for building Python C extensions?
For the former, I assume we do not need to enable the C API in the executable binary itself. (We can publish a library rather than an executable.)
For the latter, however, it seems unavoidable.

Contributor Author

The C API is intended both for embedding Python and for use in extension libraries (Python packages).

Libraries are linked against the symbols exposed by the executable when they are imported by Python, not at compile time. This way a library does not have any Python code embedded into it. You can think of this as the client side of the API.

When embedding the Python runtime, you essentially take the implementation side (or server side, if we continue this thought experiment) of the API and embed it into the application. This is what the Python executable should be doing, because it is obviously running Python. This way a library can also be loaded dynamically and linked to these symbols.

I hope this clears up your understanding.
