
ROX-33361: Per-namespace persistence for process indicators#19455

Open
erthalion wants to merge 2 commits into master from feature/process-indicator-per-namespace

Conversation


@erthalion erthalion commented Mar 17, 2026

Description

Allow configuring per-namespace persistence for process indicators, so that Central does not need to store information that will never be used.

It can be configured via the DynamicConfig of the cluster configuration in the following form:

message RuntimeDataControl {
  string namespace_filter = 1;
  bool exclude_openshift = 2;
  bool persistence = 3;
}

Here namespace_filter specifies a custom regex for filtering out processes by matching their namespace, exclude_openshift instructs Central to exclude anything from openshift-* namespaces, and persistence can be used to disable storing process indicators entirely.
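The interaction of the three fields can be sketched in Go. This is an illustrative stand-in, not the actual Central implementation: `RuntimeDataControl` and `shouldPersist` here are hypothetical names, with the semantics taken from the description above (a namespace that matches the filter is dropped, persistence off drops everything).

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// RuntimeDataControl mirrors the proto message above (hypothetical Go shape).
type RuntimeDataControl struct {
	NamespaceFilter  string
	ExcludeOpenshift bool
	Persistence      bool
}

// shouldPersist decides whether an indicator from the given namespace
// would be stored. Persistence has highest priority, then the
// openshift-* exclusion, then the custom regex filter.
func shouldPersist(cfg RuntimeDataControl, namespace string) bool {
	if !cfg.Persistence {
		return false
	}
	if cfg.ExcludeOpenshift && strings.HasPrefix(namespace, "openshift-") {
		return false
	}
	if cfg.NamespaceFilter != "" {
		re, err := regexp.Compile(cfg.NamespaceFilter)
		if err == nil && re.MatchString(namespace) {
			// Namespace matched the filter: the indicator is filtered out.
			return false
		}
	}
	return true
}

func main() {
	cfg := RuntimeDataControl{
		NamespaceFilter:  "test1-.*|test2",
		ExcludeOpenshift: true,
		Persistence:      true,
	}
	fmt.Println(shouldPersist(cfg, "test1-foo"))      // false
	fmt.Println(shouldPersist(cfg, "openshift-etcd")) // false
	fmt.Println(shouldPersist(cfg, "production"))     // true
}
```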

The user can interact with this feature via the SecuredCluster CR, which looks like this:

spec:
  clusterName: my-cluster
  runtimeDataControl:
    excludeOpenshift: Enabled
    namespaceFilter: test1-.*|test2
    persistence: Enabled

The runtimeDataControl section of the SecuredCluster CR is propagated to Central in the SensorHello message on every Sensor restart (Sensor restarts on every SecuredCluster CR change).

User-facing documentation

Testing and quality

  • the change is production ready: the change is GA, or otherwise the functionality is gated by a feature flag
  • CI results are inspected

Automated testing

  • added unit tests
  • added e2e tests
  • added regression tests
  • added compatibility tests
  • modified existing tests

How I validated my change

Not validated yet.


openshift-ci Bot commented Mar 17, 2026

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch 2 times, most recently from 51829e9 to b15a83b Compare March 17, 2026 13:14

rhacs-bot commented Mar 17, 2026

Images are ready for the commit at 0473d0e.

To use with deploy scripts, first export MAIN_IMAGE_TAG=4.11.x-529-g0473d0ea47.

@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch 3 times, most recently from de351e1 to 8105eb3 Compare March 17, 2026 14:46
Comment on lines +369 to +370
suite.deploymentObservationQueue.EXPECT().
Push(gomock.Any())

Suggested change
suite.deploymentObservationQueue.EXPECT().
Push(gomock.Any())

Comment on lines +375 to +376
suite.cluster.EXPECT().GetCluster(gomock.Any(), clusterId).
Return(clusterNoOpenshift, true, nil)

Suggested change
suite.cluster.EXPECT().GetCluster(gomock.Any(), clusterId).
Return(clusterNoOpenshift, true, nil)

Comment on lines +382 to +383
err := suite.manager.IndicatorAdded(indicator)
suite.NoError(err)

Suggested change
err := suite.manager.IndicatorAdded(indicator)
suite.NoError(err)
// Add indicator to queue and flush synchronously to avoid race condition
suite.manager.addToIndicatorQueue(indicator)
suite.manager.flushIndicatorQueue()

@AlexVulaj

IndicatorAdded() spawns flushIndicatorQueue() in a goroutine, so the test completes before it runs, causing the missing call errors and unpredictable exits. Left some suggestions to make the test synchronous and remove mock expectations that aren't actually called.

@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch from 8105eb3 to 8d5502b Compare March 18, 2026 10:16
@erthalion
Contributor Author

IndicatorAdded() spawns flushIndicatorQueue() in a goroutine, so the test completes before it runs, causing the missing call errors and unpredictable exits.

Thanks, it escaped my attention that flushIndicatorQueue is executed in a goroutine. It's a shame that there seems to be no easy way to use testing/synctest for this purpose.
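Without testing/synctest, one conventional way to make such an async flush deterministic in tests is to have the manager track in-flight flushes so a test can wait for them. A minimal sketch with a hypothetical manager type (the real manager's fields and signatures differ):

```go
package main

import (
	"fmt"
	"sync"
)

// manager sketches a component whose IndicatorAdded spawns an async flush,
// as in the race described above. The sync.WaitGroup lets a test wait for
// all in-flight flushes instead of racing the goroutine.
type manager struct {
	mu      sync.Mutex
	queue   []string
	flushed []string
	wg      sync.WaitGroup
}

func (m *manager) IndicatorAdded(indicator string) {
	m.mu.Lock()
	m.queue = append(m.queue, indicator)
	m.mu.Unlock()

	m.wg.Add(1)
	go func() {
		defer m.wg.Done()
		m.flushIndicatorQueue()
	}()
}

// flushIndicatorQueue drains the queue under the lock, so each queued
// indicator is flushed exactly once even with concurrent flushes.
func (m *manager) flushIndicatorQueue() {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.flushed = append(m.flushed, m.queue...)
	m.queue = nil
}

func main() {
	m := &manager{}
	m.IndicatorAdded("proc-1")
	m.wg.Wait() // a test would wait here before asserting on m.flushed
	fmt.Println(len(m.flushed)) // 1
}
```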

@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch 5 times, most recently from d5149ab to e48fefa Compare March 18, 2026 17:10

codecov Bot commented Mar 18, 2026

Codecov Report

❌ Patch coverage is 36.66667% with 38 lines in your changes missing coverage. Please review.
✅ Project coverage is 49.76%. Comparing base (f948ae4) to head (e8583cf).
⚠️ Report is 3 commits behind head on master.

Files with missing lines Patch % Lines
operator/api/v1alpha1/zz_generated.deepcopy.go 0.00% 29 Missing ⚠️
...l/securedcluster/values/translation/translation.go 75.86% 5 Missing and 2 partials ⚠️
operator/api/v1alpha1/securedcluster_types.go 0.00% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #19455      +/-   ##
==========================================
- Coverage   49.76%   49.76%   -0.01%     
==========================================
  Files        2767     2767              
  Lines      209544   209604      +60     
==========================================
+ Hits       104277   104300      +23     
- Misses      97571    97607      +36     
- Partials     7696     7697       +1     
Flag Coverage Δ
go-unit-tests 49.76% <36.66%> (-0.01%) ⬇️


@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch from 2759996 to 7b03129 Compare March 20, 2026 14:24
@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch from 1221d80 to dca8f1a Compare March 23, 2026 12:16

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 5 issues, and left some high level feedback:

  • The namespace filtering logic in GetNamespaceFilter combined with MatchProcessIndicator looks like it will treat an empty filter string as a valid regex that matches all namespaces, effectively filtering out all indicators when no runtime data control is configured; consider explicitly handling the empty/nil config case (e.g., by returning nil/no regex or a regex that never matches) so that the default behavior is to not filter anything.
  • Related to the above, GetNamespaceFilter assumes that RuntimeDataControl is present and defaults persist=true/excludeOpenshift=false; if dynamic config or its RuntimeDataControl is nil, GetPersistence() and GetExcludeOpenshift() will fall back to protobuf defaults, which may not match the intended defaults—adding explicit defaulting when config == nil would make this more robust.
  • The new Infof logs for Helm config (HelmConfigInit and Set HelmManagedConfigInit) and for filtered indicators in flushIndicatorQueue may be too verbose/noisy at info level and could leak large or sensitive payloads; consider either dropping them or moving them to debug level and tightening the message wording (e.g., log only IDs instead of full structs).
## Individual Comments

### Comment 1
<location path="pkg/cluster/filtering.go" line_range="14" />
<code_context>
+	noPersistence   = ".*"
+)
+
+func GetNamespaceFilter(cluster *storage.Cluster) string {
+	var config *storage.DynamicClusterConfig_RuntimeDataControl
+
</code_context>
<issue_to_address>
**issue (bug_risk):** Guard against nil dynamic/Helm config to avoid panics when runtime data control is unset

This code assumes `cluster.GetDynamicConfig()`, `cluster.GetHelmConfig()`, `GetDynamicConfig()`, and `GetRuntimeDataControl()` are always non-nil. For clusters without dynamic or Helm config, dereferencing these will panic. Please add nil checks (e.g., early-return an empty filter string) before accessing nested config fields.
</issue_to_address>

### Comment 2
<location path="pkg/cluster/filtering.go" line_range="23-25" />
<code_context>
+		config = cluster.GetHelmConfig().GetDynamicConfig().GetRuntimeDataControl()
+	}
+
+	// Persistence filter has highest priority, since any other
+	// option is only its subset
+	if !config.GetPersistence() {
+		return noPersistence
+	}
</code_context>
<issue_to_address>
**issue (bug_risk):** Using an empty regex as default causes all namespaces to match and thus be filtered out

With `Persistence=true`, empty `NamespaceFilter`, and `ExcludeOpenshift=false`, `GetNamespaceFilter` returns `""`. Since `regexp.MustCompile("")` matches all strings, `MatchProcessIndicator` treats every indicator as matching and filters everything out (you only append when `!match`). This makes the "no filter" default effectively disable persistence. Please either return a never-matching regex for the empty case (e.g. `"a^"`) or treat an empty filter as "no filtering" by skipping regex/cache creation.
</issue_to_address>

### Comment 3
<location path="central/cluster/datastore/datastore_impl.go" line_range="184-185" />
<code_context>

 	for _, c := range clusters {
 		ds.idToNameCache.Add(c.GetId(), c.GetName())
+		ds.idToRegexCache.Add(c.GetId(),
+			regexp.MustCompile(clusterPkg.GetNamespaceFilter(c)))
 		ds.nameToIDCache.Add(c.GetName(), c.GetId())
 		c.HealthStatus = clusterHealthStatuses[c.GetId()]
</code_context>
<issue_to_address>
**issue (bug_risk):** Avoid panic by not using MustCompile on user-supplied regex filters

Since the namespace filter comes from user-controlled config, `regexp.MustCompile` can panic the process on invalid input, causing Central to crash-loop. Use `regexp.Compile` instead and either validate the filter at config admission time or handle errors gracefully (e.g., log and fall back to no filtering).
</issue_to_address>
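A sketch of the graceful handling this comment suggests, using a hypothetical `compileFilter` helper (not the PR's actual code): user-supplied patterns go through `regexp.Compile`, and invalid input logs and falls back to "no filtering" instead of panicking the process.

```go
package main

import (
	"fmt"
	"regexp"
)

// compileFilter compiles a user-supplied namespace filter. On invalid
// input it logs and returns nil (meaning "no filtering") rather than
// crashing Central the way regexp.MustCompile would.
func compileFilter(pattern string) *regexp.Regexp {
	re, err := regexp.Compile(pattern)
	if err != nil {
		fmt.Printf("invalid namespace filter %q, disabling filtering: %v\n", pattern, err)
		return nil
	}
	return re
}

func main() {
	fmt.Println(compileFilter("test1-.*|test2") != nil) // valid pattern compiles
	fmt.Println(compileFilter("[") == nil)              // "[" fails to compile
}
```

Validating the filter at config admission time, as the comment suggests, would additionally surface the typo to the user instead of only logging it.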

### Comment 4
<location path="central/cluster/datastore/datastore_impl.go" line_range="349-354" />
<code_context>
+		return false, err
+	}
+
+	filter, ok := ds.idToRegexCache.Get(id)
+	if !ok {
+		return false, nil
+	}
+
+	return filter.(*regexp.Regexp).MatchString(indicator.GetNamespace()), nil
+}
+
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Dropping indicators on MatchProcessIndicator error can silently lose data

Because `flushIndicatorQueue` treats any non-nil error from `MatchProcessIndicator` as “don’t append”, indicators are silently dropped on error. This could occur due to regex compilation issues, cache inconsistencies, or SAC failures. Consider either treating errors as “no filtering” (still append) and/or emitting a metric so these cases are visible. Also, when the SAC check returns `ok == false` with `err == nil`, you currently return `(false, nil)`, which causes the indicator to be appended; if SAC should be enforced, `!ok` should likely be treated as an error instead.

Suggested implementation:

```golang
func (ds *datastoreImpl) MatchProcessIndicator(ctx context.Context,
	indicator *storage.ProcessIndicator) (bool, error) {
	clusterID := indicator.GetClusterId()

	// Enforce SAC: propagate errors and treat !allowed as an error so the caller
	// does not append the indicator on unauthorized access.
	allowed, err := clusterSAC.ReadAllowed(ctx, sac.ClusterScopeKey(clusterID))
	if err != nil {
		// SAC error: surface to caller (caller will NOT append on error).
		return false, err
	}
	if !allowed {
		// SAC denied with no error: treat as an explicit error so caller does not append.
		return false, errors.New("process indicator access denied by SAC")
	}

	// Look up the compiled regex for this cluster. Cache miss means no filter
	// is configured: treat as "no filtering" and allow the indicator through.
	value, ok := ds.idToRegexCache.Get(clusterID)
	if !ok {
		return true, nil
	}

	// If the cached value is not a *regexp.Regexp, treat this as an internal
	// inconsistency: emit a metric/log (elsewhere) but do not drop the indicator.
	regex, ok := value.(*regexp.Regexp)
	if !ok {
		return true, nil
	}

	// Apply the filter to the indicator's namespace.
	return regex.MatchString(indicator.GetNamespace()), nil
}

	for _, c := range clusters {

```

1. Ensure `central/cluster/datastore/datastore_impl.go` imports the `errors` and `regexp` packages if they are not already imported:
   - `import "errors"`
   - `import "regexp"`
2. To avoid silent internal failures (cache inconsistencies, bad regex, etc.), add logging and/or metrics where indicated in comments:
   - On the `!ok` type assertion branch (`if !ok { return true, nil }`), increment a counter and/or log a warning so these cases are visible operationally.
3. Verify that `flushIndicatorQueue` (or the caller) still treats `err != nil` as “do not append”, which now only occurs for SAC errors/denials; other internal issues are treated as “no filtering” and will still append.
</issue_to_address>

### Comment 5
<location path="central/sensor/service/service_impl.go" line_range="275-276" />
<code_context>

 func (s *serviceImpl) getClusterForConnection(sensorHello *central.SensorHello, serviceID *storage.ServiceIdentity) (*storage.Cluster, error) {
 	helmConfigInit := sensorHello.GetHelmManagedConfigInit()
+	log.Infof("HelmConfigInit %+v", helmConfigInit)

 	clusterIDFromCert := serviceID.GetId()
</code_context>
<issue_to_address>
**🚨 issue (security):** Logging full Helm managed config may expose sensitive information

Using `%+v` here will log the full Helm-managed configuration, which may contain sensitive or environment-specific data. Please either remove this log, restrict it to non-sensitive identifiers, or implement redaction/structured logging for any potentially sensitive fields.
</issue_to_address>


@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch 2 times, most recently from 62f6517 to 2a2dd0d Compare April 8, 2026 14:16
@erthalion
Copy link
Copy Markdown
Contributor Author

/test gke-ui-e2e-tests ocp-4-21-qa-e2e-tests

@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch 6 times, most recently from eaf8c74 to 7dd25d5 Compare April 10, 2026 13:50
@erthalion
Copy link
Copy Markdown
Contributor Author

/test ocp-4-12-operator-e2e-tests

@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch 2 times, most recently from c304f03 to b7b14e7 Compare April 13, 2026 11:18
@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch 2 times, most recently from 9ee731d to 9efd3a9 Compare April 16, 2026 09:08

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
central/cluster/datastore/datastore_impl_test.go (1)

838-905: Test is coupled to cache internals and has a misleading comment.

Two small points:

  1. The test populates idToRegexCache directly via regexp.MustCompile(*filter) rather than exercising the production path (e.g., updateClusterNoLock) that compiles and stores the regex. This means if GetNamespaceFilter ever returns nil for a valid config the test will panic, and more importantly the test doesn't validate that the production compile-and-cache path stays consistent with what MatchProcessIndicator expects.
  2. Lines 885 and 898 both say "update the cache to take persistence into account", but line 883 toggles ExcludeOpenshiftNs, not persistence. Consider fixing the comment for the ExcludeOpenshiftNs case.

Optionally, drive the cache through updateClusterNoLock (you already verify that flow in TestUpdateCluster) to reduce coupling to the internal representation.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@central/cluster/datastore/datastore_impl_test.go`:
- Around line 938-967: The updateClusterNoLock flow currently ignores regex
compile failures for
Cluster.HelmConfig.DynamicConfig.ProcessIndicators.ExcludeNamespaceFilter and
leaves idToRegexCache unset; modify updateClusterNoLock in datastore_impl.go to
capture the regex compile error when building the exclude filter from
ExcludeNamespaceFilter, and at minimum call the datastore logger (e.g.,
logger.Warnf or logger.Errorf) including the cluster ID and the compile error
message so failures are visible; additionally, consider returning a validation
error from updateClusterNoLock (propagate the error instead of returning nil) so
callers can surface misconfiguration.

In `@deploy/common/k8sbased.sh`:
- Around line 912-915: The append to extra_json_dynamic_config inside the
SENSOR_HELM_DEPLOY=="true" branch is dead (only extra_helm_config --set is used
for Helm) and also introduces a missing comma producing invalid JSON if ever
consumed; remove the line that mutates extra_json_dynamic_config in that Helm
branch so only extra_helm_config+=(--set
"processIndicators.namespaceFilter=namespace-without-persistence") remains, or
alternatively move the append out of the Helm conditional and apply the same
comma-guard pattern used elsewhere (check the if [[ -n
"$extra_json_dynamic_config" ]]; then ... fi pattern) so
extra_json_dynamic_config stays valid when used by the non-Helm dynamicConfig
assembly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Central YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: 5dcfc146-72ff-42f4-b062-a0b25c9e59c6

📥 Commits

Reviewing files that changed from the base of the PR and between d852012 and 9efd3a9.

⛔ Files ignored due to path filters (3)
  • generated/api/v1/cluster_service.swagger.json is excluded by !**/generated/**
  • generated/storage/cluster.pb.go is excluded by !**/*.pb.go, !**/generated/**
  • generated/storage/cluster_vtproto.pb.go is excluded by !**/*.pb.go, !**/generated/**
📒 Files selected for processing (11)
  • central/cluster/datastore/datastore.go
  • central/cluster/datastore/datastore_impl.go
  • central/cluster/datastore/datastore_impl_test.go
  • central/cluster/datastore/mocks/datastore.go
  • central/detection/lifecycle/manager_impl.go
  • central/detection/lifecycle/manager_impl_test.go
  • central/graphql/resolvers/generated.go
  • deploy/common/k8sbased.sh
  • image/templates/helm/stackrox-secured-cluster/internal/cluster-config.yaml.tpl.htpl
  • image/templates/helm/stackrox-secured-cluster/internal/config-shape.yaml
  • image/templates/helm/stackrox-secured-cluster/internal/defaults/30-base-config.yaml.htpl
✅ Files skipped from review due to trivial changes (1)
  • image/templates/helm/stackrox-secured-cluster/internal/config-shape.yaml
🚧 Files skipped from review as they are similar to previous changes (5)
  • image/templates/helm/stackrox-secured-cluster/internal/cluster-config.yaml.tpl.htpl
  • image/templates/helm/stackrox-secured-cluster/internal/defaults/30-base-config.yaml.htpl
  • central/detection/lifecycle/manager_impl.go
  • central/cluster/datastore/datastore_impl.go
  • central/detection/lifecycle/manager_impl_test.go

Comment on lines +938 to +967
func (s *clusterDataStoreTestSuite) TestUpdateClusterIncorrectFilter() {
clusterID := fixtureconsts.Cluster1
testCluster := &storage.Cluster{
Id: clusterID,
Name: "test",
ManagedBy: storage.ManagerType_MANAGER_TYPE_HELM_CHART,
MainImage: "docker.io/stackrox/rox:latest",
HelmConfig: &storage.CompleteClusterConfig{
DynamicConfig: &storage.DynamicClusterConfig{
ProcessIndicators: &storage.DynamicClusterConfig_ProcessIndicatorsConfig{
ExcludeNamespaceFilter: "[",
NoPersistence: false,
},
},
},
}

ctx := sac.WithAllAccess(s.T().Context())

s.clusterStore.EXPECT().
Upsert(gomock.Any(), gomock.Any()).
Return(nil)

err := s.datastore.updateClusterNoLock(ctx, testCluster)
s.NoError(err)

filter, ok := s.datastore.idToRegexCache.Get(clusterID)
assert.False(s.T(), ok)
assert.Nil(s.T(), filter)
}

⚠️ Potential issue | 🟡 Minor

Invalid regex is silently ignored — consider surfacing an error.

The test pins the current behavior: when ExcludeNamespaceFilter is invalid ("["), updateClusterNoLock returns nil and simply leaves idToRegexCache unpopulated. Since this filter is user-supplied via SecuredClusterCR.runtimeDataControl.namespaceFilter, silent failure means an operator typo leads to no indicators being filtered with zero feedback. At minimum, ensure the regex-compile failure is logged with the cluster ID at warn/error level (verify in datastore_impl.go), and consider returning a validation error (or surfacing it on the SecuredCluster status) so misconfiguration is discoverable.


@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch from 9efd3a9 to 5d70b6b Compare April 20, 2026 09:40
@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch 2 times, most recently from d0f3a73 to d39628f Compare April 20, 2026 10:56
Comment on deploy/common/k8sbased.sh:
kubectl -n "${sensor_namespace}" get secret stackrox &>/dev/null || kubectl -n "${sensor_namespace}" create -f - < <("${common_dir}/pull-secret.sh" stackrox docker.io)
fi

# Per-namespace filtering tests expect to have one namespace configured

I think if possible it would be preferable to set this in one of the test lib.sh files or something along those lines.

Contributor Author


Makes sense.

@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch from d39628f to 44392d1 Compare April 21, 2026 13:42

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (1)
qa-tests-backend/src/test/groovy/ProcessVisualizationTest.groovy (1)

354-367: ⚠️ Potential issue | 🟠 Major

Avoid the full 4-minute wait on the passing path.

The expected no-process case always waits MAX_SLEEP_TIME because the loop only breaks when processes appear. Use a shorter negative-check timeout or another readiness signal before this assertion.

Proposed adjustment
     static final private MAX_SLEEP_TIME = 240000
+    static final private NO_PERSISTENCE_MAX_SLEEP_TIME = 30000
     static final private SLEEP_INCREMENT = 5000
-        int retries = MAX_SLEEP_TIME / SLEEP_INCREMENT
+        int retries = NO_PERSISTENCE_MAX_SLEEP_TIME / SLEEP_INCREMENT
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@qa-tests-backend/src/test/groovy/ProcessVisualizationTest.groovy` around
lines 354 - 367, The test currently always waits the full MAX_SLEEP_TIME because
the loop in ProcessVisualizationTest only breaks when
ProcessService.getUniqueProcessPaths(uid) returns >0; change the negative-check
path to use a shorter timeout: introduce a separate shortRetryCount (e.g.
SHORT_MAX_SLEEP / SLEEP_INCREMENT) or a boolean flag for the "expect no process"
case and construct Timer t = new Timer(shortRetries, shortDelay) when that case
applies, then loop using t.IsValid() and call
ProcessService.getUniqueProcessPaths(uid) as before; this ensures the test does
not block for MAX_SLEEP_TIME on the passing (no-process) path while keeping the
same polling logic.
🧹 Nitpick comments (3)
tests/e2e/lib.sh (1)

416-419: Respect explicit feature-flag overrides.

This forces the flag on for every Sensor deployment, so jobs cannot validate disabled or compatibility behavior by setting ROX_PROCESS_INDICATORS_PER_NAMESPACE=false. Default it to true while preserving caller input.

Proposed adjustment
-    ci_export ROX_PROCESS_INDICATORS_PER_NAMESPACE "true"
+    ci_export ROX_PROCESS_INDICATORS_PER_NAMESPACE "${ROX_PROCESS_INDICATORS_PER_NAMESPACE:-true}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/e2e/lib.sh` around lines 416 - 419, The current unconditional ci_export
ROX_PROCESS_INDICATORS_PER_NAMESPACE "true" forces the flag for all runs; change
it to only set the default when the caller hasn't provided an explicit value so
callers can override with ROX_PROCESS_INDICATORS_PER_NAMESPACE=false. Update the
logic around the ci_export call in tests/e2e/lib.sh to check whether
ROX_PROCESS_INDICATORS_PER_NAMESPACE is already set (use the existing ci_export
helper/variable-check pattern) and only export "true" when it is undefined or
empty, leaving any caller-provided value intact.
operator/config/manifests/bases/rhacs-operator.clusterserviceversion.yaml (1)

1347-1351: displayName should be human-friendly.

displayName: ProcessIndicators renders without a space in the UI, unlike surrounding groups ("Sensor Settings", "Per Node Settings", etc.). Consider "Process Indicators" (or "Process Indicators Settings" for consistency with other group-level descriptors).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@operator/config/manifests/bases/rhacs-operator.clusterserviceversion.yaml`
around lines 1347 - 1351, Update the group displayName for the ProcessIndicators
group so it is human-friendly and consistent with other groups: change the
displayName value associated with the block that has path: processIndicators and
x-descriptors: [urn:alm:descriptor:com.tectonic.ui:advanced] (the current
displayName: ProcessIndicators) to a spaced, readable label such as "Process
Indicators" or "Process Indicators Settings".
operator/api/v1alpha1/securedcluster_types.go (1)

534-537: Missing godoc on exported constants.

ProcessIndicatorConfigEnabled and ProcessIndicatorConfigDisabled are exported but lack doc comments, unlike the sibling enums in this file (e.g., ProcessBaselinesAutoLockModeEnabled). This will trip revive/golint exported-comment checks.

Proposed fix
 const (
-	ProcessIndicatorConfigEnabled  ProcessIndicatorConfigSwitch = "Enabled"
-	ProcessIndicatorConfigDisabled ProcessIndicatorConfigSwitch = "Disabled"
+	// ProcessIndicatorConfigEnabled means the corresponding ProcessIndicators knob is enabled.
+	ProcessIndicatorConfigEnabled ProcessIndicatorConfigSwitch = "Enabled"
+	// ProcessIndicatorConfigDisabled means the corresponding ProcessIndicators knob is disabled.
+	ProcessIndicatorConfigDisabled ProcessIndicatorConfigSwitch = "Disabled"
 )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@operator/api/v1alpha1/securedcluster_types.go` around lines 534 - 537, Add
exported godoc comments for the constants ProcessIndicatorConfigEnabled and
ProcessIndicatorConfigDisabled to satisfy lint rules; mirror the style used for
sibling enums (e.g., ProcessBaselinesAutoLockModeEnabled) by placing a short
descriptive comment above each constant that explains what the constant
represents (e.g., that it toggles the process indicator configuration to
Enabled/Disabled) and ensure the comment starts with the constant name exactly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.


Duplicate comments:
In `@qa-tests-backend/src/test/groovy/ProcessVisualizationTest.groovy`:
- Around line 354-367: The test currently always waits the full MAX_SLEEP_TIME
because the loop in ProcessVisualizationTest only breaks when
ProcessService.getUniqueProcessPaths(uid) returns >0; change the negative-check
path to use a shorter timeout: introduce a separate shortRetryCount (e.g.
SHORT_MAX_SLEEP / SLEEP_INCREMENT) or a boolean flag for the "expect no process"
case and construct Timer t = new Timer(shortRetries, shortDelay) when that case
applies, then loop using t.IsValid() and call
ProcessService.getUniqueProcessPaths(uid) as before; this ensures the test does
not block for MAX_SLEEP_TIME on the passing (no-process) path while keeping the
same polling logic.
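
A minimal Go sketch of the suggested pattern (names such as `pollUntil` and the timeout values are illustrative, not from the test code): the polling loop stays the same, but the negative "expect no process" case gets a much shorter retry budget.

```go
package main

import (
	"fmt"
	"time"
)

// pollUntil polls check every interval until it returns true or timeout elapses.
// For a negative check ("expect no process"), callers pass a short timeout so
// the passing path does not block for the full MAX_SLEEP_TIME budget.
func pollUntil(check func() bool, interval, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	// Hypothetical stand-in for ProcessService.getUniqueProcessPaths(uid).
	pathCount := 0
	expectProcesses := false

	timeout := 2 * time.Second // stand-in for MAX_SLEEP_TIME
	if !expectProcesses {
		timeout = 200 * time.Millisecond // shorter budget for the negative case
	}
	found := pollUntil(func() bool { return pathCount > 0 }, 50*time.Millisecond, timeout)
	fmt.Println(found)
}
```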

---

Nitpick comments:
In `@operator/api/v1alpha1/securedcluster_types.go`:
- Around line 534-537: Add exported godoc comments for the constants
ProcessIndicatorConfigEnabled and ProcessIndicatorConfigDisabled to satisfy lint
rules; mirror the style used for sibling enums (e.g.,
ProcessBaselinesAutoLockModeEnabled) by placing a short descriptive comment
above each constant that explains what the constant represents (e.g., that it
toggles the process indicator configuration to Enabled/Disabled) and ensure the
comment starts with the constant name exactly.

In `@operator/config/manifests/bases/rhacs-operator.clusterserviceversion.yaml`:
- Around line 1347-1351: Update the group displayName for the ProcessIndicators
group so it is human-friendly and consistent with other groups: change the
displayName value associated with the block that has path: processIndicators and
x-descriptors: [urn:alm:descriptor:com.tectonic.ui:advanced] (the current
displayName: ProcessIndicators) to a spaced, readable label such as "Process
Indicators" or "Process Indicators Settings".

In `@tests/e2e/lib.sh`:
- Around line 416-419: The current unconditional ci_export
ROX_PROCESS_INDICATORS_PER_NAMESPACE "true" forces the flag for all runs; change
it to only set the default when the caller hasn't provided an explicit value so
callers can override with ROX_PROCESS_INDICATORS_PER_NAMESPACE=false. Update the
logic around the ci_export call in tests/e2e/lib.sh to check whether
ROX_PROCESS_INDICATORS_PER_NAMESPACE is already set (use the existing ci_export
helper/variable-check pattern) and only export "true" when it is undefined or
empty, leaving any caller-provided value intact.
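
A common shell idiom for the requested behavior is to set the default only when the variable is unset or empty, so a caller-provided value survives (sketch only; the real code would still go through the project's `ci_export` helper):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Default to "true" only when the caller has not provided a value,
# so ROX_PROCESS_INDICATORS_PER_NAMESPACE=false is left intact.
if [[ -z "${ROX_PROCESS_INDICATORS_PER_NAMESPACE:-}" ]]; then
    ROX_PROCESS_INDICATORS_PER_NAMESPACE="true"
fi
export ROX_PROCESS_INDICATORS_PER_NAMESPACE
echo "${ROX_PROCESS_INDICATORS_PER_NAMESPACE}"
```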

ℹ️ Review info
⚙️ Run configuration

Configuration used: Central YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: 68f1f4a0-64e4-4b48-b8e9-ed8f51e58cbc

📥 Commits

Reviewing files that changed from the base of the PR and between 9efd3a9 and 44392d1.

📒 Files selected for processing (17)
  • deploy/common/k8sbased.sh
  • image/templates/helm/stackrox-secured-cluster/internal/cluster-config.yaml.tpl.htpl
  • image/templates/helm/stackrox-secured-cluster/internal/config-shape.yaml
  • image/templates/helm/stackrox-secured-cluster/internal/defaults/30-base-config.yaml.htpl
  • operator/api/v1alpha1/securedcluster_types.go
  • operator/api/v1alpha1/zz_generated.deepcopy.go
  • operator/bundle/manifests/platform.stackrox.io_securedclusters.yaml
  • operator/bundle/manifests/rhacs-operator.clusterserviceversion.yaml
  • operator/config/crd/bases/platform.stackrox.io_securedclusters.yaml
  • operator/config/manifests/bases/rhacs-operator.clusterserviceversion.yaml
  • operator/internal/securedcluster/defaults/static.go
  • operator/internal/securedcluster/values/translation/translation.go
  • operator/internal/securedcluster/values/translation/translation_test.go
  • pkg/features/list.go
  • qa-tests-backend/src/test/groovy/ProcessVisualizationTest.groovy
  • tests/e2e/lib.sh
  • tests/e2e/yaml/secured-cluster-cr.envsubst.yaml
✅ Files skipped from review due to trivial changes (3)
  • tests/e2e/yaml/secured-cluster-cr.envsubst.yaml
  • operator/internal/securedcluster/defaults/static.go
  • image/templates/helm/stackrox-secured-cluster/internal/config-shape.yaml
🚧 Files skipped from review as they are similar to previous changes (7)
  • image/templates/helm/stackrox-secured-cluster/internal/cluster-config.yaml.tpl.htpl
  • pkg/features/list.go
  • operator/internal/securedcluster/values/translation/translation_test.go
  • deploy/common/k8sbased.sh
  • image/templates/helm/stackrox-secured-cluster/internal/defaults/30-base-config.yaml.htpl
  • operator/bundle/manifests/rhacs-operator.clusterserviceversion.yaml
  • operator/bundle/manifests/platform.stackrox.io_securedclusters.yaml

Comment on lines +511 to +526
// ProcessIndicators defines per-namespace filtering configuration for Process Indicators.
type ProcessIndicatorsSpec struct {
// Whether to persist individual process indicators.
// The default is: Enabled.
//+operator-sdk:csv:customresourcedefinitions:type=spec,order=1,xDescriptors={"urn:alm:descriptor:com.tectonic.ui:select:Enabled", "urn:alm:descriptor:com.tectonic.ui:select:Disabled"}
Persistence *ProcessIndicatorConfigSwitch `json:"persistence,omitempty"`

// Whether to persist process indicators originating from openshift namespaces.
// The default is: Disabled.
//+operator-sdk:csv:customresourcedefinitions:type=spec,order=2,xDescriptors={"urn:alm:descriptor:com.tectonic.ui:select:Enabled", "urn:alm:descriptor:com.tectonic.ui:select:Disabled"}
ExcludeOpenshiftNs *ProcessIndicatorConfigSwitch `json:"excludeOpenshiftNs,omitempty"`

// A regex specifying to not persist process indicators from specified namespaces.
//+operator-sdk:csv:customresourcedefinitions:type=spec,order=3
ExcludeNamespaceFilter *string `json:"excludeNamespaceFilter,omitempty"`
}

⚠️ Potential issue | 🟡 Minor

Doc comment name mismatch and inverted ExcludeOpenshiftNs description.

  • Line 511: the doc comment says "ProcessIndicators defines ..." but the type is ProcessIndicatorsSpec. Godoc convention is for the comment to start with the identifier it documents.
  • Lines 518–521: description "Whether to persist process indicators originating from openshift namespaces" is the inverse of what the field actually controls. The field is ExcludeOpenshiftNs, so Enabled means exclude (do not persist). Please reword to "Whether to exclude process indicators originating from openshift namespaces from persistence." and regenerate the CSV/CRD so the operand UI matches.
Proposed fix
-// ProcessIndicators defines per-namespace filtering configuration for Process Indicators.
+// ProcessIndicatorsSpec defines per-namespace filtering configuration for Process Indicators.
 type ProcessIndicatorsSpec struct {
 	// Whether to persist individual process indicators.
 	// The default is: Enabled.
 	//+operator-sdk:csv:customresourcedefinitions:type=spec,order=1,xDescriptors={"urn:alm:descriptor:com.tectonic.ui:select:Enabled", "urn:alm:descriptor:com.tectonic.ui:select:Disabled"}
 	Persistence *ProcessIndicatorConfigSwitch `json:"persistence,omitempty"`
 
-	// Whether to persist process indicators originating from openshift namespaces.
-	// The default is: Disabled.
+	// Whether to exclude process indicators originating from openshift namespaces from persistence.
+	// The default is: Disabled.
 	//+operator-sdk:csv:customresourcedefinitions:type=spec,order=2,xDescriptors={"urn:alm:descriptor:com.tectonic.ui:select:Enabled", "urn:alm:descriptor:com.tectonic.ui:select:Disabled"}
 	ExcludeOpenshiftNs *ProcessIndicatorConfigSwitch `json:"excludeOpenshiftNs,omitempty"`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@operator/api/v1alpha1/securedcluster_types.go` around lines 511 - 526, Update
the doc comment and field description to follow godoc and correct semantics:
change the top comment to begin with "ProcessIndicatorsSpec" (instead of
"ProcessIndicators") and reword the ExcludeOpenshiftNs field comment to read
something like "Whether to exclude process indicators originating from openshift
namespaces from persistence." Also verify and update related CSV/CRD annotations
so the operator UI matches the corrected meaning for ExcludeOpenshiftNs; ensure
other field comments (Persistence, ExcludeNamespaceFilter) remain accurate and
regenerate the CSV/CRD manifests.

Comment on lines +1011 to +1018
excludeOpenshiftNs:
description: |-
Whether to persist process indicators originating from openshift namespaces.
The default is: Disabled.
enum:
- Enabled
- Disabled
type: string

⚠️ Potential issue | 🟡 Minor

Clarify that Enabled means excluding OpenShift namespaces.

The current wording says “whether to persist”, which inverts the meaning of excludeOpenshiftNs and can cause users to choose the wrong setting.

Proposed wording fix
-                    description: |-
-                      Whether to persist process indicators originating from openshift namespaces.
+                    description: |-
+                      Whether to exclude process indicators originating from OpenShift namespaces from persistence.
                       The default is: Disabled.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@operator/config/crd/bases/platform.stackrox.io_securedclusters.yaml` around
lines 1011 - 1018, The description for the CRD field excludeOpenshiftNs is
misleading (it currently says “Whether to persist…”) and should be changed to
explicitly state that Enabled means OpenShift namespaces are excluded; update
the description for excludeOpenshiftNs to something like “Whether to exclude
process indicators originating from OpenShift namespaces. When Enabled, process
indicators from OpenShift namespaces will not be persisted. The default is:
Disabled.” so the semantics match the field name and enum values.

Comment on lines +1135 to +1142
- description: |-
Whether to persist process indicators originating from openshift namespaces.
The default is: Disabled.
displayName: Exclude Openshift Ns
path: processIndicators.excludeOpenshiftNs
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:select:Enabled
- urn:alm:descriptor:com.tectonic.ui:select:Disabled

⚠️ Potential issue | 🟡 Minor

Inverted/confusing description for excludeOpenshiftNs.

The description reads "Whether to persist process indicators originating from openshift namespaces" but the field is an exclude switch: setting it to Enabled means these indicators are excluded (not persisted). As worded, Enabled reads like "persist them", which is the opposite of the actual behavior. Please rephrase, e.g. "Whether to exclude process indicators originating from openshift namespaces from persistence."

The same wording issue exists in the Go type description in operator/api/v1alpha1/securedcluster_types.go (around the ExcludeOpenshiftNs field) — fixing the root description there and regenerating will align both.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@operator/config/manifests/bases/rhacs-operator.clusterserviceversion.yaml`
around lines 1135 - 1142, The user-facing description for the exclude switch is
inverted; update the CSV description for path
processIndicators.excludeOpenshiftNs to clearly state that enabling this option
excludes (does not persist) process indicators from OpenShift namespaces (e.g.,
"Whether to exclude process indicators originating from OpenShift namespaces
from persistence") and then update the Go struct field comment for
ExcludeOpenshiftNs in operator/api/v1alpha1/securedcluster_types.go to the same
corrected wording and regenerate the CSV so both sources are consistent; target
the YAML entry under displayName "Exclude Openshift Ns" and the Go field
ExcludeOpenshiftNs when making the edits.

Comment on lines +592 to +609
if processIndicators.Persistence != nil {
// Note that we reverse the logic here: SecuredClusterCR comes with the
// field Persistence, while the ProcessIndicatorsConfig has
// "no_persistence" field. The reason is that there is no way to
// specify default values for protobuf, thus we need to stick with a
// false as a default for booleans, hence "no_persistence" at the
// config level. But at the CRD level as a user interface it's easier
// to operate with a positive meaning, thus SecuredClusterCR has
// Persistence. It's somewhat tricky, but below is the only place where
// it might be relevant.
cv.SetBoolValue("noPersistence",
*processIndicators.Persistence == platform.ProcessIndicatorConfigDisabled)
}

if processIndicators.ExcludeOpenshiftNs != nil {
cv.SetBoolValue("excludeOpenshiftNs",
*processIndicators.ExcludeOpenshiftNs == platform.ProcessIndicatorConfigEnabled)
}

⚠️ Potential issue | 🟡 Minor

Reject invalid process-indicator switch values.

Unknown enum strings currently fall through as false, which silently enables persistence or disables OpenShift exclusion. Match the stricter validation pattern used by the other translators.

Proposed validation
 	if processIndicators.Persistence != nil {
-		// Note that we reverse the logic here: SecuredClusterCR comes with the
-		// field Persistence, while the ProcessIndicatorsConfig has
-		// "no_persistence" field. The reason is that there is no way to
-		// specify default values for protobuf, thus we need to stick with a
-		// false as a default for booleans, hence "no_persistence" at the
-		// config level. But at the CRD level as a user interface it's easier
-		// to operate with a positive meaning, thus SecuredClusterCR has
-		// Persistence. It's somewhat tricky, but below is the only place where
-		// it might be relevant.
-		cv.SetBoolValue("noPersistence",
-			*processIndicators.Persistence == platform.ProcessIndicatorConfigDisabled)
+		switch *processIndicators.Persistence {
+		case platform.ProcessIndicatorConfigDisabled:
+			cv.SetBoolValue("noPersistence", true)
+		case platform.ProcessIndicatorConfigEnabled:
+			cv.SetBoolValue("noPersistence", false)
+		default:
+			return cv.SetError(errors.Errorf("invalid spec.processIndicators.persistence setting %q", *processIndicators.Persistence))
+		}
 	}
 
 	if processIndicators.ExcludeOpenshiftNs != nil {
-		cv.SetBoolValue("excludeOpenshiftNs",
-			*processIndicators.ExcludeOpenshiftNs == platform.ProcessIndicatorConfigEnabled)
+		switch *processIndicators.ExcludeOpenshiftNs {
+		case platform.ProcessIndicatorConfigEnabled:
+			cv.SetBoolValue("excludeOpenshiftNs", true)
+		case platform.ProcessIndicatorConfigDisabled:
+			cv.SetBoolValue("excludeOpenshiftNs", false)
+		default:
+			return cv.SetError(errors.Errorf("invalid spec.processIndicators.excludeOpenshiftNs setting %q", *processIndicators.ExcludeOpenshiftNs))
+		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@operator/internal/securedcluster/values/translation/translation.go` around
lines 592 - 609, The current comparisons in translation.go that set
cv.SetBoolValue for "noPersistence" and "excludeOpenshiftNs" assume any
non-matching enum is false; instead validate the incoming enums explicitly and
reject unknown values: for processIndicators.Persistence ensure the pointer is
either platform.ProcessIndicatorConfigDisabled or
platform.ProcessIndicatorConfigEnabled and return an error if it is neither
before calling cv.SetBoolValue("noPersistence", *processIndicators.Persistence
== platform.ProcessIndicatorConfigDisabled); similarly for
processIndicators.ExcludeOpenshiftNs, ensure the value is either
platform.ProcessIndicatorConfigEnabled or
platform.ProcessIndicatorConfigDisabled and return an error on unknown enum
before calling cv.SetBoolValue("excludeOpenshiftNs",
*processIndicators.ExcludeOpenshiftNs ==
platform.ProcessIndicatorConfigEnabled).
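
The strict-validation pattern suggested above can be sketched as a self-contained snippet; the `Switch` type and `boolFromSwitch` helper are stand-ins for `platform.ProcessIndicatorConfigSwitch` and the chart-value plumbing, not the actual translator code. The point is that an unknown enum string is rejected rather than silently mapped to `false`.

```go
package main

import "fmt"

// Switch mirrors the shape of platform.ProcessIndicatorConfigSwitch.
type Switch string

const (
	SwitchEnabled  Switch = "Enabled"
	SwitchDisabled Switch = "Disabled"
)

// boolFromSwitch maps an Enabled/Disabled switch to a bool, returning an
// error for unknown values instead of silently treating them as false.
func boolFromSwitch(field string, v Switch, trueWhen Switch) (bool, error) {
	switch v {
	case SwitchEnabled, SwitchDisabled:
		return v == trueWhen, nil
	default:
		return false, fmt.Errorf("invalid %s setting %q", field, v)
	}
}

func main() {
	// Persistence=Disabled maps to noPersistence=true (the inverted knob).
	noPersistence, err := boolFromSwitch("spec.processIndicators.persistence", SwitchDisabled, SwitchDisabled)
	fmt.Println(noPersistence, err)

	// An unknown enum string is rejected rather than defaulting to false.
	_, err = boolFromSwitch("spec.processIndicators.excludeOpenshiftNs", Switch("Maybe"), SwitchEnabled)
	fmt.Println(err != nil)
}
```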

@erthalion erthalion force-pushed the feature/process-indicator-per-namespace branch from 44392d1 to e8583cf Compare April 21, 2026 13:56