ROX-32714: Enable FlattenImageData feature flag #20109
Conversation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This change is part of the following stack: Change managed by git-spice.
Skipping CI for Draft Pull Request.
Hey - I've found 4 issues, and left some high level feedback:
- In `migrate`, `upsertErrors` is a nil `*multierror.Error` when there are no errors, so calling `upsertErrors.ErrorOrNil()` without a nil check will panic; guard with `if upsertErrors != nil && upsertErrors.ErrorOrNil() != nil { ... }` before logging.
- In `populateContainerImageIDV2s`, `container.GetImage()` may return nil for some stored deployments, so accessing `img.GetIdV2()`/`img.GetName()` without a nil check risks a panic; consider skipping containers with nil `Image`.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `migrate`, `upsertErrors` is a nil `*multierror.Error` when there are no errors, so calling `upsertErrors.ErrorOrNil()` without a nil check will panic; guard with `if upsertErrors != nil && upsertErrors.ErrorOrNil() != nil { ... }` before logging.
- In `populateContainerImageIDV2s`, `container.GetImage()` may return nil for some stored deployments, so accessing `img.GetIdV2()`/`img.GetName()` without a nil check risks a panic; consider skipping containers with nil `Image`.
## Individual Comments
### Comment 1
<location path="migrator/migrations/m_223_to_m_224_populate_deployment_containers_imageidv2/migration_impl.go" line_range="32-41" />
<code_context>
+ db := database.PostgresDB
+ ctx := database.DBCtx
+
+ updatedCount := 0
+ var upsertErrors *multierror.Error
+
+ // Paginate by deployment ID to guarantee forward progress even if some
+ // deployments consistently fail to update.
+ lastID := uuid.Nil.String()
+ for {
+ batchUpdated, newLastID, err := migrateBatch(ctx, db, lastID, &upsertErrors)
+ if err != nil {
+ return err
+ }
+ if newLastID == "" {
+ break
+ }
+ lastID = newLastID
+ updatedCount += batchUpdated
+ }
+
+ log.Infof("Populated image_idv2 for %d deployments", updatedCount)
+ if upsertErrors.ErrorOrNil() != nil {
+ log.Errorf("Errors during migration: %v", upsertErrors.ErrorOrNil())
+ }
</code_context>
<issue_to_address>
**issue (bug_risk):** Protect against nil dereference when checking `upsertErrors` at the end of `migrate`.
`upsertErrors` starts as `var upsertErrors *multierror.Error` and is only set when `migrateBatch` appends an error. If no errors occur, it stays nil and `upsertErrors.ErrorOrNil()` will panic. Please guard this with a nil check, e.g.:
```go
if upsertErrors != nil {
if err := upsertErrors.ErrorOrNil(); err != nil {
log.Errorf("Errors during migration: %v", err)
}
}
```
</issue_to_address>
### Comment 2
<location path="migrator/migrations/m_223_to_m_224_populate_deployment_containers_imageidv2/migration_test.go" line_range="55-56" />
<code_context>
+ makeContainer("sha256:"+strings.Repeat("b", 64), "registry.example.com/img2:v1@sha256:"+strings.Repeat("b", 64)),
+ )
+
+ // Deployment 2: container without image_id — should NOT be migrated.
+ dep2 := makeDeployment("no-image-id",
+ makeContainer("", "registry.example.com/img3:latest"),
+ )
</code_context>
<issue_to_address>
**suggestion (testing):** Add a case for containers with image IDs but missing/empty full image names to cover `newImageV2ID` edge conditions
Current tests cover containers needing migration, missing `image_id`, already-migrated containers, and mixed cases. What’s missing is a DB-backed deployment where `image_id` is set but `image_name_fullname` is empty or absent. Please add such a deployment and assert that both the column and the blob remain unchanged, so we verify we don’t start generating IDs for partially populated image records.
</issue_to_address>
### Comment 3
<location path="migrator/migrations/m_223_to_m_224_populate_deployment_containers_imageidv2/migration_test.go" line_range="60-64" />
<code_context>
+ makeContainer("", "registry.example.com/img3:latest"),
+ )
+
+ // Deployment 3: container that already has image_idv2 — should NOT be migrated.
+ dep3FullName := "registry.example.com/img4:v1@sha256:" + strings.Repeat("c", 64)
+ dep3Digest := "sha256:" + strings.Repeat("c", 64)
+ existingIDV2 := newImageV2ID(dep3FullName, dep3Digest)
+ dep3 := makeDeployment("already-migrated",
+ &storage.Container{
+ Image: &storage.ContainerImage{
</code_context>
<issue_to_address>
**suggestion (testing):** Consider asserting that the serialized blob for already-migrated deployments preserves the existing IdV2 values
For `dep3` you already assert that `image_idv2` retains `existingIDV2`. Since the migration also rewrites the serialized blob, it’d be good to add a blob-level assertion (e.g., `verifyBlobIDV2(dep3)` that checks equality with `existingIDV2` here) to ensure the migration never overwrites previously-correct IdV2 values in the serialized deployment.
</issue_to_address>
### Comment 4
<location path="migrator/migrations/m_223_to_m_224_populate_deployment_containers_imageidv2/migration_impl.go" line_range="27" />
<code_context>
+// deployments.serialized proto blob. Updating the blob is necessary because
+// the platform reprocessor reads the blob and upserts back — if the blob
+// doesn't have id_v2, the reprocessor clobbers the column value.
+func migrate(database *types.Databases) error {
+ pgutils.CreateTableFromModel(database.DBCtx, database.GormDB, schema.CreateTableDeploymentsStmt)
+ db := database.PostgresDB
</code_context>
<issue_to_address>
**issue (complexity):** Consider returning the per-batch non-fatal errors from `migrateBatch` instead of mutating a `**multierror.Error` to simplify error flow and ownership.
You can simplify the error aggregation without changing behavior by removing the `**multierror.Error` and letting `migrateBatch` return its non-fatal errors directly.
### Key change: return batch errors instead of mutating a double pointer
```go
// Before
func migrate(database *types.Databases) error {
// ...
var upsertErrors *multierror.Error
lastID := uuid.Nil.String()
for {
batchUpdated, newLastID, err := migrateBatch(ctx, db, lastID, &upsertErrors)
if err != nil {
return err
}
if newLastID == "" {
break
}
lastID = newLastID
updatedCount += batchUpdated
}
log.Infof("Populated image_idv2 for %d deployments", updatedCount)
if upsertErrors.ErrorOrNil() != nil {
log.Errorf("Errors during migration: %v", upsertErrors.ErrorOrNil())
}
return nil
}
```
Change `migrate` to accumulate per-batch non-fatal errors returned by `migrateBatch`:
```go
func migrate(database *types.Databases) error {
pgutils.CreateTableFromModel(database.DBCtx, database.GormDB, schema.CreateTableDeploymentsStmt)
db := database.PostgresDB
ctx := database.DBCtx
updatedCount := 0
var upsertErrors *multierror.Error
lastID := uuid.Nil.String()
for {
batchUpdated, newLastID, batchErrs, err := migrateBatch(ctx, db, lastID)
if err != nil {
return err
}
upsertErrors = multierror.Append(upsertErrors, batchErrs)
if newLastID == "" {
break
}
lastID = newLastID
updatedCount += batchUpdated
}
log.Infof("Populated image_idv2 for %d deployments", updatedCount)
if err := upsertErrors.ErrorOrNil(); err != nil {
log.Errorf("Errors during migration: %v", err)
}
return nil
}
```
And simplify `migrateBatch`’s signature and internal error handling:
```go
// Before
func migrateBatch(ctx context.Context, db postgres.DB, lastID string, upsertErrors **multierror.Error) (int, string, error) {
// ...
if err := rows.Scan(&d.id, &d.serialized); err != nil {
*upsertErrors = multierror.Append(*upsertErrors, fmt.Errorf("scanning deployment row: %w", err))
continue
}
// ...
if err := d.UnmarshalVT(dep.serialized); err != nil {
*upsertErrors = multierror.Append(*upsertErrors, fmt.Errorf("unmarshal deployment %s: %w", dep.id, err))
continue
}
// ...
}
```
Change to:
```go
func migrateBatch(ctx context.Context, db postgres.DB, lastID string) (int, string, *multierror.Error, error) {
var batchErrs *multierror.Error
rows, err := db.Query(ctx, `
SELECT d.id, d.serialized
FROM deployments d
WHERE d.id > $2::uuid AND EXISTS (
SELECT 1 FROM deployments_containers dc
WHERE dc.deployments_id = d.id
AND dc.image_id IS NOT NULL AND dc.image_id != ''
AND dc.image_name_fullname IS NOT NULL AND dc.image_name_fullname != ''
AND (dc.image_idv2 IS NULL OR dc.image_idv2 = '')
)
ORDER BY d.id
LIMIT $1`, batchSize, lastID)
if err != nil {
return 0, "", nil, fmt.Errorf("querying deployments: %w", err)
}
defer rows.Close()
var deps []depRow
for rows.Next() {
var d depRow
if err := rows.Scan(&d.id, &d.serialized); err != nil {
batchErrs = multierror.Append(batchErrs, fmt.Errorf("scanning deployment row: %w", err))
continue
}
deps = append(deps, d)
}
if err := rows.Err(); err != nil {
return 0, "", nil, fmt.Errorf("iterating deployment rows: %w", err)
}
if len(deps) == 0 {
return 0, "", batchErrs, nil
}
newLastID := deps[len(deps)-1].id
conn, err := db.Acquire(ctx)
if err != nil {
return 0, "", batchErrs, fmt.Errorf("acquiring connection: %w", err)
}
defer conn.Release()
batch := &pgx.Batch{}
batchDeploymentCount := 0
for _, dep := range deps {
d := &storage.Deployment{}
if err := d.UnmarshalVT(dep.serialized); err != nil {
batchErrs = multierror.Append(batchErrs, fmt.Errorf("unmarshal deployment %s: %w", dep.id, err))
continue
}
blobChanged := populateContainerImageIDV2s(d)
if blobChanged {
newSerialized, err := d.MarshalVT()
if err != nil {
batchErrs = multierror.Append(batchErrs, fmt.Errorf("marshal deployment %s: %w", dep.id, err))
continue
}
batch.Queue("UPDATE deployments SET serialized = $1 WHERE id = $2",
newSerialized, pgutils.NilOrUUID(dep.id))
}
for idx, container := range d.GetContainers() {
if idv2 := container.GetImage().GetIdV2(); idv2 != "" {
batch.Queue(
"UPDATE deployments_containers SET image_idv2 = $1 WHERE deployments_id = $2 AND idx = $3",
idv2, pgutils.NilOrUUID(dep.id), idx,
)
}
}
batchDeploymentCount++
}
if batch.Len() > 0 {
results := conn.SendBatch(ctx, batch)
for i := 0; i < batch.Len(); i++ {
if _, err := results.Exec(); err != nil {
batchErrs = multierror.Append(batchErrs, fmt.Errorf("batch exec: %w", err))
}
}
if err := results.Close(); err != nil {
return 0, "", batchErrs, fmt.Errorf("closing batch results: %w", err)
}
}
return batchDeploymentCount, newLastID, batchErrs, nil
}
```
This keeps:
- The same pagination and updated-count behavior.
- The same distinction between fatal errors (returned `error`) and non-fatal per-row/batch errors (aggregated via `multierror`).
- But removes the double pointer and makes error ownership and flow explicit at the call site.
</issue_to_address>
Hey - I've found 4 issues, and left some high level feedback:
- The `updatedCount` and "Populated image_idv2 for %d deployments" log in `migrate` are incremented per processed deployment, even when a deployment produces no DB updates due to marshal/unmarshal errors; consider tracking and logging a separate counter of successfully updated deployments (or containers) so the log reflects actual changes applied.
- The `batchSize` global is mutated in tests to drive different batch behaviors; if these tests are ever run in parallel or other code starts depending on this package, it may be safer to pass batch size as a parameter (or configuration) to `migrate`/`migrateBatch` instead of relying on a mutable package-level variable.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `updatedCount` and "Populated image_idv2 for %d deployments" log in `migrate` are incremented per processed deployment, even when a deployment produces no DB updates due to marshal/unmarshal errors; consider tracking and logging a separate counter of successfully updated deployments (or containers) so the log reflects actual changes applied.
- The `batchSize` global is mutated in tests to drive different batch behaviors; if these tests are ever run in parallel or other code starts depending on this package, it may be safer to pass batch size as a parameter (or configuration) to `migrate`/`migrateBatch` instead of relying on a mutable package-level variable.
## Individual Comments
### Comment 1
<location path="migrator/migrations/m_223_to_m_224_populate_deployment_containers_imageidv2/migration_impl.go" line_range="32-41" />
<code_context>
+ db := database.PostgresDB
+ ctx := database.DBCtx
+
+ updatedCount := 0
+ var upsertErrors *multierror.Error
+
+ // Paginate by deployment ID to guarantee forward progress even if some
+ // deployments consistently fail to update.
+ lastID := uuid.Nil.String()
+ for {
+ batchUpdated, newLastID, err := migrateBatch(ctx, db, lastID, &upsertErrors)
+ if err != nil {
+ return err
+ }
+ if newLastID == "" {
+ break
+ }
+ lastID = newLastID
+ updatedCount += batchUpdated
+ }
+
+ log.Infof("Populated image_idv2 for %d deployments", updatedCount)
+ if upsertErrors.ErrorOrNil() != nil {
+ log.Errorf("Errors during migration: %v", upsertErrors.ErrorOrNil())
+ }
</code_context>
<issue_to_address>
**issue (bug_risk):** Guard against nil `upsertErrors` before calling `ErrorOrNil` to avoid a panic when there were no errors.
In `migrate`, `upsertErrors` is declared as `var upsertErrors *multierror.Error` and only modified via `multierror.Append`. If no errors are appended, it remains `nil`, so calling `upsertErrors.ErrorOrNil()` will panic. Please either guard the call, e.g. `if upsertErrors != nil && upsertErrors.ErrorOrNil() != nil { ... }`, or normalize it once (e.g. `if err := multierror.Flatten(upsertErrors); err != nil { ... }`).
</issue_to_address>
### Comment 2
<location path="migrator/migrations/m_223_to_m_224_populate_deployment_containers_imageidv2/migration_impl.go" line_range="159-160" />
<code_context>
+// and full name but no IdV2. Returns true if any container was updated.
+func populateContainerImageIDV2s(deployment *storage.Deployment) bool {
+ changed := false
+ for _, container := range deployment.GetContainers() {
+ img := container.GetImage()
+ if img.GetIdV2() != "" {
+ continue
</code_context>
<issue_to_address>
**issue (bug_risk):** Handle nil `Image` on containers to avoid a potential panic when assigning `IdV2`.
In `populateContainerImageIDV2s`, `container.GetImage()` can return nil. While `img.GetIdV2()` is safe on a nil receiver, `img.IdV2 = idV2` will panic if `img` is nil. Please add a nil check (e.g. `if img == nil { continue }`) before using `img` to avoid crashes when containers lack image data.
</issue_to_address>
### Comment 3
<location path="migrator/migrations/m_223_to_m_224_populate_deployment_containers_imageidv2/migration_test.go" line_range="158-167" />
<code_context>
+}
+
+// verifyBlobIDV2 checks that the serialized blob has correct id_v2 on all containers.
+func (s *migrationTestSuite) verifyBlobIDV2(dep *storage.Deployment) {
+ var serialized []byte
+ err := s.db.QueryRow(s.ctx,
+ "SELECT serialized FROM deployments WHERE id = $1",
+ pgutils.NilOrUUID(dep.GetId())).Scan(&serialized)
+ s.Require().NoError(err)
+
+ updated := &storage.Deployment{}
+ s.Require().NoError(updated.UnmarshalVT(serialized))
+
+ for i, c := range updated.GetContainers() {
+ expected := expectedIDV2(c)
+ s.Equal(expected, c.GetImage().GetIdV2(),
</code_context>
<issue_to_address>
**issue (testing):** verifyBlobIDV2 computes expectations from the mutated blob, which could mask bugs that change image full name or digest
Since `expectedIDV2` reads the full name and digest from `updated`, this check will still pass if the migration incorrectly changes `Image.Id` or `Image.Name.FullName`—as long as `IdV2` is consistent with those mutated fields. To ensure the test catches such regressions, derive the expected IdV2 values from the original `dep` before unmarshalling and compare them to the corresponding containers in `updated` (e.g., by pairing containers by index). This way the test verifies that only `IdV2` changes during migration.
</issue_to_address>
### Comment 4
<location path="migrator/migrations/m_223_to_m_224_populate_deployment_containers_imageidv2/migration_impl.go" line_range="27" />
<code_context>
+// deployments.serialized proto blob. Updating the blob is necessary because
+// the platform reprocessor reads the blob and upserts back — if the blob
+// doesn't have id_v2, the reprocessor clobbers the column value.
+func migrate(database *types.Databases) error {
+ pgutils.CreateTableFromModel(database.DBCtx, database.GormDB, schema.CreateTableDeploymentsStmt)
+ db := database.PostgresDB
</code_context>
<issue_to_address>
**issue (complexity):** Consider simplifying error handling in `migrate`/`migrateBatch` by returning a `*multierror.Error` and centralizing non-fatal error accumulation to reduce indirection and clarify responsibilities.
You can simplify this without changing behavior by (1) removing the pointer-to-pointer error accumulator and (2) slightly separating concerns inside `migrateBatch`.
### 1. Remove `**multierror.Error`
Instead of passing `**multierror.Error` into `migrateBatch`, let `migrateBatch` return a `*multierror.Error` and aggregate it in `migrate`. This keeps ownership clearly in `migrate` and removes the double indirection.
```go
// migrate
func migrate(database *types.Databases) error {
pgutils.CreateTableFromModel(database.DBCtx, database.GormDB, schema.CreateTableDeploymentsStmt)
db := database.PostgresDB
ctx := database.DBCtx
updatedCount := 0
var upsertErrors *multierror.Error
lastID := uuid.Nil.String()
for {
batchUpdated, newLastID, batchErrs, err := migrateBatch(ctx, db, lastID)
if err != nil {
return err
}
upsertErrors = multierror.Append(upsertErrors, batchErrs)
if newLastID == "" {
break
}
lastID = newLastID
updatedCount += batchUpdated
}
log.Infof("Populated image_idv2 for %d deployments", updatedCount)
if err := upsertErrors.ErrorOrNil(); err != nil {
log.Errorf("Errors during migration: %v", err)
}
return nil
}
```
```go
// migrateBatch
func migrateBatch(ctx context.Context, db postgres.DB, lastID string) (int, string, *multierror.Error, error) {
var upsertErrors *multierror.Error
rows, err := db.Query(ctx, `...`, batchSize, lastID)
if err != nil {
return 0, "", nil, fmt.Errorf("querying deployments: %w", err)
}
defer rows.Close()
var deps []depRow
for rows.Next() {
var d depRow
if err := rows.Scan(&d.id, &d.serialized); err != nil {
upsertErrors = multierror.Append(upsertErrors, fmt.Errorf("scanning deployment row: %w", err))
continue
}
deps = append(deps, d)
}
if err := rows.Err(); err != nil {
return 0, "", nil, fmt.Errorf("iterating deployment rows: %w", err)
}
if len(deps) == 0 {
return 0, "", upsertErrors, nil
}
newLastID := deps[len(deps)-1].id
conn, err := db.Acquire(ctx)
if err != nil {
return 0, "", upsertErrors, fmt.Errorf("acquiring connection: %w", err)
}
defer conn.Release()
batch := &pgx.Batch{}
batchDeploymentCount := 0
for _, dep := range deps {
d := &storage.Deployment{}
if err := d.UnmarshalVT(dep.serialized); err != nil {
upsertErrors = multierror.Append(upsertErrors, fmt.Errorf("unmarshal deployment %s: %w", dep.id, err))
continue
}
blobChanged := populateContainerImageIDV2s(d)
if blobChanged {
newSerialized, err := d.MarshalVT()
if err != nil {
upsertErrors = multierror.Append(upsertErrors, fmt.Errorf("marshal deployment %s: %w", dep.id, err))
continue
}
batch.Queue("UPDATE deployments SET serialized = $1 WHERE id = $2",
newSerialized, pgutils.NilOrUUID(dep.id))
}
for idx, container := range d.GetContainers() {
if idv2 := container.GetImage().GetIdV2(); idv2 != "" {
batch.Queue(
"UPDATE deployments_containers SET image_idv2 = $1 WHERE deployments_id = $2 AND idx = $3",
idv2, pgutils.NilOrUUID(dep.id), idx,
)
}
}
batchDeploymentCount++
}
if batch.Len() > 0 {
results := conn.SendBatch(ctx, batch)
for i := 0; i < batch.Len(); i++ {
if _, err := results.Exec(); err != nil {
upsertErrors = multierror.Append(upsertErrors, fmt.Errorf("batch exec: %w", err))
}
}
if err := results.Close(); err != nil {
return 0, "", upsertErrors, fmt.Errorf("closing batch results: %w", err)
}
}
return batchDeploymentCount, newLastID, upsertErrors, nil
}
```
### 2. Centralize non-fatal error recording
To make the “log-and-continue” intent explicit and avoid repetition, add a small helper:
```go
func appendNonFatal(acc *multierror.Error, msg string, err error) *multierror.Error {
if err == nil {
return acc
}
return multierror.Append(acc, fmt.Errorf("%s: %w", msg, err))
}
```
Usage inside `migrateBatch`:
```go
if err := rows.Scan(&d.id, &d.serialized); err != nil {
upsertErrors = appendNonFatal(upsertErrors, "scanning deployment row", err)
continue
}
if err := d.UnmarshalVT(dep.serialized); err != nil {
upsertErrors = appendNonFatal(upsertErrors, fmt.Sprintf("unmarshal deployment %s", dep.id), err)
continue
}
newSerialized, err := d.MarshalVT()
if err != nil {
upsertErrors = appendNonFatal(upsertErrors, fmt.Sprintf("marshal deployment %s", dep.id), err)
continue
}
```
This preserves all existing behavior (same non-fatal vs fatal boundaries and `multierror` semantics) while reducing indirection and clarifying responsibilities.
</issue_to_address>
/retest
🚀 Build Images Ready: Images are ready for commit 65424ca. To use with deploy scripts: `export MAIN_IMAGE_TAG=4.11.x-709-g65424ca295`
Codecov Report: ❌ Patch coverage is
Additional details and impacted files:

```
@@                              Coverage Diff                               @@
##  ROX-34137/fix-tagless-images-names-under-cve-and-deployment-single-pages  #20109  +/- ##
============================================================================================================
- Coverage     49.70%   49.39%   -0.31%
============================================================================================================
  Files          2766     2768       +2
  Lines        209356   209471     +115
============================================================================================================
- Hits         104055   103466     -589
- Misses        97611    98446     +835
+ Partials       7690     7559     -131
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
```go
// Test GetContainerImageViews
responses, err := deploymentDS.GetContainerImageViews(ctx, pkgSearch.EmptyQuery())
q1 := pkgSearch.NewQueryBuilder().AddRegexes(pkgSearch.ImageID, ".+").ProtoQuery()
```
OK, I'm curious, why did you have to do this?
That is the exact query used by the reprocessor here: https://github.com/stackrox/stackrox/blob/master/central/reprocessor/reprocessor.go#L68
And at some point I wanted to test what the SQL looks like. The expected values did not change because the test data has no deployments with an empty ImageID.
I can roll that back to how it was before though. It definitely is something I forgot to revert.
wow. Match non-empty because, a long time ago, we decided to make nonexistent string values "" in columns instead of allowing them to be NULL.
I commented because it looked odd. Do what you think is best for the test.
Hey - I've left some high level feedback:
- In `populateContainerImageIDV2s` (and the related helper in the tests), you call `img.GetName().GetFullName()` without checking if `GetName()` returns nil; consider adding a nil check (or using a safer accessor) to avoid panics on deployments with images missing a name.
- The migration accumulates `upsertErrors` with `multierror` but always returns nil from `migrate`; if partial failures should be surfaced (or at least fail CI), consider returning `upsertErrors.ErrorOrNil()` instead of only logging, or explicitly document why partial failure is acceptable.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `populateContainerImageIDV2s` (and the related helper in the tests), you call `img.GetName().GetFullName()` without checking if `GetName()` returns nil; consider adding a nil check (or using a safer accessor) to avoid panics on deployments with images missing a name.
- The migration accumulates `upsertErrors` with `multierror` but always returns nil from `migrate`; if partial failures should be surfaced (or at least fail CI), consider returning `upsertErrors.ErrorOrNil()` instead of only logging, or explicitly document why partial failure is acceptable.
```go
// migrateBatch processes one page of deployments starting after lastID.
// Returns the number of deployments updated, the last deployment ID processed
// (empty if no rows found), and any fatal error.
func migrateBatch(ctx context.Context, db postgres.DB, lastID string, upsertErrors **multierror.Error) (int, string, error) {
```
You should probably break this thing up. There is an outer transaction, but you may want to consider using transactions in the migration as well if some of these updates need to be grouped.
Description

Enable the `ROX_FLATTEN_IMAGE_DATA` feature flag by default and for CI.

A new migration (`m_223_to_m_224`) populates the `ImageIDV2` column in deployment containers. It reads every deployment row, computes a deterministic UUIDv5 for each container image reference, writes the result back, and updates the serialized blob so the value is available immediately after upgrade without waiting for reprocessing.

The image V1 datastore singleton continues to return a non-nil instance even when `FlattenImageData` is enabled because the V1 datastore is still needed to migrate images from the `images` table to the `images_v2` table during reprocessing.

Other changes to JSON fixture and test files update expected values to make tests compatible with the V2 image model.
User-facing documentation
Testing and quality
Automated testing
How I validated my change