Summary
The current implementation cycles hatch patterns per patch, which breaks grouped bar semantics because each dataset produces multiple patches. This causes hatch patterns to repeat incorrectly within a dataset.
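To illustrate the failure mode, here is a minimal standalone sketch (plain Python, not the actual plotting code) contrasting per-patch cycling with dataset-level assignment. The counts and patterns are made up for the example:

```python
from itertools import cycle

# 2 datasets, 3 bars each -> 6 patches total
hatch_spec = ["//", ".."]   # one intended pattern per dataset
patches_per_dataset = 3
num_datasets = 2

# Per-patch cycling (current behavior): the pattern alternates across
# patches, so bars within a single dataset end up with different hatches.
per_patch = [h for h, _ in zip(cycle(hatch_spec),
                               range(num_datasets * patches_per_dataset))]
# -> ['//', '..', '//', '..', '//', '..']

# Dataset-level assignment (proposed): every patch in a dataset shares
# that dataset's hatch.
per_dataset = [hatch_spec[d] for d in range(num_datasets)
               for _ in range(patches_per_dataset)]
# -> ['//', '//', '//', '..', '..', '..']
```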
Proposed fix
Replace the patch-level hatch cycling with dataset-level normalization.
Specifically:
- Determine the number of datasets (num_datasets = len(heights)).
- Normalize hatch to one pattern per dataset.
- Apply that hatch consistently to all patches belonging to the same dataset.
This aligns hatch behavior with existing color/label broadcasting in grouped_bar and resolves the test failures caused by per-patch cycling.
```python
# Normalize hatches: one hatch per dataset (no per-patch cycling)
num_datasets = len(heights)
user_hatch = kwargs.get("hatch", None)
if user_hatch is None:
    # No hatch specified → all datasets get None
    hatches = [None] * num_datasets
else:
    # Reject a bare string: it would iterate per character instead of
    # providing one pattern per dataset.
    if isinstance(user_hatch, str):
        raise TypeError("hatch must be a list-like of strings, not a single string")
    try:
        hatch_list = list(user_hatch)
    except TypeError:
        raise TypeError("hatch must be list-like") from None
    if len(hatch_list) < num_datasets:
        raise ValueError(
            f"Expected at least {num_datasets} hatch patterns for {num_datasets} datasets"
        )
    # Use exactly one hatch per dataset
    hatches = hatch_list[:num_datasets]

# Iterator used by the bar-creation loop
hatches = iter(hatches)
```
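For context, a hedged sketch of how the bar-creation loop would consume this iterator: one next() call per dataset, with that single pattern applied to all of the dataset's patches. The heights structure and the applied list are illustrative stand-ins, not the real patch objects:

```python
heights = [[1, 2, 3], [4, 5, 6]]   # two datasets, three bars each
hatches = iter(["//", ".."])       # result of the normalization above

applied = []
for dataset in heights:
    hatch = next(hatches)          # exactly one pattern per dataset
    for height in dataset:
        # stand-in for patch.set_hatch(hatch) on each bar's patch
        applied.append((height, hatch))
```

Every patch within a dataset receives the same hatch, which is what the grouped-bar semantics require.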