Assuming that you mean for A and B to be values, you can just use itertools.groupby, provided your grouping logic is to place each contiguous run of a value into its own group. Concretely (fixing a bracket and comma error in your example code, and adding some dummy values for A and B):
import numpy as np
from itertools import groupby

A = 1.0
B = 2.0
Foo = np.array([[0, A], [1, A], [2, A], [3, B], [4, B],
                [5, A], [6, A], [7, B], [8, B], [9, B], [10, A]])

groups = [np.array(list(v)) for k, v in groupby(Foo, lambda x: x[1])]
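As a quick sanity check, here is that grouping run end to end; the run lengths below follow from the dummy data, which contains five contiguous runs (three As, two Bs, two As, three Bs, one A):

```python
import numpy as np
from itertools import groupby

A = 1.0
B = 2.0
Foo = np.array([[0, A], [1, A], [2, A], [3, B], [4, B],
                [5, A], [6, A], [7, B], [8, B], [9, B], [10, A]])

# groupby keys each row by its second column and yields one group
# per contiguous run of equal keys.
groups = [np.array(list(v)) for k, v in groupby(Foo, lambda x: x[1])]

print([len(g) for g in groups])  # → [3, 2, 2, 3, 1]
```

Note that groupby only merges adjacent equal keys, which is exactly what we want here; each group comes back as a 2-D array of the original rows.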
Now what you call bar will be groups[0], and so on. If you want to give them names automatically, it's advisable not to attempt this at the top level with some kind of locals() or globals() trickery; instead, just list out the names and use a dict:
names = ['bar', 'baz', 'qux', 'arr', 'wiz']
named_groups = dict(zip(names, groups))
Now named_groups['bar'] returns what you used to just call bar.
Alternatively, if you can guarantee the precise number of groups, you can use tuple unpacking to name them all in one step:
(bar, baz, qux, arr, wiz) = [np.array(list(v))
                             for k, v in groupby(Foo, lambda x: x[1])]
(Note: I've never found a definitive answer on what PEP 8 recommends when you need to unpack many, possibly verbosely named, tuple elements on the left side of =.)
This still lets you bind the groups to top-level variable names, but rightly forces you to be explicit about how many such variables there are, avoiding the bad practice of trying to create variables dynamically on the fly.
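To illustrate that safety property with a smaller dummy Foo (the array and names here are made up for the demonstration): if the number of names doesn't match the number of groups, the unpacking fails loudly with a ValueError rather than silently misnaming anything:

```python
import numpy as np
from itertools import groupby

A = 1.0
B = 2.0
Foo = np.array([[0, A], [1, A], [2, B]])  # only two contiguous runs

try:
    # Three names for two groups: the mismatch raises immediately.
    bar, baz, qux = [np.array(list(v))
                     for k, v in groupby(Foo, lambda x: x[1])]
except ValueError as e:
    print(e)
```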