I'd need more details to give you a better answer. What language are you using? What tempo range are you targeting? And what audio interface will you be programming against? (That's important to know because it determines the latency you'll be dealing with.)
Also, is it just a drum sequencer, or something more complex? How many "instruments"/"voices" are you planning to support? If you'll be supporting fewer than 32 voices, you can use a single array of [int(tick), int(voices)] pairs, where each voice is one bit in the 32-bit voices number. Then, to determine whether a voice is playing, you just "&" the voice's flag against the voices int for that tick. This avoids the array sorting/copying/building entirely.
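The bitmask scheme above could be sketched like this (the voice names and pattern data are hypothetical; the language is just for illustration since the question doesn't say which one you're using):

```python
# Each voice is one bit in a 32-bit integer, so up to 32 voices fit in one int.
KICK  = 1 << 0
SNARE = 1 << 1
HAT   = 1 << 2

# One (tick, voices) pair per tick that has anything playing.
pattern = [
    (0, KICK | HAT),
    (2, HAT),
    (4, SNARE | HAT),
    (6, HAT),
]

def is_playing(voices: int, voice_flag: int) -> bool:
    """Test a voice with a single '&' -- no sorting, copying, or rebuilding."""
    return (voices & voice_flag) != 0
```

Adding or removing a voice at a tick is likewise a single `|=` or `&= ~flag` on the int.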
Latency is an important issue to understand here. If you have a tempo of 240bpm for instance, and four "ticks" per beat (really, we're talking about one measure with each beat subdivided into sixteenth notes):
- There are 4 beats per second (240 beats per minute / 60 seconds)
- Each beat occurs every 250 milliseconds
- Each "tick" occurs every 62.5 milliseconds
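That arithmetic generalizes to any tempo and subdivision; a minimal sketch (function names are my own):

```python
def beat_interval_ms(bpm: float) -> float:
    """Milliseconds per beat: 60,000 ms in a minute divided by beats per minute."""
    return 60_000.0 / bpm

def tick_interval_ms(bpm: float, ticks_per_beat: int) -> float:
    """Milliseconds per tick: the beat interval split into equal subdivisions."""
    return beat_interval_ms(bpm) / ticks_per_beat
```

At 240 bpm with four ticks per beat this gives the 250 ms and 62.5 ms figures above.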
If the audio interface has high latency (for example, shared-mode WASAPI on Windows Vista and later has a latency of around 30 ms), you will have different "windows" of events that must be generated ahead of time: every tick that falls inside the upcoming latency window needs to be scheduled before that window starts.
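One way to think about it: given the tick interval and the interface latency, you can compute how many ticks you have to schedule ahead on each pass. This is a hedged sketch, not a real audio-callback implementation:

```python
import math

def ticks_per_window(latency_ms: float, tick_ms: float) -> int:
    """Upper bound on how many sequencer ticks can land inside one latency
    window -- i.e. how far ahead events must be generated each cycle."""
    return math.ceil(latency_ms / tick_ms)
```

With a 30 ms latency and 62.5 ms ticks, at most one tick falls in any window; a higher-latency interface or a faster tick rate forces you to batch several ticks ahead.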
If you're processing MIDI events, this becomes even more important, because MIDI events can arrive within individual ticks.
Most DAWs (Digital Audio Workstations) I have worked with think of the world in two types of data: audio data and MIDI data. Audio data tends to be more "realtime" (or as realtime as you can get, hence the importance of sub-3 ms latencies). MIDI is still fairly fast-paced. Eventually, you'll likely be thinking in terms of MIDI data.
However, the best way to get started on a project like this is to build a very simple drum sequencer. Take four drums and the approach you're already using, and go from there :). Good luck!