How do you approach designing and implementing complex UI interaction animations?
(I'm not talking about specific languages and libraries like jQuery or UIKit, unless they force you into a specific way of thinking about managing interdependent animations, which I'm interested in.)
Consider a deceptively “simple” task like designing and programming the iOS home screen. The amount of hidden complexity is astounding.
Just a few things I noticed about the interface:
- When you barely touch an icon, its opacity changes but the size change is delayed.
- If you drag an app between two other apps, there is a noticeable delay before the other apps rearrange to move the free space. So if you just keep moving an app across the screen, nothing happens until you settle.
- Rearrangement happens row by row: the row you hover over moves first, which triggers the next row in the chain, and so on up to the row where the free space previously was.
- If you drop an app, it settles into the now-free space, not wherever you happened to release it.
- If you hover an app over another app, a radial glow appears, blinks twice, and only then is a group created.
- If the group was created just to the right of the free space and is then discarded, it animates left to occupy the free space as it disappears.
I'm sure there is even more complexity here that I failed to notice.
Continuous Animations vs Discrete Actions
As a rough generalization, for each pair of (animation, user_action) in the same interface context, you need to decide what happens if user_action occurs while animation is already running.
In most cases, you can do one of the following (a sketch of these options as explicit policies follows the list):
- Cancel the animation;
- Change the animation on the fly;
- Ignore the action;
- Queue the action until the animation finishes.
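To make that concrete, here is a minimal sketch in TypeScript (every name is invented) of what I mean by recording those choices as explicit per-(animation, action) policies, rather than leaving them implicit in scattered callbacks:

```typescript
// Hypothetical names; the point is only that the policy is data, not control flow.
type InterruptPolicy = "cancel" | "retarget" | "ignore" | "queue";

interface PolicyTable {
  // Keyed by "animationName/actionName", e.g. "rearrange/dragEnded".
  [pair: string]: InterruptPolicy;
}

const homeScreenPolicies: PolicyTable = {
  "iconShrink/touchEnded": "cancel",   // lift the finger early: abort the shrink
  "rearrange/dragMoved":   "retarget", // keep animating toward the new slot
  "rearrange/dragEnded":   "queue",    // settle only after the rows finish moving
  "folderBlink/dragMoved": "cancel",   // moving away aborts group creation
};

function policyFor(animation: string, action: string): InterruptPolicy {
  return homeScreenPolicies[`${animation}/${action}`] ?? "ignore";
}
```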
But then there may be several actions during the animation, and you have to decide which to discard, which to queue, and whether to execute all queued actions or just the last one when the animation is over.
And if actions are queued to run when the animation finishes, but the animation itself is changed midway, you need to decide whether those queued actions still make sense or should be dropped.
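Even that much bookkeeping seems to want at least a queue of pending actions, a flush strategy (run them all, or only the latest), and a way to drop entries whose preconditions no longer hold once the animation is retargeted. A hypothetical sketch:

```typescript
type FlushStrategy = "all" | "latestOnly";

interface PendingAction {
  name: string;
  run: () => void;
  stillValid: () => boolean; // re-checked before running, e.g. "is the slot still free?"
}

class ActionQueue {
  private pending: PendingAction[] = [];

  constructor(private strategy: FlushStrategy) {}

  enqueue(action: PendingAction): void {
    this.pending.push(action);
  }

  // Call this when the running animation is retargeted mid-flight.
  pruneInvalid(): void {
    this.pending = this.pending.filter(a => a.stillValid());
  }

  // Call this from the animation's completion callback.
  flush(): void {
    const toRun =
      this.strategy === "latestOnly" ? this.pending.slice(-1) : this.pending;
    this.pending = [];
    for (const action of toRun) {
      if (action.stillValid()) action.run();
    }
  }
}
```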
If this sounds too theoretical, consider a real-world example: how do you deal with the user dragging an app downwards, waiting for rearrangement to begin, then immediately dragging the app back upwards and releasing it? How do you ensure the animation is smooth and believable in every possible case?
Right Tools for the Job
I find myself unable to keep even half of the possible scenarios in my head. As the expressiveness of the UI increases, the number of possible states begins to violently violate the 7±2 rule.
My question, therefore, is as follows:
How do you tame the complexity in designing and implementing animations?
I'm interested both in effective ways of thinking about the problem and in concrete means of solving it.
As an example, events and observers proved to be a very effective abstraction for most UIs.
But can you design and implement an iOS-like drag-n-drop screen relying on events as the main abstraction?
How tangled does the code have to be to accurately represent all possible states of the UI? Would it end up as an event handler that, when some boolean flag is true, attaches another handler to the event fired by the function that sets the flag back to false, unless yet another handler has already run before it?
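To caricature what I'm afraid of (Node's EventEmitter stands in for any event system here; the event names and the flag are made up):

```typescript
import { EventEmitter } from "events";

const bus = new EventEmitter();
let rearranging = false; // flipped by whoever finishes the row animation

function finishRearrange(): void {
  rearranging = false;
  bus.emit("rearrangeFinished");
}

// The tangle in question: a handler that, when the flag is true, attaches
// another handler to the event fired by the function that clears the flag,
// unless some other handler got there first.
bus.on("dragMoved", () => {
  if (rearranging) {
    bus.once("rearrangeFinished", () => {
      // Re-dispatch the move we deferred... hoping nothing changed meanwhile.
      bus.emit("dragMoved");
    });
  } else {
    rearranging = true;
    // start the row animation, which eventually calls finishRearrange()
  }
});

bus.on("dragEnded", () => {
  // Did the deferred "dragMoved" above already run? Who knows.
  bus.removeAllListeners("rearrangeFinished");
});
```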
“Have you never heard of classes?” you may wonder. Why, I have, but there is just too much state that these classes will want to share.
To sum up, I'm looking for language-agnostic (although probably language- or framework-inspired) techniques for managing complex, interdependent, cancelable animations happening in sequence or simultaneously, and for describing how they react to user actions.
(All of this assumes that I don't have to program the animations themselves, i.e., that I have access to a framework like jQuery or Core Animation that can animate(styles, callback) the thing for me, and that I can cancel a running animation.)
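For reference, the contract I'm assuming from such a framework is roughly this (invented TypeScript signatures, loosely in the spirit of jQuery's animate/stop):

```typescript
// Assumed primitive (invented signature): tween `element` toward `styles`,
// then call `onComplete` exactly once, unless cancel() was called first.
interface AnimationHandle {
  cancel(): void;
}

declare function animate(
  element: unknown,
  styles: Record<string, number | string>,
  onComplete: () => void
): AnimationHandle;
```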
Data structures, design patterns, and DSLs are all welcome if they help with the problem.