Before you start changing anything, run Instruments and determine where your bottlenecks are. It's very easy to chase the wrong things.
I'm very suspicious of that 60s number. That's a huge amount of time and suggests that you're actually doing this filtering repeatedly. I'm betting you do it once per visible row or something like that. That would explain why it's so much faster the second time. 300k is a lot, but it really isn't that much. Computers are very fast, and a minute is a very long time.
That said, there are some obvious problems with your existing filter. It recomputes prefix.lowercased()
300k times, which is unnecessary. You can pull that out:
let lowerPrefix = prefix.lowercased()
filteredBooks = books.filter { $0.title.lowercased().hasPrefix(lowerPrefix) }
Similarly, you're recomputing all of title.lowercased()
for every search, and you almost never need all of it. You might cache the lowercased versions, but you also might just lowercase what you need:
let lowerPrefix = prefix.lowercased()
let prefixCount = prefix.count // This probably isn't actually worth caching
filteredBooks = books.filter { $0.title.prefix(prefixCount).lowercased() == lowerPrefix }
I doubt you'll get a lot of benefit this way, but it's the kind of thing to try before reaching for novel data structures.
That said, if the only kind of search you need is a prefix search, a trie is designed precisely for that problem. And yes, binary search is also worth considering, if you can keep your list sorted by title and prefix search is the only thing you care about.
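To make the trie concrete, here's a rough sketch (my own illustration, not your code; TitleTrie and its method names are made up). It stores the index of every book at each node along its title's path, so a prefix lookup is just a walk down the prefix followed by returning the stored list:

final class TitleTrie {
    private final class Node {
        var children: [Character: Node] = [:]
        var bookIndices: [Int] = []    // every book whose lowercased title passes through this node
    }
    private let root = Node()

    func insert(title: String, bookIndex: Int) {
        var node = root
        for ch in title.lowercased() {
            if node.children[ch] == nil {
                node.children[ch] = Node()
            }
            node = node.children[ch]!
            node.bookIndices.append(bookIndex)
        }
    }

    // Indices of all books whose title starts with `prefix`; cost is proportional
    // to the prefix length plus the size of the result.
    func indices(withPrefix prefix: String) -> [Int] {
        var node = root
        for ch in prefix.lowercased() {
            guard let next = node.children[ch] else { return [] }
            node = next
        }
        return node.bookIndices
    }
}

Storing the full index list at every node trades memory for lookup speed; a leaner variant stores indices only at word ends and walks the subtree on lookup. The sorted-array alternative is simpler still: keep the titles lowercased and sorted, binary search for the first title at or after the prefix, and scan forward while hasPrefix holds; the matches form one contiguous range.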
While it won't help your first search, keep in mind that your second search can often be much faster by caching recent searches. In particular, if you've searched "a" already, then you know that "ap" will be a subset of that, so you should use that fact. Similarly, it is very common for these kinds of searches to repeat themselves when users make typos and backspace. So saving some recent results can be a big win, at the cost of memory.
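The subset part of that is easy to sketch. Assuming the search lives in some object that owns the full array (Book here is just a stand-in for your real type, and BookSearch is a made-up name):

struct Book {
    var title: String
}

final class BookSearch {
    private let books: [Book]
    private var lastPrefix: String?
    private(set) var filteredBooks: [Book]

    init(books: [Book]) {
        self.books = books
        self.filteredBooks = books
    }

    func filter(prefix: String) {
        let lowerPrefix = prefix.lowercased()
        // If the new prefix extends the previous one, the previous results are a
        // superset of the new ones, so narrow those instead of rescanning everything.
        let extendsLast = lastPrefix.map { lowerPrefix.hasPrefix($0) } ?? false
        let source = extendsLast ? filteredBooks : books
        filteredBooks = source.filter { $0.title.lowercased().hasPrefix(lowerPrefix) }
        lastPrefix = lowerPrefix
    }
}

Keeping a small dictionary of recent prefixes to results would cover the typo-and-backspace case as well, at the cost of memory.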
At these scales, memory allocations and copying can be a problem. Your Book type is on the order of 56 bytes:
MemoryLayout.stride(ofValue: Book()) // 56
(The size is the same, but stride is a bit more meaningful when you think about putting them in an array; it includes any padding between elements. In this case the padding is 0. But if you added a Bool property, you'd see the difference.)
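For example, with a made-up type:

struct Example {
    var pointer: UnsafeRawPointer?   // 8 bytes, 8-byte alignment
    var flag: Bool                   // 1 byte
}
MemoryLayout<Example>.size    // 9
MemoryLayout<Example>.stride  // 16 -- each array element gets padded out to the alignment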
The contents of strings don't have to be copied (if there's no mutation), so it doesn't really matter how long the strings are. But the metadata does have to be copied, and that adds up.
So a full copy of this array is on the order of 16 MB of "must copy" data. The largest subset you would expect is 10-15% (about 10% of English words start with the most common first letter, s, though titles might skew this some). That's still a couple of megabytes of copying per filter.
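The back-of-the-envelope math, if you want to check it against your own numbers:

let elementStride = 56                    // MemoryLayout<Book>.stride from above
let fullCopy = 300_000 * elementStride    // 16,800,000 bytes, roughly 16 MB
let perFilter = fullCopy * 15 / 100       // about 2.5 MB copied per filter at a 15% match rate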
You can improve this by working exclusively in indices rather than full elements. There unfortunately aren't great tools for that in stdlib, but they're not that hard to write.
extension Collection {
    func indices(where predicate: (Element) -> Bool) -> [Index] {
        indices.filter { predicate(self[$0]) }
    }
}
Instead of copying 56 bytes, this copies 8 bytes per result, which can significantly reduce your memory churn.
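Usage might look like this, assuming books is your full array and prefix is the current search text (the explicit [Int] annotation is only there to make it obvious you're getting plain array indices back):

let lowerPrefix = prefix.lowercased()
let matchIndices: [Int] = books.indices(where: { $0.title.lowercased().hasPrefix(lowerPrefix) })

// Resolve an index back to its Book only when you actually need to display it:
let firstMatch = matchIndices.first.map { books[$0] }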
You could also implement this as an IndexSet; I'm not certain which would be faster to work with.
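If you want to try the IndexSet route, a parallel sketch (indexSet(where:) is a made-up name; Foundation's IndexSet only holds Ints, so this is constrained to Int indices, which Array has):

import Foundation

extension Collection where Index == Int {
    func indexSet(where predicate: (Element) -> Bool) -> IndexSet {
        IndexSet(indices.filter { predicate(self[$0]) })
    }
}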