I am currently working on an app that consumes a lot of legacy data from the backend, which means the types are very unreliable. The same boolean can arrive in any of these forms:

"canJump": "1"
"canJump": 0
"canJump": "true"
"canJump": false

So I have been pondering good ways to handle decoding this. I would like to avoid:
if let bool = try? container.decode(Bool.self, forKey: .jump) {
    self.canJump = bool
}
if let boolAsString = try? container.decode(String.self, forKey: .jump) {
    if boolAsString == "true" || boolAsString == "false" {
        self.canJump = Bool(boolAsString) ?? false
    }
}
etc. in every single custom decoder. So I tried an extension:
extension KeyedDecodingContainer {
    func decodeWithUnknownTypes(_ type: Bool.Type, forKey key: K) throws -> Bool? {
        // First try: the value is a real JSON bool.
        do {
            return try decode(Bool.self, forKey: key)
        } catch DecodingError.typeMismatch {}

        // Second try: the value is a string such as "true", "false", "1" or "0".
        do {
            let string = try decode(String.self, forKey: key)
            switch string.lowercased() {
            case "true", "1": return true
            case "false", "0": return false
            default: return nil
            }
        } catch DecodingError.typeMismatch {}

        // Third try: the value is the integer 1 or 0.
        do {
            let int = try decode(Int.self, forKey: key)
            if int == 1 { return true }
            if int == 0 { return false }
        } catch DecodingError.typeMismatch {}

        return nil
    }
}
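For context, even with that extension in place, every model still needs a custom init(from:), roughly like this sketch, where Player and its canJump key are just made-up examples:

struct Player: Decodable {
    let canJump: Bool

    private enum CodingKeys: String, CodingKey {
        case canJump
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        // Fall back to false when the value is not recognised as a boolean.
        self.canJump = try container.decodeWithUnknownTypes(Bool.self, forKey: .canJump) ?? false
    }
}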
The extension (not entirely finished) almost does the trick, but as the sketch above shows, it still leaves me implementing a custom decoder in almost all of my models for backend calls. I would love it if someone has an idea of how I could write this so that I can simply use a plain Decodable struct and it would automagically accept 1 as a Bool; there is a sketch of the desired usage at the end. Can I maybe override the decode method of KeyedDecodingContainer and still get the original implementation as a "first try"?
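To make the goal concrete, here is a minimal sketch of what I would ideally like to be able to write, with no custom init(from:) anywhere. Player is again just a made-up model; with a plain JSONDecoder today, most of these payload variants fail with a typeMismatch:

import Foundation

struct Player: Decodable {
    let canJump: Bool   // should accept true/false, 0/1, "0"/"1", "true"/"false"
}

let payloads = [
    #"{"canJump": "1"}"#,
    #"{"canJump": 0}"#,
    #"{"canJump": "true"}"#,
    #"{"canJump": false}"#,
]

for json in payloads {
    // Desired: every variant decodes to the same Bool without a custom decoder.
    let player = try? JSONDecoder().decode(Player.self, from: Data(json.utf8))
    print(player?.canJump as Any)
}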