I'm trying to write a Python function to format a Foundation.Decimal, for use as a type summarizer. I posted it in this answer. I'll also include it at the bottom of this question, with extra debug prints.
I've now discovered a bug, but I don't know if the bug is in my function, or in lldb, or possibly in the Swift compiler.
Here's a transcript that demonstrates the bug. I load my type summarizer in ~/.lldbinit, so the Swift REPL uses it.
:; xcrun swift
registering Decimal type summaries
Welcome to Apple Swift version 4.2 (swiftlang-1000.11.37.1 clang-1000.11.45.1). Type :help for assistance.
1> import Foundation
2> let dec: Decimal = 7
dec: Decimal = 7
Above, the 7 in the debugger output is from my type summarizer and is correct.
3> var dict = [String: Decimal]()
dict: [String : Decimal] = 0 key/value pairs
4> dict["x"] = dec
5> dict["x"]
$R0: Decimal? = 7
Above, the 7 is again from my type summarizer, and is correct.
6> dict
$R1: [String : Decimal] = 1 key/value pair {
  [0] = {
    key = "x"
    value = 0
  }
}
Above, the 0 (in value = 0) is from my type summarizer, and is incorrect. It should be 7.
So why is it zero? My Python function is given an SBValue. It calls GetData() on the SBValue to get an SBData. I added debug prints to the function to print the bytes in the SBData, and also to print the result of sbValue.GetLoadAddress(). Here's the transcript with these debug prints:
:; xcrun swift
registering Decimal type summaries
Welcome to Apple Swift version 4.2 (swiftlang-1000.11.37.1 clang-1000.11.45.1). Type :help for assistance.
1> import Foundation
2> let dec: Decimal = 7
dec: Decimal = loadAddress: ffffffffffffffff
data: 00 21 00 00 07 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
7
Above, we can see that the load address is bogus, but the bytes of the SBData are correct (byte 1, 21, contains the length and flags; byte 4, 07, is the first byte of the significand).
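As a sanity check, those bytes decode to 7 outside lldb as well. Here's a standalone sketch of the layout just described, using only the Python standard library; the helper name decodeNSDecimalBytes is mine, not part of any API:

import struct
from decimal import Decimal, getcontext

def decodeNSDecimalBytes(raw):
    # Decode a 20-byte NSDecimal: byte 0 is the signed exponent, byte 1
    # packs the length (low 4 bits) and the sign flag (0x10), and the
    # significand is up to eight little-endian 16-bit words from byte 4.
    exponent = struct.unpack_from('b', raw, 0)[0]
    flags = struct.unpack_from('B', raw, 1)[0]
    length = flags & 0xf
    isNegative = (flags & 0x10) != 0
    if length == 0:
        return 'NaN' if isNegative else '0'
    getcontext().prec = 200
    words = struct.unpack_from('<8H', raw, 4)
    value = Decimal(0)
    for word in reversed(words[:length]):   # word 0 is the least significant
        value = value * 65536 + word
    value = value.scaleb(exponent)
    return str(-value if isNegative else value)

goodBytes = bytearray.fromhex('00 21 00 00 07 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00')
print(decodeNSDecimalBytes(goodBytes))   # prints: 7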
3> var dict = [String: Decimal]()
dict: [String : Decimal] = 0 key/value pairs
4> dict["x"] = dec
5> dict
$R0: [String : Decimal] = 1 key/value pair {
  [0] = {
    key = "x"
    value = loadAddress: ffffffffffffffff
 data: 00 00 00 00 00 21 00 00 07 00 00 00 00 00 00 00 00 00 00 00
0
  }
}
Above, we can see that the load address is still bogus, and now the bytes of the SBData are incorrect. The SBData still contains 20 bytes (the correct number for a Foundation.Decimal, aka NSDecimal), but four 00 bytes have been inserted at the front and the last four bytes have been dropped. That shift explains the bad output: byte 1 of the shifted data is 00, so my function reads a length of zero and formats the value as '0'.
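The corruption is easy to characterize offline: the dictionary element's bytes are exactly the standalone bytes shifted right by four, with the tail dropped, and realigning them recovers the 7. A quick check (goodBytes and decodeNSDecimalBytes as defined in the sketch above):

dictBytes = bytearray.fromhex('00 00 00 00 00 21 00 00 07 00 00 00 00 00 00 00 00 00 00 00')

# Shifted right by four, last four bytes dropped:
assert dictBytes == bytearray(4) + goodBytes[:16]

print(decodeNSDecimalBytes(dictBytes))                     # prints: 0

# Realigning (and zero-padding the lost tail) recovers the value:
print(decodeNSDecimalBytes(dictBytes[4:] + bytearray(4)))  # prints: 7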
So here are my specific questions:

1. Am I using the lldb API incorrectly, and thus getting wrong answers? If so, what am I doing wrong and how should I correct it?

2. If I'm using the lldb API correctly, is this a bug in lldb, or is the Swift compiler emitting incorrect metadata? How can I figure out which tool has the bug? (If it's a bug in one of the tools, I'd like to file a bug report.)

3. If it's a bug in lldb or Swift, how can I work around the problem so I can format a Decimal correctly when it's part of a Dictionary? (The best I've come up with is the fragile heuristic sketched below.)
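For what it's worth, here's roughly what that heuristic would look like. It detects the four-byte shift and realigns the data before decoding. This is a sketch of a guess, not a real fix: a legitimate zero Decimal can also begin with four zero bytes and carry stale significand bytes, so the check can misfire, and the four bytes dropped off the end are simply unrecoverable. The helper name realignedDecimalBytes is mine:

import lldb

def realignedDecimalBytes(sbData):
    # HEURISTIC (an assumption, not documented lldb behavior): if the 20
    # bytes look shifted right by four -- four leading zero bytes, with a
    # plausible flags byte at offset 5 instead of offset 1 -- re-slice them
    # into the expected layout, zero-padding the four bytes lost off the end.
    sbError = lldb.SBError()
    raw = bytearray(sbData.GetUnsignedInt8(sbError, i) for i in range(20))
    if raw[0:4] == bytearray(4) and (raw[5] & 0x0f) != 0:
        return raw[4:] + bytearray(4)
    return raw

stringForDecimal (below) could then decode from the returned bytearray instead of calling the SBData accessors directly.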
Here is my type formatter, with debug prints:
# Decimal / NSDecimal support for lldb
#
# Put this file somewhere, e.g. ~/.../lldb/Decimal.py
# Then add this line to ~/.lldbinit:
#     command script import ~/.../lldb/Decimal.py

import lldb

def stringForDecimal(sbValue, internal_dict):
    from decimal import Decimal, getcontext
    print(' loadAddress: %x' % sbValue.GetLoadAddress())
    sbData = sbValue.GetData()
    if not sbData.IsValid():
        raise Exception('unable to get data from SBValue')
    if sbData.GetByteSize() != 20:
        raise Exception('expected data to be 20 bytes but found ' + repr(sbData.GetByteSize()))

    sbError = lldb.SBError()
    # Byte 0 is the signed exponent; byte 1 packs the significand length
    # (low 4 bits) and the sign flag (0x10).
    exponent = sbData.GetSignedInt8(sbError, 0)
    if sbError.Fail():
        raise Exception('unable to read exponent byte: ' + sbError.GetCString())
    flags = sbData.GetUnsignedInt8(sbError, 1)
    if sbError.Fail():
        raise Exception('unable to read flags byte: ' + sbError.GetCString())
    length = flags & 0xf
    isNegative = (flags & 0x10) != 0

    debugString = ''
    for i in range(20):
        debugString += ' %02x' % sbData.GetUnsignedInt8(sbError, i)
    print(' data:' + debugString)

    if length == 0 and isNegative:
        return 'NaN'
    if length == 0:
        return '0'

    # Accumulate the significand: up to eight little-endian 16-bit words
    # starting at byte 4, least significant word first.
    getcontext().prec = 200
    value = Decimal(0)
    scale = Decimal(1)
    for i in range(length):
        digit = sbData.GetUnsignedInt16(sbError, 4 + 2 * i)
        if sbError.Fail():
            raise Exception('unable to read memory: ' + sbError.GetCString())
        value += scale * Decimal(digit)
        scale *= 65536
    value = value.scaleb(exponent)
    if isNegative:
        value = -value
    return str(value)

def __lldb_init_module(debugger, internal_dict):
    print('registering Decimal type summaries')
    debugger.HandleCommand('type summary add Foundation.Decimal -F "' + __name__ + '.stringForDecimal"')
    debugger.HandleCommand('type summary add NSDecimal -F "' + __name__ + '.stringForDecimal"')