
I have the following table-valued function:

[Microsoft.SqlServer.Server.SqlFunction(SystemDataAccess = SystemDataAccessKind.Read,
                          FillRowMethodName = "XXX", TableDefinition = "time DateTime2")]

public static IEnumerable ValueAtTime(string somestring, DateTime inputTime)
{
...
}

As you can see, in the table definition I can specify DateTime2 as a column datatype. I would also like to do the same for the input parameter "inputTime", but since that parameter is a .NET type, I am unable to. Is there a way to get DATETIME2 as the input parameter's type in the SQL definition?

Solomon Rutzky
Pacman

2 Answers


Actually, the .NET DateTime type is the correct one to use to pass DATETIME2 values back and forth. And, since it is a .NET type and not one of the SqlTypes types (which all have an .IsNull property), if you need to allow for NULLs, then use DateTime?.
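For example, a minimal sketch of that mapping (the function, column, and parameter names here are illustrative, not from the question, and the assembly must be deployed to SQL Server to actually run):

```csharp
using System;
using System.Collections;
using Microsoft.SqlServer.Server;

public class UserDefinedFunctions
{
    // DateTime? allows a NULL to be passed in for a DATETIME2 parameter;
    // a plain DateTime parameter would reject NULL input.
    [SqlFunction(SystemDataAccess = SystemDataAccessKind.Read,
                 FillRowMethodName = "FillRow",
                 TableDefinition = "[event_time] DATETIME2")]
    public static IEnumerable ValueAtTime(DateTime? inputTime)
    {
        // Echo the input back as a single row when it is not NULL.
        if (inputTime.HasValue)
        {
            yield return inputTime.Value;
        }
    }

    public static void FillRow(object row, out DateTime? eventTime)
    {
        eventTime = (DateTime)row;
    }
}
```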

To help clarify: it sounds like the assumption is that, since SQL Server's DATETIME datatype has an equivalent DateTime type in .NET, SQL Server's DATETIME2 datatype should likewise have its own .NET equivalent, especially since DATETIME2 has greater range and precision. Don't be confused by the similarity in name between DATETIME and DateTime across the two environments. SQL Server's DATETIME has a smaller range and lower precision than .NET's DateTime, whereas DATETIME2 has the same range and precision:

Type         Environment    Min Date      Max Date      Precision
----         -----------    ----------    ----------    ---------
DATETIME     SQL Server     1753-01-01    9999-12-31    .000, .003, and .007 seconds
DATETIME2    SQL Server     0001-01-01    9999-12-31    .0000001 seconds
DateTime     .NET           0001-01-01    9999-12-31    .0000001 seconds

For more info, please see:


Also:

  • I would recommend against naming the output field "time", though: it is a SQL Server reserved word, and a little misleading besides, since there is also a TIME datatype in SQL Server.
  • You should be using SqlString instead of string for the other input parameter.
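Putting those two suggestions together, the signature from the question might look like the following sketch (the column name "event_time" and the parameter names are illustrative suggestions, not tested code):

```csharp
using System;
using System.Collections;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public class UserDefinedFunctions
{
    [SqlFunction(SystemDataAccess = SystemDataAccessKind.Read,
                 FillRowMethodName = "XXX",
                 // "event_time" avoids the reserved word "time"
                 TableDefinition = "[event_time] DATETIME2")]
    public static IEnumerable ValueAtTime(SqlString someString, DateTime inputTime)
    {
        // ...
        yield break;
    }
}
```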
Solomon Rutzky

Why not? Below is an excerpt from working code; it works just fine with SQL Server 2008.

[SqlFunction(
    FillRowMethodName = "FillRow",
    DataAccess = DataAccessKind.None,
    IsDeterministic = true,
    IsPrecise = true)]
public static IEnumerable ParsePacket(
    byte[] packet)
{ ... }

public static void FillRow(
    object rowdata,
    out DateTime? time,
    out short peer,
    out int addr,
    out byte quality,
    out bool? state,
    out float? value)
{ ... }

and

CREATE FUNCTION [split_packet] (@data [varbinary](300))
RETURNS  TABLE (
    [time] [datetimems] NULL,
    [peer] [smallint] NULL,
    [n] [int] NULL,
    [quality] [tinyint] NULL,
    [state] [bit] NULL,
    [value] [real] NULL
)
AS 
EXTERNAL NAME [Iec104Parser].[UDF].[ParsePacket]

Not so fine with 2005... :-(

While this example shows DATETIME2 as output, I don't see why there would be a problem with input (Microsoft does not seem to differentiate between the two).
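By the same reasoning, a hypothetical T-SQL declaration binding a DATETIME2 input parameter to the function from the question might look like this (the assembly, class, and parameter names are illustrative assumptions):

```sql
CREATE FUNCTION [ValueAtTime] (@someString [nvarchar](4000), @inputTime [datetime2](7))
RETURNS TABLE (
    [event_time] [datetime2](7) NULL
)
AS
EXTERNAL NAME [MyAssembly].[UserDefinedFunctions].[ValueAtTime];
```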

Eugene Ryabtsev
  • A few notes about the SQLCLR code: **1)** the `IsDeterministic` and `IsPrecise` properties do not apply to TVFs, only to scalar UDFs, **2)** for the input parameter, you should use `SqlBinary` instead of `byte[]`, and **3)** in the `FillRow` method, the `out` parameters should also be `Sql****` types and not .NET types (except for possibly the `[time]` field if you meant `[datetime2]` instead of `[datetimems]`; or is `datetimems` a User-Defined Data Type?). – Solomon Rutzky Jul 10 '15 at 05:39
  • @srutzky [datetimems] is defined as either [datetime2] cut down to ms or just [datetime], depending on SQL Server version. – Eugene Ryabtsev Jul 10 '15 at 10:33
  • @srutzky It seems SqlBinary has byte[] Value. What is the benefit of having the whole of SqlBinary as an input parameter if byte[] Value is all I need? Also, what is the benefit of using Sql**** for other types, especially for ints, floats etc? – Eugene Ryabtsev Jul 10 '15 at 10:34
  • Hi. In SQL Server 2005 (prior to the existence of nullable types), it was the only way to pass `NULL`s through. Since then, it at least provides a consistent test for `NULL`, rather than needing to use either `Type?` or test for `DbNull.Value`. And yes, `SqlBinary` does return `byte[]` for the `.Value` property as that is the intended .NET type. But the SQLCLR API is written to use these types. Look at the MSDN documentation for each as there are extension methods for each type. `SqlString` has several benefits over `string`. `SqlDecimal` is more precise than `Decimal`, etc. (_continued_) – Solomon Rutzky Jul 10 '15 at 13:55
  • Each of the `Sql*****` types is a wrapper for the SQL Server value, plus some additional stuff. Sure, some of the additional stuff is more important for some types than others, and for some not really important at all, but again, it is a consistent interface. If passing in really large `NVARCHAR(MAX)` / `VARBINARY(MAX)` values, then `SqlChars` / `SqlBytes`, respectively, provide a means of streaming that is not otherwise possible. It has limited application, but way more efficient when it can be used. Please see the article that I linked to (and wrote) in my answer as it has more details. – Solomon Rutzky Jul 10 '15 at 14:14
  • Also, just curious as to what you mean by "[datetimems] is defined as either [datetime2] cut down to ms or just [datetime]". Cut down how? By using `DATETIME2(3)`? If so, what is the benefit of that over just using `DATETIME`? It seems that `DATETIME2(3)` is 1 byte smaller, and recommended to use for new development. But, if the data needs to be transferable between systems, then `DATETIME2(3)` can still produce values that can't exist fully in `DATETIME`. Such as any millisecond ending in something other than 0, 3, or 7. Again, was just curious about this. – Solomon Rutzky Jul 10 '15 at 16:18
  • @srutzky Thanks for your extensive comments. Yes, `DATETIME2(3)`. That is because the data in the packets contains time with millisecond precision and we prefer not to lose that precision if we don't have to. – Eugene Ryabtsev Jul 11 '15 at 12:16