As others have noted, the C# specification does not formally define "type". The C# spec does not attempt to be either a formal mathematical description of the language semantics or a tutorial for beginner programmers; you are expected to know what words like "type" and "value" and so on mean before you start reading the specification.
There are many possible definitions of "type", at varying levels of precision. For example, the ECMAScript specification somewhat informally defines a type as "a set of values", but since ECMAScript only has nine possible types, it does not need to have a strict, well-founded definition.
Another answer says that a type consists of a set of values, a set of rules for operating on those values, and a name. This is a very common working definition of a type, but it runs into problems when you try to think about it more formally. What is the name of an anonymous type? Is `double*[][]` the name of the type "jagged two-dimensional array of pointers to double"? Does that type even have a name? Are `List<int>` and `List<System.Int32>` two different names for the same type? Does any set of values form a type? Are types themselves values? What is the type of a type? And so on. It's a good working definition, but it doesn't quite hold up under scrutiny.
As a compiler writer, the way I think about types in C# is as follows: a type is a classification that can be applied to an expression. An expression is classified as being of a particular type if a proof exists that shows how the expression may be legally classified as that type, according to the rules of C#.
For example, suppose we are attempting to work out the type of the expression "1 + 2.3". We begin by working out the type of the expression "1". The rules of C# give us that; an expression of that form is always classified as an int. We work out the type of the expression "2.3". Again, the rules of C# tell us that an expression of this form is classified as "double". What is the type of the whole expression? The rules of C# tell us that the sum of an "int" and a "double" is classified as a "double". So the type of this expression is "double".
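You can watch this classification happen at run time. A minimal sketch (the class and variable names here are illustrative, not part of the original answer):

```csharp
using System;

class TypeClassificationDemo
{
    static void Main()
    {
        // The literal 1 is classified as int; the literal 2.3 as double.
        Console.WriteLine((1).GetType());     // System.Int32
        Console.WriteLine((2.3).GetType());   // System.Double

        // int + double: the int operand is implicitly converted to double,
        // so the whole expression is classified as double.
        var sum = 1 + 2.3;
        Console.WriteLine(sum.GetType());     // System.Double
    }
}
```

Note that `var` does not change any of this; the compiler still constructs the same proof and classifies `sum` as `double` at compile time.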
That's what the compiler does when it performs type analysis: it constructs proofs that particular expressions can legally be classified in particular ways, or, if the program is erroneous, it tells you why it was unable to construct a proof.
But all a type is, at this level, is simply a classification. You can do the same thing with any domain. For example, in the domain of positive integers, certain numbers are classified as "odd" and others as "even"; certain numbers are classified as "prime" and others as "composite". If you want to classify a number, say, "123", then you might write a proof that shows that "123" is classified as both "odd" and "composite".
You can make up any classification you want, and you know what you just did? You just made a type. You can classify numbers into "the sum of two primes" and "not the sum of two primes", and "greater than four" and "not greater than four". And then you can combine them together into types like "even integers that are greater than four and not the sum of two odd primes". It is easy to determine if any particular integer is a member of this type; so far all integers that we've tried have been determined to not be members of that type. It is at this time unknown whether that type has any members or not; just because you can come up with a type does not mean that you know the size of the type!
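A membership test for that last type is easy to write down. Here is a sketch in C# (the predicate names are my own; the answer does not name them), which confirms that no even integer up to 1000 is a member:

```csharp
using System;
using System.Linq;

class ClassificationDemo
{
    // Trial-division primality test; illustrative, not optimized.
    static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int d = 2; d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    // Membership test for the type "even, greater than four, and not the
    // sum of two odd primes". Finding any member would refute Goldbach's
    // conjecture for this range of integers.
    static bool IsMember(int n)
    {
        if (n <= 4 || n % 2 != 0) return false;
        for (int p = 3; p <= n - 3; p += 2)
            if (IsPrime(p) && IsPrime(n - p)) return false;
        return true;
    }

    static void Main()
    {
        // Every even integer from 6 to 1000 turns out not to be a member.
        bool anyMember = Enumerable.Range(6, 995)
            .Where(n => n % 2 == 0)
            .Any(IsMember);
        Console.WriteLine(anyMember);   // False
    }
}
```

The point stands: the membership test is trivial to state, yet nobody knows whether the type is empty.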
A type system can allow any possible classification scheme. We could write C# so that "odd" and "even" and "prime" and "composite" were subtypes of "int". We could write C# so that any property of integers that you can write down is a subtype of int! We do not do so because such type systems put an enormous burden upon the compiler; compilers that work with such type systems are very complicated, very slow, and can get into situations where they have to solve impossible problems. The designers of the CLR and C# built the type system that we have such that the compiler can (usually) be extremely fast in classifying expressions into types.
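You can approximate what such a subtype would mean in today's C# with a runtime-checked wrapper; the difference is that the "proof" of membership is checked when the program runs rather than when it compiles. This struct is purely illustrative; nothing like it exists in the framework:

```csharp
using System;

// A sketch of "even" as a classification of int. A richer type system
// would verify evenness at compile time; here the check happens in the
// constructor, at run time.
readonly struct Even
{
    public int Value { get; }

    public Even(int value)
    {
        if (value % 2 != 0)
            throw new ArgumentException("not an even integer", nameof(value));
        Value = value;
    }

    // Every Even is an int, so the widening conversion is implicit.
    public static implicit operator int(Even e) => e.Value;
}

class Demo
{
    static void Main()
    {
        Even ok = new Even(42);
        Console.WriteLine((int)ok);   // 42
        // new Even(7) would throw ArgumentException: the classification
        // failed, but we only find out at run time, not at compile time.
    }
}
```

The burden the answer describes is exactly the cost of moving that constructor check into the compiler for every property you can write down.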