
So I was looking up C# Caesar ciphers online and I found this website: https://www.programmingalgorithms.com/algorithm/caesar-cipher

I looked through and generally the code made sense to me until this part:

char offset = char.IsUpper(ch) ? 'A' : 'a';
return (char)((((ch + key) - offset) % 26) + offset);

I understand the ternary operator; it's mainly the second line that I can't make sense of. It returns a character, but somehow adds a character and a number together, subtracts a character, takes the modulus, and then adds a character back on?

The only explanation I've come up with is that each character has an ID and it's doing the operations on that rather than on the character itself? Honestly it's a bit beyond me; if someone could explain it, that would be great.

Thanks in advance.

ParadAUX

2 Answers


Say the character you typed is 'F'; its ASCII code is 0x46 (70 decimal), thus:

int ch = 0x46;

then the value is shifted by the key parameter (let's take 21)

int key = 21;

the offset is just the distance between the character's numeric value and the code of 'A', i.e. the letter's index in the alphabet:

'A' - 'A' = 0 -> A is at index 0 of letters
'B' - 'A' = 1 -> B is at index 1 of letters
...
'Z' - 'A' = 25 -> Z is at index 25

same thing when letters are lowercase, using 'a'.
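
You can print those indices yourself in C#, since the subtraction is performed on the characters' numeric values (a small illustration of my own, not from the linked site):

Console.WriteLine('A' - 'A');   // 0
Console.WriteLine('F' - 'A');   // 5
Console.WriteLine('Z' - 'A');   // 25
Console.WriteLine('f' - 'a');   // 5, same idea for lowercase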

now the % 26 wraps the result around so it stays within the 26 letters (going past 'Z' comes back to 'A')

thus (('F' + 21) - 'A') % 26 gives 0

then adding the offset back brings it into the letter range: 0 + 'A' = 'A'
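
Putting the whole expression together for this specific example (a small sketch; the variable names are mine):

char ch = 'F';       // numeric value 70 (0x46)
int key = 21;
char offset = 'A';   // uppercase, so the offset is 'A' (65)

int index   = ((ch + key) - offset) % 26;   // (70 + 21 - 65) % 26 = 26 % 26 = 0
char result = (char)(index + offset);       // 0 + 65 = 65, i.e. 'A'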

As described in your title, this is just a Caesar cipher in C#.
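
For completeness, here is a minimal sketch of how that line is typically used in a full encryption method; the method name and the handling of non-letters are my own choices, not necessarily what the linked site does:

using System.Text;

static string Encrypt(string input, int key)
{
    var result = new StringBuilder();

    foreach (char ch in input)
    {
        if (char.IsLetter(ch))   // assumes plain A-Z / a-z input
        {
            char offset = char.IsUpper(ch) ? 'A' : 'a';
            result.Append((char)((((ch + key) - offset) % 26) + offset));
        }
        else
        {
            result.Append(ch);   // leave spaces, digits, punctuation untouched
        }
    }

    return result.ToString();
}

// Encrypt("Hello", 3) -> "Khoor"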

OznOg

According to ECMA-334 (C# language spec):

The char type is used to represent Unicode code units. A variable of type char represents a single 16-bit Unicode code unit.

According to the unicode.org glossary:

Code Unit. The minimal bit combination that can represent a unit of encoded text for processing or interchange. The Unicode Standard uses 8-bit code units in the UTF-8 encoding form, 16-bit code units in the UTF-16 encoding form, and 32-bit code units in the UTF-32 encoding form.

From these two resources we can infer that the char type is a 16-bit wide field of binary digits. What better way to implement a 16-bit wide field of binary digits than as a 16-bit integer, hmmmm?
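
In other words, a char in C# really is just a 16-bit number, and that number is the "ID" you guessed at. A small sketch to see it (illustrative only):

char c = 'F';
Console.WriteLine(sizeof(char));    // 2 (bytes), i.e. 16 bits
Console.WriteLine((int)c);          // 70, the numeric value of 'F'
Console.WriteLine(c + 3);           // 73: arithmetic promotes the char to int
Console.WriteLine((char)(c + 3));   // 'I': cast the result back to a char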

autistic