Below is a function I wrote to convert a decimal number to binary (the result is returned as an int whose decimal digits are the binary digits):
int dtob(int n)
{
    int bin = 0;
    int i = 0;
    while (n >= 1)
    {
        /* append the current remainder as the i-th decimal digit */
        bin += pow(10, i) * (n % 2);
        n = n / 2;
        i++;
    }
    return bin;
}
This function gives the correct binary value for some inputs, such as 0 through 3 and 8 through 11, but gives an incorrect output, off by 1, for inputs like 4, 5, 6, and 12.

Could someone tell me what flaw there is in the logic I have used?