If you want a number to hold decimals, you need to declare it as a double, not an int. I have this code, which should solve your problem. Also, if you are on UNIX, make sure to compile with gcc FILEPATH -lm -o OUTPUTPATH; the -lm flag links in the math library that pow needs.
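For example, assuming the file is saved as euler.c (the name is just an illustration):

gcc euler.c -lm -o euler
./euler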
#include <stdio.h>
#include <math.h>

int main(void)
{
    double x, power, num = 1; // doubles allow for decimal places, so declare these as double
    int position = 1;         // position only ever holds whole numbers, so an int is fine

    while (position <= 100)
    {
        num = 1 / num;         // num is a double, so this is floating-point division
        num++;                 // num is now 1 + 1/position
        x = num;
        power = pow(x, x);     // raises num to the power num, i.e. (1 + 1/position)^(1 + 1/position)
        printf("%f\n", power); // the "\n" keeps each result on its own line
        position += 1;
        num = position;        // reset num for the next pass
    }
    return 0;
}
Another option is a for loop:
#include <stdio.h>
#include <math.h>

int main(void)
{
    double x, power, num = 1;

    for (int i = 1; i <= 100; i++) {
        num = 1 / num;
        num = num + 1;
        x = num;
        power = pow(x, x);
        printf("%f\n", power);
        num = i + 1; // i + 1 is the next iteration's count; the loop's own i++ replaces the separate position counter
    }
    return 0;
}
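The for loop folds the counter's declaration, test, and increment into the loop header, which is why the separate position variable from the while version is no longer needed.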
If you are trying to approximate Euler's number, I don't see why you wouldn't just use something like:
static const double E = 2.718281828459045;
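On many systems <math.h> also defines the constant M_E for the same value, but it is not required by the ISO C standard, so don't rely on it being available everywhere:

static const double E = M_E; // M_E is a POSIX extension in <math.h>, not guaranteed by ISO C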
I have simply corrected the syntax errors in your program, but I don't think it will actually get you E: pow(x, x) raises 1 + 1/position to itself rather than to position, and (1 + 1/n)^(1 + 1/n) tends to 1 as n grows, not to E. See this page about calculating E in C.
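If you do want to compute the approximation rather than hard-code it, here is a minimal sketch (my own, assuming your loop was aiming for the limit (1 + 1/n)^n) that raises 1 + 1/n to the power n instead of to itself:

#include <stdio.h>
#include <math.h>

int main(void)
{
    // (1 + 1/n)^n approaches E as n grows; note the exponent is n, not 1 + 1/n
    for (int n = 1; n <= 100; n++) {
        double approx = pow(1 + 1.0 / n, n);
        printf("n = %3d: %f\n", n, approx);
    }
    return 0;
}

At n = 100 this reaches roughly 2.7048, and larger n brings it closer still.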