I am trying to understand how different data types control the way the output of the microtime() function is displayed.
Here are the three scenarios:
1. This displays the output of microtime() as a float, since I have passed true as the argument.
I want to know: how many decimal places does a float display by default (in this case it is 4)?
How is that related to the architecture of the CPU?
lab$ php -r 'echo microtime(true),"\n";'
1361544317.2586
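As a quick check of my own (just a sketch; I am assuming the default precision ini setting on my machine is 14), it looks like the number of digits shown comes from how echo prints a float rather than from microtime() itself:

<?php
$t = microtime(true);        // float seconds since the Unix epoch
echo $t, "\n";               // shows about 4 decimals under the default precision setting
printf("%.8f\n", $t);        // forcing 8 decimals reveals more of the fractional part
ini_set('precision', 17);    // assumption: default precision is 14; raise it to 17
echo $t, "\n";               // now echo itself prints more digits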
2. Here, the output of microtime() is cast to a double.
So does that mean it will show only up to 6 decimal places of the microseconds? How is a double different from a float, and what is its maximum size/precision?
lab$ php -r 'echo (double)microtime(),"\n";'
0.751238
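To see what the cast actually does, I also ran this little experiment of my own; as far as I can tell, the cast keeps only the leading "0.xxxxxxxx" part of the string:

<?php
$raw = microtime();          // string like "0.27127200 1361544378"
var_dump($raw);              // the raw two-field string
var_dump((double)$raw);      // only the fractional part survives the cast, e.g. roughly 0.271272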
3. This is the default usage of microtime(), where it prints both the microseconds and the Unix epoch timestamp:
lab$ php -r 'echo microtime(),"\n";'
0.27127200 1361544378
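(For completeness, the only way I have found to recombine the two fields myself is to split the string; this is just my own snippet:)

<?php
list($usec, $sec) = explode(' ', microtime());   // e.g. "0.27127200" and "1361544378"
echo (float)$sec + (float)$usec, "\n";           // roughly the same value as microtime(true)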
I want to understand this because in many places I have seen microtime() used to generate the seed for the mt_rand() PRNG in this way:
mt_srand((double)microtime() * 1000000)
What is the purpose of the typecast here and of multiplying by 10^6? And what is the maximum possible value of the parameter to mt_srand()?
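Breaking that expression apart as a quick test of my own (assuming the cast behaves the same way as in scenario 2 above), the seed seems to be bounded by the microsecond field alone, which is why I am asking about the maximum:

<?php
$s = microtime();                    // e.g. "0.27127200 1361544378"
$seed = (double)$s * 1000000;        // cast keeps only the "0.271272..." part; times 10^6 gives roughly 271272 here
mt_srand((int)$seed);                // mt_srand() takes an integer seed
echo mt_rand(), "\n";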
Thanks.