Let’s consider the following scenario: I have two Python processes receiving the same events, and I have to measure the delay between when process A received an event and when process B received it, as precisely as possible, i.e. with less than 1 ms of error.
Using Python 2.7 on a Unix system you can use time.time, which provides the time in seconds since the Epoch and has a typical resolution of a fraction of a millisecond on Unix. You can use it in different processes and still compare the results, since both processes receive the time since the Epoch, a defined and fixed point in the past.
On Windows time.time also provides the time since the Epoch, but the resolution is in the range of 10 ms, which is not suitable for my application.
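In code this is as simple as stamping each event with time.time() in both processes and subtracting the stamps afterwards. A minimal sketch (the event handler and the timestamp dictionary are placeholders, not part of any real API):

```python
import time

# Each process stamps an event with time.time() on arrival. Since
# both stamps count seconds from the same Epoch, subtracting the
# stamp taken in process A from the one taken in process B yields
# the inter-process delay directly.
def on_event(event_id, timestamps):
    timestamps[event_id] = time.time()

# Offline, after collecting the stamps from both processes:
# delay = timestamps_b[event_id] - timestamps_a[event_id]
```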
There is also time.clock, which is super precise on Windows, and much less precise on Unix. The major drawback is that it returns the time since the process started or since the first call of time.clock within that process. This means you cannot compare the results of time.clock between two processes, as they are not calibrated to a common reference point.
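A quick way to see this behaviour on Windows (the exact numbers will vary, but the pattern is the point):

```python
import time

# On Windows, time.clock() measures from its *first call* within the
# current process, so two processes calling it at different times
# get values with different zero points.
time.sleep(1.0)
print(time.clock())   # ~0.0 -- the sleep is invisible, because this
                      # first call defines the zero point
time.sleep(0.5)
print(time.clock())   # ~0.5 -- measured from the first call above
```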
I had high hopes for Python 3.3, where the time module was revamped, and I was reading about the new timer functions it introduced. time.perf_counter looked like it would suit my needs, as the documentation said it provides the highest available resolution for the system and is “system-wide”, in contrast to, for example, the new time.process_time, which is “process-wide”. Unfortunately it turned out that time.perf_counter behaves like time.clock on Python 2.7: it provides you with the time since the process started or since the first time the method was called within the process. The results of time.monotonic are comparable between processes, but again not precise enough on Windows.
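Python 3.3 at least makes it easy to check what each clock actually offers on a given platform, via time.get_clock_info. A small sketch:

```python
import time

# Print the implementation and advertised resolution of each clock.
for name in ('time', 'monotonic', 'perf_counter'):
    info = time.get_clock_info(name)
    print(name, info.implementation, 'resolution:', info.resolution)

# On Windows this typically reports GetTickCount64() with a ~15.6 ms
# resolution for 'monotonic', and QueryPerformanceCounter() with
# sub-microsecond resolution for 'perf_counter'.
```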
Here is a small script which demonstrates that the times provided by time.clock and time.perf_counter are not comparable between processes. It starts two processes and lets both of them print the output of the timer to stdout. In the output the times should be monotonically increasing. Since I let process 2 sleep for one second before calling the timer method for the first time, the output of this process is roughly one second smaller when using time.clock or time.perf_counter.
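In outline, the script looks like this (a sketch assuming multiprocessing; exact details such as function names may differ):

```python
import sys
import time
from multiprocessing import Process

def show_timer(label, timer_name, delay):
    # Process 2 sleeps before its first call to the timer, which
    # shifts the zero point of any per-process timer.
    time.sleep(delay)
    timer = getattr(time, timer_name)
    for _ in range(3):
        print(label, timer())
        sys.stdout.flush()
        time.sleep(1)

if __name__ == '__main__':
    for timer_name in ('clock', 'time', 'monotonic', 'perf_counter'):
        print('Using', timer_name)
        p1 = Process(target=show_timer, args=('P1', timer_name, 0))
        p2 = Process(target=show_timer, args=('P2', timer_name, 1))
        p1.start()
        p2.start()
        p1.join()
        p2.join()
```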
The result when running on Windows with Python 3.3:
```
$ python timertest.py
Using clock
P1 6.146032526480321e-06
P1 0.9926582847820045
P2 2.9612702173041547e-05
P1 1.9941743992602412
P2 1.0008579302676737
P2 2.0022709590185346
Using time
P1 1368614235.509732
P1 1368614236.511172
P2 1368614236.601301
P1 1368614237.512612
P2 1368614237.602741
P2 1368614238.604181
Using monotonic
P1 484.636
P1 485.63800000000003
P2 485.738
P1 486.639
P2 486.73900000000003
P2 487.741
Using perf_counter
P1 12.390910576623565
P1 13.39050745276285
P2 7.542858100680394e-06
P1 14.39190763071843
P2 1.0014012954160376
P2 2.0041399116368144
```
So as far as I can see, there is no way of getting comparable times between two processes on Windows with a precision better than 10 ms. Is that correct, or am I missing something?