System.currentTimeMillis() example
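A minimal snippet along these lines (the exact original may have differed) produces both values, the second one being just the first divided by 1000:

    long millis = System.currentTimeMillis();   // milliseconds elapsed since 1970-01-01 00:00:00 UTC
    System.out.println(millis);                 // e.g. 1426349294842
    System.out.println(millis / 1000);          // the same moment in seconds, e.g. 1426349294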
Running it printed two numbers: 1426349294842 (milliseconds) and 1426349294 (seconds). Alternatively, you can use code such as the one below to obtain a Calendar object that you can work with:
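A minimal sketch, using the pre-Java-8 API discussed in this article:

    import java.util.Calendar;

    Calendar calendar = Calendar.getInstance();              // initialized with the current date/time
    calendar.setTimeInMillis(System.currentTimeMillis());    // explicit, though getInstance() already does this
    System.out.println(calendar.getTime());                  // a human readable date in the local timezone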
That will print a human readable time in your local timezone. To understand exactly what System.currentTimeMillis() returns we must first understand what UTC is, what UNIX timestamps are, and how they form a universal time keeping standard.
What is UTC?
UTC is the main universal time keeping standard. It stands for Coordinated Universal Time. People use it to measure time in a common way around the globe. Since it is used essentially interchangeably with GMT (Greenwich Mean Time), you can think of it as the timezone defined around the Greenwich Meridian, a.k.a. the Prime Meridian (0° longitude).
What is a UNIX timestamp?
A UNIX timestamp, also known as Epoch Time or POSIX timestamp, is a representation of a moment defined as the time that has elapsed since a reference known as the UNIX epoch: 1970-01-01 00:00:00 UTC (see "What is UTC?" above). Using Java as an example, System.currentTimeMillis() returns exactly that: a UNIX timestamp in milliseconds. UNIX timestamps are often measured in seconds as well, but System.currentTimeMillis() will always be in milliseconds. Following is a table that unifies these concepts:
A human readable moment in time:
    2015-03-07 16:00 UTC

The same moment as a UNIX timestamp in milliseconds:
    1425744000000
    - get it with System.currentTimeMillis() in Java
    - get it with new Date().getTime() in Javascript
    - get it with round(microtime(true) * 1000) in PHP

The same moment as a UNIX timestamp in seconds:
    1425744000
    - get it with System.currentTimeMillis() / 1000 in Java
    - get it with new Date().getTime() / 1000 in Javascript
    - get it with time() in PHP
You can therefore say that on 2015-03-07 16:00 UTC, 1425744000000 milliseconds (or 1425744000 seconds) have passed since 1970-01-01 00:00:00 UTC.
Note that System.currentTimeMillis() is based on the time of the system / machine it's running on.
In conclusion, the UNIX timestamp is just a system / format for representing a moment in time. Check out some other date / time systems.
UNIX timestamps, UTC & timezones
One of the most important things to realize is that two people calling System.currentTimeMillis() at the same time should get the same result no matter where they are on the planet.
That shared result is simply one moment seen from 4 different perspectives: John's timezone, Juliette's timezone, UTC (Coordinated Universal Time) and milliseconds from the Unix epoch.
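Here is a minimal sketch of that idea; the two timezones are assumptions chosen purely for illustration:

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.TimeZone;

    long millis = 1425744000000L;                                    // the same number for everybody on the planet
    SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm");

    fmt.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles"));    // John's (assumed) timezone
    System.out.println(fmt.format(new Date(millis)));                // 2015-03-07 08:00

    fmt.setTimeZone(TimeZone.getTimeZone("Europe/Paris"));           // Juliette's (assumed) timezone
    System.out.println(fmt.format(new Date(millis)));                // 2015-03-07 17:00

    fmt.setTimeZone(TimeZone.getTimeZone("UTC"));                    // UTC
    System.out.println(fmt.format(new Date(millis)));                // 2015-03-07 16:00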
Millisecond relevance & usage
- Easy time-keeping. Simple architectures.
Java's Calendar (and the legacy Date) itself stores time as milliseconds since the Unix Epoch. This is great for many reasons, and it happens not only in Java but in other programming languages as well. In my experience, a clean time-keeping architecture emerges naturally from this: the Unix Epoch is January 1st, 1970, midnight, UTC. Therefore, if you choose to store time as milliseconds since the Unix Epoch you get a lot of benefits:
- architecture clarity: server side works with UTC, client side shows the time through its local timezone
- database simplicity: you store a number (milliseconds) rather than complex data structures like DateTimes
- programming efficiency: in most programming languages you have date/time objects capable of taking milliseconds since Epoch when constructed (which allows for automatic conversion to client-side timezone)
Code & architecture become simpler and more flexible when using this approach, which is also sketched below:
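As an illustration (the column name and client timezone below are assumptions), the whole flow boils down to a long travelling from the server to the client:

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.TimeZone;

    // Server side: the moment is kept as a plain number, e.g. in a BIGINT column named created_at.
    long createdAt = System.currentTimeMillis();

    // Client side: rebuild a date object from the stored millis and render it in the client's timezone.
    SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    fmt.setTimeZone(TimeZone.getTimeZone("Europe/Paris"));           // hypothetical client timezone
    System.out.println(fmt.format(new Date(createdAt)));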
- Benchmarking & performance monitoring can be done by calculating time differences ("deltas") between starting points and end points. For example you might want to test how differences in your implementation affect speed:
    int[] myNumbers = new int[] { 5, 2, 1, 3, 4, 6, 9, 7, 2, 1 };
    long start = System.currentTimeMillis();
    sort(myNumbers);                          // the sorting implementation under test
    System.out.println("Duration: " + (System.currentTimeMillis() - start));
What might happen above is that sorting such a small array is so fast that the duration shows up as zero. In that case you can use the trick of repeating the method call many times so that you get more relevant durations & comparisons:
    long start = System.currentTimeMillis();
    for (int i = 0; i < 1000; i++) {
        int[] myNumbers = new int[] { 5, 2, 1, 3, 4, 6, 9, 7, 2, 1 };
        sort(myNumbers);                      // the sorting implementation under test
    }
    System.out.println("Duration: " + (System.currentTimeMillis() - start));
- Log investigations to document durations and to reverse engineer relevant points in time based on them. E.g. a system might not log the start of an activity but it might log an abnormal duration when it ends. This would look something like:
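A hypothetical log line of this kind (the service and method names are invented for illustration) might be:

    2015-03-15 10:13:32:564 WARN  [OrderService] processOrder took 3367 ms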
The easiest way to manually deduce when the method started would be to convert 2015-03-15 10:13:32:564 into milliseconds, subtract 3367, then convert back to a time. The starting point of the activity is relevant because other activities that ran in parallel can be investigated. It's important to note though that a line such as the one above is normally indicative of poor logging because if the starting point is relevant it can be calculated programmatically (and printed) at the moment of logging.
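A short sketch of that deduction, assuming the log timestamp is in the machine's local timezone (and that the surrounding method declares throws java.text.ParseException):

    import java.text.SimpleDateFormat;
    import java.util.Date;

    SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss:SSS");
    long endMillis = fmt.parse("2015-03-15 10:13:32:564").getTime();  // when the abnormal duration was logged
    long startMillis = endMillis - 3367;                              // subtract the logged duration
    System.out.println(fmt.format(new Date(startMillis)));            // 2015-03-15 10:13:29:197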
- Unique security tokens that can be safely sent through the network - e.g. one shot tokens are usually encrypted strings sent between 2 parties that (want to) trust each other over a less secure medium such as network requests / responses that can theoretically be sniffed. One shot tokens are a way to authenticate one party to another. The 'uniqueness' idea solves the sniffing (or one shot token theft) problem: a one shot token should have a lifespan of 'one attempt' (preventing further, malicious attempts) and for security reasons the lifetime of the token itself should be minimised. Therefore these tokens sometimes encrypt a high-resolution timestamp within their content or might base the salt value on such a timestamp, as sketched below.
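A compressed sketch of such a token; here it is signed (HMAC) rather than encrypted, and the secret, algorithm choice and token layout are all assumptions. The timestamp lets the receiver reject tokens that are too old, the random nonce makes each token unique, and the MAC proves it was produced by a party holding the shared secret (the surrounding method would declare the checked crypto exceptions):

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import java.util.Base64;

    // Hypothetical shared secret; in practice it would be provisioned securely, never hard coded.
    byte[] secret = "change-me".getBytes(StandardCharsets.UTF_8);

    long now = System.currentTimeMillis();                       // bounds the token's lifetime
    byte[] nonce = new byte[16];
    new SecureRandom().nextBytes(nonce);                         // makes the token single-use
    String payload = now + ":" + Base64.getUrlEncoder().withoutPadding().encodeToString(nonce);

    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(secret, "HmacSHA256"));
    String token = payload + ":" + Base64.getUrlEncoder().withoutPadding()
            .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));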
- Unique identifiers also known as UIDs that can be sent through the network even in plaintext, unencrypted, because their value is relevant to the involved parties only to identify a single conversation among a series of conversations. For example, in a web service context where one entity asks a question and the other answers asynchronously, there are many entities which can become request initiators but still refer to the same topic / conversation. When multiple topics are involved, a UID is necessary to correlate asynchronous callbacks with initial requests. Such conversations happen not only via networks between connected entities, but inside such entities as well between server modules that produce / store / consume information in any way that needs a key-based lookup. For example, a record in a database can have, depending on utility, a time-based unique id as a primary key - that time can actually be the insertion time of the record into the database.
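A minimal sketch of such a time-based UID (the id format is chosen arbitrarily for illustration):

    import java.util.concurrent.atomic.AtomicLong;

    class TimeBasedIds {
        // A millisecond timestamp plus a counter keeps ids unique even when several
        // records are created within the same millisecond (on a single node).
        private static final AtomicLong COUNTER = new AtomicLong();

        static String nextId() {
            return System.currentTimeMillis() + "-" + COUNTER.incrementAndGet();
        }
    }
    // e.g. TimeBasedIds.nextId() -> "1425744000000-42"; the timestamp part doubles as the insertion time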
- Objects with limited life - sessions should usually expire after an inactivity period which can be precisely defined for security reasons. One shot tokens (described just above) should also have a limited lifetime. In a database context, records can have a relevance that is limited in time and therefore they can expire as well.
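A minimal sketch of expiry driven by millisecond timestamps; the 30-minute limit is an arbitrary example:

    class Session {
        private static final long MAX_IDLE_MILLIS = 30 * 60 * 1000L;   // assumed inactivity limit
        private long lastActivityMillis = System.currentTimeMillis();

        void touch() {                       // call on every request that belongs to this session
            lastActivityMillis = System.currentTimeMillis();
        }

        boolean isExpired() {
            return System.currentTimeMillis() - lastActivityMillis > MAX_IDLE_MILLIS;
        }
    }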
- Smear techniques are used to cushion & spread a delay over a time period. For example: Leap seconds are one-second adjustments added to the UTC time to synchronize it with solar time. Leap seconds tend to cause trouble with software. For example, on June 30, 2012 you had the time 23:59:60. Google uses a technique called leap smear on its servers, which, instead of adding an extra second, extends seconds prior to the end of the day by a few milliseconds each so that the day will last 1000 milliseconds longer. A different but related programming challenge is to keep timestamps in the database in an ascending order during leap seconds. Below is a pseudocode solution involving a technique similar to the 'smear':
    if (newTime is in a leap second) {
        read smearCount from db;
        if (smearCount <= 0) {
            smearCount = 1000;              // start a new smear: inserted timestamps jump 1000 ms ahead
            update smearCount in db;
        }
        newTime += smearCount;
        insert newTime into db;
    } else {
        read smearCount from db;
        if (smearCount > 0) {
            smearCount -= 1;                // bleed the offset away, one millisecond per insert,
            update smearCount in db;        // so stored timestamps stay ascending while converging back
            newTime += smearCount;
        }
        insert newTime into db;
    }
Instant.now().toEpochMilli()
This is an API introduced with Java 8 and probably promoted as a modern replacement for legacy time-keeping code. In this context, Instant.now().toEpochMilli() is essentially the newer equivalent of System.currentTimeMillis(). To replicate the original Java example that we gave, this is how the UNIX timestamp would be obtained:
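A minimal sketch mirroring the example from the beginning of the article:

    import java.time.Instant;

    Instant now = Instant.now();
    System.out.println(now.toEpochMilli());     // UNIX timestamp in milliseconds, e.g. 1426349294842
    System.out.println(now.getEpochSecond());   // UNIX timestamp in seconds, e.g. 1426349294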
Or, in one line:

    System.out.println(Instant.now().toEpochMilli());

Getting an Instant back from the milliseconds (instead of a Calendar) can be done like this:
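For instance, a sketch using the same millisecond value as earlier:

    import java.time.Instant;
    import java.time.ZoneId;

    Instant instant = Instant.ofEpochMilli(1426349294842L);
    System.out.println(instant);                                 // 2015-03-14T16:08:14.842Z (UTC)
    System.out.println(instant.atZone(ZoneId.systemDefault()));  // the same moment in the local timezone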
However, under heavy usage, my experiments show that Instant.now().toEpochMilli() is approximately 1.5 times slower than System.currentTimeMillis().
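A rough way to check this on your own machine; the numbers will vary with the JVM and hardware, and this is not a rigorous benchmark:

    import java.time.Instant;

    long sink = 0;                                         // accumulate results so the JIT cannot discard the calls
    long start = System.currentTimeMillis();
    for (int i = 0; i < 10_000_000; i++) {
        sink += System.currentTimeMillis();
    }
    long millisApiDuration = System.currentTimeMillis() - start;

    start = System.currentTimeMillis();
    for (int i = 0; i < 10_000_000; i++) {
        sink += Instant.now().toEpochMilli();
    }
    long instantApiDuration = System.currentTimeMillis() - start;

    System.out.println(millisApiDuration + " ms vs " + instantApiDuration + " ms (sink: " + sink + ")");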
Precision vs Accuracy, currentTimeMillis() vs nanoTime()
System.currentTimeMillis() offers precision to the millisecond but its accuracy still depends on the underlying machine. This is why, in some cases, two subsequent calls can return the same number even if they are in fact more than 1 ms apart. The same is true for nanoTime(): precision is down to the nanosecond but accuracy doesn't follow; it still depends on the machine.
System.nanoTime(), however, which returns nanoseconds, is arguably better suited for measuring deltas, since it is intended specifically for measuring elapsed time rather than wall-clock time (and although reportedly a nanoTime() call can be slower than a currentTimeMillis() call, my experiments contradict this: they seem to take about the same amount of time). nanoTime()'s disadvantage is that it doesn't have a fixed reference point the way currentTimeMillis() has the Unix epoch: its reference may change with a system / machine restart, so of the two methods currentTimeMillis() is the one to use for time-keeping.
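A delta measured with nanoTime() looks just like the earlier millisecond version; doWork() below is a hypothetical stand-in for whatever is being measured:

    long start = System.nanoTime();                       // arbitrary origin, only differences are meaningful
    doWork();                                             // hypothetical method under measurement
    long elapsedNanos = System.nanoTime() - start;
    System.out.println("Duration: " + (elapsedNanos / 1_000_000) + " ms");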
UTC vs GMT. The same or different?
UTC stands for Coordinated Universal Time. GMT stands for Greenwich Mean Time. UTC is a universal time keeping standard in its own right and the successor of GMT. A time expressed in UTC is essentially the time on the whole planet. A time expressed in GMT is the time in the timezone of the Greenwich meridian. In current computer science problems (and probably most scientific ones) UTC and GMT, expressed as absolute values, happen to be identical, so they have been used interchangeably.
A detailed analysis reveals that the literature and the history here are a bit ambiguous. UTC essentially appeared in 1960, GMT having been the 'main thing' until then. Unlike GMT, which is based on solar time and originally calculated a second as a fraction of the time it takes the Earth to make a full rotation around its axis, UTC calculates a second as "the duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium 133 atom". UTC's second is far more precise than GMT's original second. In 1972 leap seconds were introduced to synchronize UTC time with solar time. These two turning points (the different definition of a second and the introduction of leap seconds) 'forced' GMT to become the same as UTC, based on what seems to have been a gradual, tacit convention. If you were to calculate true GMT today based on its original definition of 1 second = 1/86400 of a day, it would certainly give a different absolute value than what UTC gives us. From this point of view the name "GMT" seems deprecated, but it is kept around for backward compatibility, traditional timezone-based representations of time and sometimes legal reasons.