12 changes: 12 additions & 0 deletions kernel/timeout.c
@@ -379,3 +379,15 @@ uint64_t sys_clock_timeout_end_calc(k_timeout_t timeout)
 		return sys_clock_tick_get() + MAX(1, dt);
 	}
 }
+
+#ifdef CONFIG_ZTEST
+void z_impl_sys_clock_tick_set(uint64_t tick)
+{
+	curr_tick = tick;
+}
+
+void z_vrfy_sys_clock_tick_set(uint64_t tick)
+{
+	z_impl_sys_clock_tick_set(tick);
+}
+#endif
4 changes: 2 additions & 2 deletions lib/posix/clock.c
@@ -48,8 +48,8 @@ int z_impl_clock_gettime(clockid_t clock_id, struct timespec *ts)
 	}
 
 	uint64_t ticks = k_uptime_ticks();
-	uint64_t elapsed_secs = k_ticks_to_ms_floor64(ticks) / MSEC_PER_SEC;
-	uint64_t nremainder = ticks - k_ms_to_ticks_floor64(MSEC_PER_SEC * elapsed_secs);
+	uint64_t elapsed_secs = ticks / CONFIG_SYS_CLOCK_TICKS_PER_SEC;
+	uint64_t nremainder = ticks - elapsed_secs * CONFIG_SYS_CLOCK_TICKS_PER_SEC;
Contributor (@andyross):
This doesn't look right. The conversion function should be precise and should already reduce to a plain division on all systems where that's possible. The bug is in the second line, I think, where it feeds the result back into the reverse conversion; that's going to lose bits in configurations where the tick rate isn't divisible by milliseconds. Can you check whether fixing just the second line resolves the issue?

Also... I haven't pulled up the whole file to check, but I have a hard time believing that nremainder value is anything but a footgun in an implementation of clock_gettime(). There shouldn't be any state held, and if we want higher-precision values we should start there instead of back-converting from coarser units like seconds.

My notion of an idealized implementation here should be as simple as uint64_t ns = k_ticks_to_us_floor64(k_uptime_ticks()), and then just doing division/modulus ops to extract tv_nsec and tv_sec.

Member Author @cfriedt, Dec 17, 2022:
@andyross - with just the second line, we get the failure below, which is off by an order of magnitude (on qemu_x86_64). Perhaps that's an indicator that z_tmcvt() needs a testsuite (Edit: I added a minimal one under tests/unit/time_units).

START - test_clock_gettime_rollover
CONFIG_SYS_CLOCK_TICKS_PER_SEC: 100
rollover_s: 184467440737095516
t-1: {18446744073709551, 596000000}

To me, straight division to get elapsed_secs makes sense.

The ticks value must always be >= elapsed_secs * CONFIG_SYS_CLOCK_TICKS_PER_SEC, so it looks like the nremainder calculation is a (potentially) cheaper way to get ticks % CONFIG_SYS_CLOCK_TICKS_PER_SEC.

Directly below, we use k_ticks_to_ns_floor32() to convert nremainder to nanoseconds, which seems ok to me.

Can you explain your issue in a bit more detail? And actually, this is the sort of thing that we should probably have a test for, so I'll implement a test for the scenario you outline if you can provide some additional detail.

Contributor:
IMHO, the routines in time_units.h should include a set of k_ticks_to_secs_*() functions. The existing family covers milli, micro, and nano, but not whole seconds. I guess the thought was that conversion to seconds is simple enough that one should just write out the division, as is done here. But as soon as you toss in rates determined at run time or non-integral clock speeds, it would be nice not to make everyone write their own code for it.

Contributor:
> My notion of an idealized implementation here should be as simple as uint64_t ns = k_ticks_to_us_floor64(k_uptime_ticks()),

You meant _to_ns of course. This expression does overflow before a 64-bit cycle counter overflows, for clock rates < 1 GHz. It does take some 584 years to do this. So realistically, it ought to be good enough for any reasonable value of a timeout or time delta or timestamp.

However, @cfriedt's test looks at values around where the cycle counter overflows, which is large enough to trigger the overflow.


 	ts->tv_sec = (time_t) elapsed_secs;
 	/* For ns 32 bit conversion can be used since its smaller than 1sec. */
2 changes: 2 additions & 0 deletions subsys/testsuite/ztest/include/zephyr/ztest_test.h
@@ -21,6 +21,8 @@ extern "C" {
 __syscall void z_test_1cpu_start(void);
 __syscall void z_test_1cpu_stop(void);
 
+__syscall void sys_clock_tick_set(uint64_t tick);
+
 #ifdef __cplusplus
 }
 #endif
64 changes: 64 additions & 0 deletions tests/posix/common/src/clock.c
@@ -122,3 +122,67 @@ ZTEST(posix_apis, test_posix_realtime)
 	zassert_true(rts.tv_nsec >= tv.tv_usec * NSEC_PER_USEC,
 		     "gettimeofday didn't provide correct result");
 }
+
+static inline bool ts_gt(const struct timespec *a, const struct timespec *b)
+{
+	__ASSERT_NO_MSG(a->tv_nsec < NSEC_PER_SEC);
+	__ASSERT_NO_MSG(a->tv_nsec >= 0);
+	__ASSERT_NO_MSG(b->tv_nsec < NSEC_PER_SEC);
+	__ASSERT_NO_MSG(b->tv_nsec >= 0);
+
+	if (a->tv_sec > b->tv_sec) {
+		return true;
+	}
+
+	if (a->tv_sec < b->tv_sec) {
+		return false;
+	}
+
+	if (a->tv_nsec > b->tv_nsec) {
+		return true;
+	}
+
+	return false;
+}
+
+static inline void ts_print(const char *label, const struct timespec *ts)
+{
+	printk("%s: {%" PRIu64 ", %" PRIu64 "}\n", label, (uint64_t)ts->tv_sec,
+	       (uint64_t)ts->tv_nsec);
+}
+
+ZTEST(posix_apis, test_clock_gettime_rollover)
+{
+	uint64_t t;
+	struct timespec ts[3];
+	const uint64_t rollover_s = UINT64_MAX / CONFIG_SYS_CLOCK_TICKS_PER_SEC;
+
+	printk("CONFIG_SYS_CLOCK_TICKS_PER_SEC: %u\n", CONFIG_SYS_CLOCK_TICKS_PER_SEC);
+	printk("rollover_s: %" PRIu64 "\n", rollover_s);
+
+	t = UINT64_MAX - 1;
+	/* align to tick boundary */
+	k_sleep(K_TICKS(1));
+	sys_clock_tick_set(t);
+	zassert_ok(clock_gettime(CLOCK_MONOTONIC, &ts[0]));
+	ts_print("t-1", &ts[0]);
+	zassert_equal(rollover_s, ts[0].tv_sec);
+
+	t = UINT64_MAX;
+	/* align to tick boundary */
+	k_sleep(K_TICKS(1));
+	sys_clock_tick_set(t);
+	zassert_ok(clock_gettime(CLOCK_MONOTONIC, &ts[1]));
+	ts_print("t+0", &ts[1]);
+	zassert_equal(rollover_s, ts[1].tv_sec);
+	zassert_true(ts_gt(&ts[1], &ts[0]));
+
+	t = UINT64_MAX + 1;
+	/* align to tick boundary */
+	k_sleep(K_TICKS(1));
+	sys_clock_tick_set(t);
+	zassert_ok(clock_gettime(CLOCK_MONOTONIC, &ts[2]));
+	ts_print("t+1", &ts[2]);
+	zassert_equal(0, ts[2].tv_sec);
+	zassert_true(ts_gt(&ts[1], &ts[2]));
+}