Commit Graph

173 Commits

Author SHA1 Message Date
Andrew Boie
a4c9190649 kernel: fix oops policy for k_thread_abort()
Don't generate a Z_OOPS() if k_thread_abort() is called on a
thread that isn't running. Just return to the caller instead,
much like how k_thread_join() functions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-25 10:23:12 -07:00
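
To illustrate the new behavior, a minimal sketch (the thread entry, stack size and priority are illustrative, using the era's APIs):

    #include <zephyr.h>

    K_THREAD_STACK_DEFINE(work_stack, 1024);
    static struct k_thread work_thread;

    static void worker(void *p1, void *p2, void *p3)
    {
        /* does some work and returns, i.e. exits on its own */
    }

    void abort_demo(void)
    {
        k_tid_t tid = k_thread_create(&work_thread, work_stack,
                                      K_THREAD_STACK_SIZEOF(work_stack),
                                      worker, NULL, NULL, NULL,
                                      K_PRIO_PREEMPT(5), 0, K_NO_WAIT);

        k_sleep(K_MSEC(100));   /* the worker may already have exited */

        k_thread_abort(tid);    /* with this change: simply returns if the
                                 * thread is already gone, instead of
                                 * raising Z_OOPS()
                                 */
    }
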
Carles Cufi
4b37a8f3a4 Revert "global: Replace BUILD_ASSERT_MSG() with BUILD_ASSERT()"
This reverts commit 8739517107.

Pull Request #23437 was merged by mistake with an invalid manifest.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2020-03-19 18:45:13 +01:00
Oleg Zhurakivskyy
8739517107 global: Replace BUILD_ASSERT_MSG() with BUILD_ASSERT()
Replace all occurrences of BUILD_ASSERT_MSG() with BUILD_ASSERT()
as a result of merging BUILD_ASSERT() and BUILD_ASSERT_MSG().

Signed-off-by: Oleg Zhurakivskyy <oleg.zhurakivskyy@intel.com>
2020-03-19 15:47:53 +01:00
Andrew Boie
2dc2ecfb60 kernel: rename struct _k_object
This is a private type, internal to the kernel, and not directly
associated with any k_object_* APIs. It is the return value of
z_object_find(). Rename it to struct z_object.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Andrew Boie
322816eada kernel: add k_thread_join()
Callers will go to sleep until the thread exits, either normally
or by crashing out.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-13 08:42:43 -04:00
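
A minimal usage sketch of the new API (K_FOREVER is used so the snippet is valid regardless of the exact timeout type in this era; the thread name is illustrative):

    extern struct k_thread work_thread;     /* a thread spawned elsewhere */

    void wait_for_worker(void)
    {
        int ret = k_thread_join(&work_thread, K_FOREVER);

        if (ret == 0) {
            /* the thread has fully exited (normally or by crashing out);
             * its stack and thread struct can now be reused safely
             */
        }
    }
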
Andrew Boie
6cf496f324 kernel: use sched lock for k_thread_suspend/resume
This logic should be using the sched_lock and not its own
separate lock for these two functions.

Some simplifications were made; z_thread_single_resume and
z_thread_single_suspend were only used in one place, and there was
some redundant logic for whether to reschedule in the suspend case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-10 09:57:58 -04:00
Andrew Boie
896e32b414 kernel: remove problematic pend() assertion
This assertion, if built in, allows user threads to crash
the kernel in a critical section by passing a negative timeout
value, creating a DoS attack vector.

Remove this assertion; immediately below it there's a check
that just resets the timeout to 0 anyway.

Fixes: #22999

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-21 08:57:07 -08:00
Andy Ross
eefd3daa81 kernel/smp: arch/x86_64: Address race with CPU migration
Use of the _current_cpu pointer cannot be done safely in a preemptible
context.  If a thread is preempted and migrates to another CPU, the
old CPU record will be wrong.

Add a validation assert to the expression that catches incorrect
usages, and fix up the spots where it was wrong (most important being
a few uses of _current outside of locks, and the arch_is_in_isr()
implementation).

Note that the resulting _current expression now requires locking and
is going to be somewhat slower.  Longer term it's going to be better
to augment the arch API to allow SMP architectures to implement a
faster "get current thread pointer" action than this default.

Note also that this change means that "_current" is no longer
expressible as an lvalue (long ago, it was just a static variable), so
the places where it gets assigned now assign to _current_cpu->current
instead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-08 08:51:04 -05:00
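
A sketch of the access pattern the new assertion enforces: _current_cpu may only be dereferenced while the thread cannot migrate, e.g. with interrupts masked (kernel-internal names as described above; the exact locking used inside the kernel may differ):

    static struct k_thread *local_current_thread(void)
    {
        unsigned int key = arch_irq_lock(); /* no preemption/migration while masked */
        struct k_thread *running = _current_cpu->current;

        arch_irq_unlock(key);
        return running;                     /* may be stale once unmasked */
    }
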
Andy Ross
5737b5c843 kernel/sched: Re-add IPI calls on k_wakeup() and k_thread_priority_set()
These got dropped by an earlier patch, but are required on SMP systems
to synchronously notify other CPUs of changed scheduler state.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-04 17:50:11 -05:00
Andy Ross
c44d566aee kernel/sched: Re-fix SMP wait-for-switch on interrupt exit
This got clobbered by commit adac4cbafa in what I think was a rebase
mistake.  Without it, on SMP systems it's possible to select a new
_current thread and try to return into it before another CPU has
actually finished switching away from it.

Interestingly: the frequency with which this bug got caught once it
was reintroduced was much, much higher than it was when it was fixed
the first time due to the instruction pointer poisoning introduced in
the interim.  Incompletely saved threads now have deliberately broken
state when assertions are enabled and will panic synchronously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-04 17:50:11 -05:00
Andy Ross
96ccc46e03 kernel/sched: Put k_thread_start() under a single lock
Similar to the suspend refactoring earlier, this really needs to be
done in an atomic block.  There were two confirmable races here,
though it's not completely clear either was being hit in practice:

1. The bit operations in z_mark_thread_as_started() aren't atomic, so
   they need to be protected.

2. The intermediate state in z_ready_thread() could result in a dead
   or suspended thread being added to the ready queue if another
   context tried a simultaneous abort or suspend.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-03 09:31:56 -05:00
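
A rough sketch of the resulting single-lock shape (assuming the scheduler's internal sched_spinlock and helper names; not the literal patch):

    void z_impl_k_thread_start(struct k_thread *thread)
    {
        k_spinlock_key_t key = k_spin_lock(&sched_spinlock);

        z_mark_thread_as_started(thread);   /* bit ops now under the lock */
        z_ready_thread(thread);             /* no visible intermediate state */
        z_reschedule(&sched_spinlock, key);
    }
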
Andy Ross
ed6b4fb21c kernel/sched: Properly synchronize pend()
Kernel wait_q's and the thread pended_on backpointer are scheduler
state and need to be modified under the scheduler lock.  There was one
spot in pend() where they were not.

Also unpack z_remove_thread_from_ready_q() into an unsynchronized
utility so that it can be called by this process in a single lock
block.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-03 09:31:56 -05:00
Daniel Leung
adac4cbafa sched: smp: fix thread marked dead but still running
Under SMP, when a thread is marked aborting, this thread may still
be running on another CPU. However, if there is only one thread
available to run, this thread may be selected to run again due to
next_up() not checking for the aborting state. Moreover, when
there is no IPI to signal to other CPUs that k_thread_abort() has been
called, the k_thread_abort() target thread is only marked dead after a
new thread is selected to run. This causes the original thread calling
k_thread_abort() to mistakenly conclude that the target thread is no
longer running and to return.

Note that, with working IPI, z_sched_ipi() is called as an ISR
to mark the target thread dead. A new thread is then selected to
run, so that the target thread would not be selected due to it
being dead.

This moves the code that marks a thread dead into next_up(), where
the next best thread is selected and the current thread is swapped
out. z_sched_ipi() now becomes an empty function, and
calls to it are removed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-01-31 11:46:35 -05:00
Andy Ross
3235451880 kernel/swap: Add SMP "wait for switch" synchronization
On SMP, there is an inherent race when swapping: the old thread adds
itself back to the run queue before calling into the arch layer to do
the context switch.  The former is properly synchronized under the
scheduler lock, and the latter operates with interrupts locally
disabled.  But until somewhere in the middle of arch_switch(), the old
thread (that is in the run queue!) does not have complete saved state
that can be restored.

So it's possible for another CPU to grab a thread before it is saved
and try to restore its unsaved register contents (which are garbage --
typically whatever state it had at the last interrupt).

Fix this by leveraging the "swapped_from" pointer already passed to
arch_switch() as a synchronization primitive.  When the switch
implementation writes the new handle value, we know the switch is
complete.  Then we can wait for that in z_swap() and at interrupt
exit.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
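
A simplified sketch of the synchronization idea (the switch_handle field follows the arch_switch() contract described above; the helper name is illustrative):

    static inline void wait_for_switch(struct k_thread *thread)
    {
        /* arch_switch() publishes the outgoing thread's handle as its last
         * step; until then the thread's saved state is incomplete
         */
        while (thread->switch_handle == NULL) {
            /* spin: another CPU is still switching away from it */
        }
    }
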
Andy Ross
e06ba702d5 kernel/sched: Address thread abort termination delay issue on SMP
It's possible for a thread to abort itself simultaneously with an
external abort from another thread.  In fact in our test suite this is
a common thing, as ztest will abort its own spawned threads at the end
of a test, as they tend to be exiting on their own.

When that happens, the thread marks itself DEAD and does all its
scheduler bookkeeping, but it is STILL RUNNING on its own stack until
it makes its way to its final swap.  The external context would see
that "dead" metadata and return from k_thread_abort(), allowing the
next test to reuse and spawn the same thread struct while the old
context was still running.  Obviously that's bad.

Unfortunately, this is impossible to address completely without
modifying every SMP architecture to add an API-visible hook to every
swap that signals completion.  In practice the best we can do is add a
delay.  But note the optimization: almost always, the scheduler IPI
catches the running thread and kills it from interrupt context
(i.e. on a different stack).  When that happens, we know that the
interrupted thread will never be resumed (because it's dead) and can
elide the delay.  We only pay the cost when we actually detect a race.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
Andy Ross
60247ca149 kernel/sched: Correct IPI usage
These two spots were calling z_sched_ipi() (the IPI handler run under
the ISR, which is a noop here because obviously the current thread
isn't DEAD) and not arch_sched_ipi() (which triggers an IPI on other
CPUs to inform them of scheduling state changes), presumably because
of a typo.

Apparently we don't have tests for k_wakeup() and
k_thread_priority_set() that are sensitive to latency in SMP
contexts...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
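
Schematically, the distinction (an illustrative helper, not the patch itself):

    /* sender side: run on the CPU that just changed scheduler state,
     * e.g. from k_wakeup() or k_thread_priority_set()
     */
    static void notify_other_cpus(void)
    {
    #ifdef CONFIG_SCHED_IPI_SUPPORTED
        arch_sched_ipi();   /* trigger the IPI on the other CPUs */
    #endif
        /* z_sched_ipi() is the handler that runs on the receiving CPUs in
         * ISR context; calling it locally is a no-op because the local
         * _current thread is not DEAD
         */
    }
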
Anas Nashif
0ad67650f2 tracing: better positioning of tracing points
Improve the positioning of tracing calls. Avoid multiple calls and
missing events caused by complex logic. Trace the event where things
really happen.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-09 11:21:19 -05:00
Andy Ross
8bdabcc46b kernel/sched: Move thread suspend and abort under the scheduler lock
Historically, these routines were placed in thread.c and would use the
scheduler via exported, synchronized functions (e.g. "remove from
ready queue").  But those steps were very fine grained, and there were
races where the thread could be seen by other contexts (in particular
under SMP) in an intermediate state.  It's not completely clear to me
that any of these were fatal bugs, but it's very hard to prove they
weren't.

At best, this is fragile.  Move the z_thread_single_suspend/abort()
functions into the scheduler and do the scheduler logic in a single
critical section.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-08 14:21:10 +01:00
Anas Nashif
9e3e7f6dda kernel: use 'thread' for thread variable consistently
We have been using thread, th and t for thread variables, making the code
less readable, especially when we use t for timeouts and other time
related variables. Just use thread where possible and keep things
consistent.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-12-21 19:57:57 -05:00
Danny Oerndrup
c9d78401cc spinlock: Make SPIN_VALIDATE a Kconfig option.
SPIN_VALIDATE is, as before, enabled by default when there are fewer
than 4 CPUs and either no flash or a flash size greater than 32 kB.

Small targets, which need to have asserts enabled, can choose whether
to enable spinlock validation, and thereby decide whether the added
overhead is acceptable.

Signed-off-by: Danny Oerndrup <daor@demant.com>
2019-12-20 19:51:16 -05:00
Peter Bigot
8162e586e3 kernel: sched: assert when k_sleep invoked from interrupt context
Fix a gap where k_sleep(K_FOREVER) could execute a code path that
would not verify that the call was not from interrupt context.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2019-12-13 15:47:43 -05:00
Andy Ross
11a050b2c3 kernel/sched: Fix edge case in MetaIRQ preemption of cooperative threads
When a MetaIRQ preempts a cooperative thread, that thread would be
added back to the generic run queue.  When the MetaIRQ is done, the
highest priority thread will be selected to run, which may obviously
be a cooperative thread of a higher priority than the one that was
preempted.

But that's wrong, because the original thread was promised that it
would NOT be preempted until it reached a scheduling point on its own
(that's the whole point of a cooperative thread, of course).

We need to track the thread that got preempted (one per CPU) and
return to it instead of whatever else the scheduler might have found.

Fixes #20255

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-11-15 13:09:02 +01:00
Krzysztof Chruscinski
f831929cb5 kernel: Add assert to detect negative timeouts
Add an assert when a negative value (except K_FOREVER) is passed as a
timeout. Also correct negative timeouts to 0.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2019-11-08 16:03:05 -08:00
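
A sketch of the combined check and correction, assuming the millisecond-based s32_t timeouts of this era (the helper name is illustrative):

    static s32_t sanitize_timeout(s32_t timeout_ms)
    {
        __ASSERT(timeout_ms >= 0 || timeout_ms == K_FOREVER,
                 "negative timeout passed");

        if (timeout_ms != K_FOREVER && timeout_ms < 0) {
            timeout_ms = 0;     /* correct negative timeouts to zero */
        }
        return timeout_ms;
    }
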
Andrew Boie
d2b8922fa3 kernel: allow threads to sleep forever
Previously, passing K_FOREVER to k_sleep() would return
immediately.

Forever is a long time. Even if woken up at some point,
we still had forever to sleep, so return K_FOREVER in this
case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-09 00:36:34 +01:00
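
Usage sketch (return type per the millisecond-based API of this era):

    void sleep_until_woken(void)
    {
        s32_t left = k_sleep(K_FOREVER);    /* blocks until k_wakeup() */

        /* with this change 'left' is K_FOREVER: even after waking up,
         * there was still "forever" left to sleep
         */
        ARG_UNUSED(left);
    }
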
Andy Ross
8892406c1d kernel/sys_clock.h: Deprecate and convert uses of old conversions
Mark the old time conversion APIs deprecated, leave compatibility
macros in place, and replace all usage with the new API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-11-08 11:08:58 +01:00
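
A couple of examples of the replacement conversion helpers (a small subset; the direction, rounding mode and result width are encoded in each name):

    void conversion_examples(void)
    {
        u32_t ticks = k_ms_to_ticks_ceil32(100);     /* ms -> ticks, round up */
        u64_t usecs = k_ticks_to_us_floor64(ticks);  /* ticks -> us, round down */

        ARG_UNUSED(usecs);
    }
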
Andrew Boie
4f77c2ad53 kernel: rename z_arch_ to arch_
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.

This allows our documentation generation to create
online documentation for this set of interfaces,
and this set of interfaces is worth treating in a
more formal way anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 15:21:46 -08:00
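
One representative rename under the new convention (sketch; the old spelling is shown in the comment):

    void rename_example(void)
    {
        /* was: key = z_arch_irq_lock(); ... z_arch_irq_unlock(key); */
        unsigned int key = arch_irq_lock();

        arch_irq_unlock(key);
    }
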
Wayne Ren
b1fbe85156 kernel: need to release spinlock before busy_wait
The spinlock needs to be released before busy_wait, or other cores
cannot acquire the spinlock while the holder is busy waiting.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-11-07 16:54:56 -05:00
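
The corrected ordering, schematically (the lock, key and delay names are illustrative):

    static struct k_spinlock lock;

    void delay_without_blocking_others(k_spinlock_key_t key, u32_t delay_us)
    {
        k_spin_unlock(&lock, key);  /* release first: other cores can take
                                     * the lock while this core spins
                                     */
        k_busy_wait(delay_us);
    }
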
Andrew Boie
8f0bb6afe6 tracing: simplify idle thread detection
We now define z_is_idle_thread_object() in ksched.h,
and the repeated definitions of functions that did the same thing are
changed to just use the common definition.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
fe031611fd kernel: rename main/idle thread/stacks
The main and idle threads, and their associated stacks,
were being referenced in various parts of the kernel
with no central definition. Expose these in kernel_internal.h
and namespace them with z_ appropriately.

The main and idle threads were being defined statically,
with another variable exposed to contain their pointer
value. This wastes a bit of memory, and the pointers aren't accessible
to user threads anyway, so just expose the actual thread objects.

The redundant MAIN_STACK_SIZE and IDLE_STACK_SIZE defines in init.c
are removed; just use the Kconfig options they derive from.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
e1ec59f9c2 kernel: renamespace z_is_in_isr()
This is part of the core kernel -> architecture interface
and is appropriately renamed z_arch_is_in_isr().

References from test cases changed to k_is_in_isr().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Anas Nashif
4abbd54cd5 tracing: remove useless ifdefing for CONFIG_TRACING
Tracing functions are noop if CONFIG_TRACING is disabled.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-09-30 10:49:37 -04:00
Andy Ross
d82f76a0bb kernel/sched: Don't make an IPI if we don't need it
If an architecture declares support for IPI, we still want to use it
only when running in SMP mode.

(This also fixes a build failure on ARC, which declares
CONFIG_SCHED_IPI_SUPPORTED but doesn't actually implement
z_arch_sched_ipi() yet).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
Andy Ross
11bd67db53 kernel/idle: Use normal idle in SMP when IPI is available
Now that we have a working IPI framework, there's no reason for the
default spin loop for the SMP idle thread.  Just use the default
platform idle and send an IPI when a new thread is readied.

Long term, this can be optimized if necessary (e.g. only send the IPI
to idling CPUs, or check priorities, etc...), but for a 2-cpu system
this is a very reasonable default.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
Andy Ross
cb3964f04f kernel/sched: Reset time slice on swap in SMP
In uniprocessor mode, the kernel knows when a context switch "is
coming" because of the cache optimization and can use that to do
things like update time slice state.  But on SMP the scheduler state
may be updated on the other CPU at any time, so we don't know that a
switch is going to happen until the last minute.

Expose reset_time_slice() as a public function and call it when needed
out of z_swap().

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
Andy Ross
d442927667 kernel/sched: Add missing SMP thread abort case
The loop in thread abort on SMP where we wait for the results on an
IPI correctly handled the case where a thread running on another CPU
gets its interrupt and self-aborts, but it missed the case where the
other thread pends before receiving the interrupt.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
Andy Ross
b0158cc81f kernel/sched: Fix reschedule points in SMP
There were two related bugs when in SMP mode:

1. Underneath z_reschedule(), the code was inexplicably checking the
   swap_ok flag on the current CPU to see if it was OK to preempt the
   current thread, but reschedule is the DEFINITION of a schedule
   point and we always want to swap, even if the current thread is
   non-preemptible.

2. With similar symptoms: in k_yield() a previous fix corrected the
   queue handling for SMP, but it missed the case where a thread of
   the SAME priority as _current was on the queue and would fail to
   swap.  Yielding must always add the current thread to the back of
   the current priority.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
Andy Ross
075c94f6e2 kernel: Port remaining syscalls to new API
These calls are not accessible in CI test, nor do they get built on
common platforms (in at least one case I found a typo which proved the
code was truly unused).  These changes are blind, so live in a
separate commit.  But the nature of the port is mechanical, all other
syscalls in the system work fine, and any errors should be easily
corrected.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
Andy Ross
6564974bae userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words.  So
passing wider values requires splitting them into two registers at
call time.  This gets even more complicated for values (e.g.
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.

Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths.  So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.

Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types.  So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*().  The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function.  It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value; that code gets automatically generated.

This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs.  Future commits will port the less testable code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
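
A sketch of the resulting handler shape for a simple syscall, using k_usleep() as the example (the generated z_mrsh_*() unmarshalling code is pulled in by the trailing include):

    static inline s32_t z_vrfy_k_usleep(s32_t us)
    {
        /* typesafe verification step: same signature as the impl function;
         * a plain integer argument needs no validation
         */
        return z_impl_k_usleep(us);
    }
    #include <syscalls/k_usleep_mrsh.c>     /* generated unmarshalling code */
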
Andy Ross
6f13980fc7 kernel/mutex: Fix locking to be SMP-safe
The mutex locking was written to use k_sched_lock(), which doesn't
work as a synchronization primitive if there is another CPU running
(it prevents the current CPU from preempting the thread, it says
nothing about what the others are doing).

Use the pre-existing spinlock for all synchronization.  One wrinkle is
that the priority code needed to call z_thread_priority_set(),
which is a rescheduling call that cannot be called with a lock held.
So that got split out with a low level utility that can update the
schedule state but allow the caller to defer yielding until later.

Fixes #17584

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-22 17:58:16 -04:00
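
Schematically, why the spinlock is the right primitive here (an illustrative snippet, not the mutex code itself):

    static struct k_spinlock lock;

    void update_shared_state(void)
    {
        /* excludes every other CPU, not just local preemption */
        k_spinlock_key_t key = k_spin_lock(&lock);

        /* ... mutate state that other CPUs may touch concurrently ... */

        k_spin_unlock(&lock, key);
    }

    /* by contrast, k_sched_lock() only prevents *this* CPU from preempting
     * the current thread; threads on other CPUs keep running
     */
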
Yasushi SHOJI
20d072465d kernel: sched: Do not force preempt when k_sched_unlock()
The scheduler lock is a nestable lock.  Unlocking a nested,
still-held lock shouldn't preempt the current thread.

	k_sched_lock();
	k_sched_lock();
	k_sched_unlock();  /* <--- this shouldn't be a scheduling point */
	k_sched_unlock();  /* <--- this is a scheduling point */

This commit changes the preempt_ok argument from 1 to 0.  This lets
should_preempt() check whether it should preempt at that point or not.

This fixes #17869.

Signed-off-by: Yasushi SHOJI <y-shoji@ispace-inc.com>
2019-08-06 10:19:50 +02:00
Wentong Wu
2463ded4c8 kernel: timeout: do not activate time slicing if idle thread is ready
Zero slice_ticks when time slicing cannot be used, so that next_timeout
will ignore the slice_ticks of _current_cpu and the system can stay in
a low-power state for a longer time.

Fixes: #17368.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2019-07-24 14:02:23 -07:00
Andy Ross
4d8e1f223b kernel/sched: Fix k_thread_priority_set() on SMP
On SMP systems, currently scheduled threads are not in the run queue
and can't be unconditionally removed/added.

Fixes #17170

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-07-12 14:09:16 -07:00
Andy Ross
ed7d86310f kernel/sched: Interpret zero timeslice time correctly
The scheduler API has always allowed setting a zero slice size as a
way to disable timeslicing.  But the workaround introduced for
CONFIG_SWAP_NONATOMIC forgot that convention, and was calling
reset_time_slice() with that zero value (i.e. requesting an immediate
interrupt) in circumstances where z_swap() had been interrupted
nonatomically.

In practice, this never happened.  And if it did, it was a single
spurious no-op interrupt that no one cared about.  Until it did,
anyway...

Now that ticks on nRF devices are at full 32 kHz speed, we can get
into a situation where the rapidly triggering timeslice interrupts are
interrupting z_swap() calls, and the process feeds back on itself and
becomes self-sustaining.

Put that test into the time slice code itself to prevent this kind of
mistake in the future.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-07-02 22:52:29 -04:00
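
The convention referred to above, for reference (usage sketch; slice values in milliseconds):

    void timeslice_examples(void)
    {
        k_sched_time_slice_set(0, 0);   /* a zero slice disables timeslicing */

        k_sched_time_slice_set(10, K_PRIO_PREEMPT(0)); /* 10 ms slices for
                                                        * preemptible threads at
                                                        * or below this priority
                                                        * ceiling
                                                        */
    }
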
Anas Nashif
68c389c1f8 include: move system timer headers to include/drivers/timer/
Move internal and architecture-specific headers from include/drivers
to a timer subfolder:

   include/drivers/timer

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-25 15:27:00 -04:00
Andrew Boie
3f974243be kernel: allow k_sleep(K_FOREVER)
Threads that are sleeping forever may be woken up with k_wakeup();
this shouldn't trigger assertion failures.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-06-18 09:08:01 -04:00
Andy Ross
312b43f145 kernel/sched: Don't reschedule inside a nested lock
The internal "reschedule" API has always understood the idea that it
might run in an ISR context where it can't swap.  But it has always
been happy to swap away when in thread mode, even when the environment
contains an outer lock that would NOT be expecting to swap!  As it
happened, because of the way irq locks are implemented (they store flag
state that can be restored without context), this would "work" even though it
was completely breaking the synchronization promise made by the outer
lock.

But now, with spinlocks, the error gets detected (albeit in a clumsy
way) in debug builds.  The unexpected swap triggers SPIN_VALIDATE
failures in later threads (this gets reported as a "recursive" lock,
but what actually happened is that another thread got to run before
the lock was released and tried to grab the same lock).

Fix this so that swap can only be called in a situation where the irq
lock key it was passed would have the effect of unmasking interrupts.
Note that this is a real behavioral change that affects when swaps
occur: it's not impossible that there is code out there that actually
relies on this "lock breaking reschedule" for correct behavior.  But
our previous implementation was irredeemably broken and I don't know
how to address that.

Fixes #16273

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-06-03 12:03:48 -07:00
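
A sketch of the new gate, written with the later arch_* spellings for clarity (the helper name is illustrative):

    /* only swap when releasing 'key' would actually unmask interrupts,
     * i.e. no outer irq_lock() is still held and we are not in an ISR
     */
    static bool swap_allowed(unsigned int key)
    {
        return arch_irq_unlocked(key) && !arch_is_in_isr();
    }
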
Charles E. Youse
a567831bed kernel/sched.c: add k_usleep() API function
Add a k_usleep() API, analogous to k_sleep(), except that the argument
is in microseconds rather than milliseconds.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-05-21 23:09:16 -04:00
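
Usage sketch:

    void short_pause(void)
    {
        k_usleep(500);  /* sleep ~500 microseconds; actual resolution is
                         * bounded by the system tick rate
                         */
    }
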
Charles E. Youse
b186303cb6 kernel/sched.c: refactor k_sleep() implementation for varied timescales
Current z_impl_k_sleep() does double duty, converting between units
specified by the API and ticks, as well as implementing the sleeping
mechanism itself. This patch separates the API from the mechanism,
so that sleeps need not be tied to millisecond timescales.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-05-21 23:09:16 -04:00
Andrew Boie
ae0d1b2b79 kernel: sched: move stack sentinel check earlier
Checking the stack sentinel may abort the current thread, so make
this check before we determine which thread to run next.

Fixes: #15037

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-29 22:13:40 -04:00
Patrik Flykt
21358baa72 all: Update unsigned 'U' suffix due to multiplication
As the multiplication rule is updated, new unsigned suffixes
are added in the code.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
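
For example (an illustrative macro; the constant name is hypothetical):

    /* unsigned operands keep the multiplication unsigned, per the rule */
    #define SAMPLE_PERIOD_US (10U * USEC_PER_MSEC)
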