During the early boot process, in prepare_multithreading(), the kernel
structures and scheduler are not ready yet. In order to obtain entropy
for early tasks such as stack randomization, optionally use, when present,
the ISR-specific function that some drivers provide.
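A minimal sketch of the intended pattern, assuming the entropy driver API
of this era (entropy_get_entropy_isr(), CONFIG_ENTROPY_NAME) and an
illustrative weak fallback when no ISR-safe path exists:

  #include <device.h>
  #include <entropy.h>
  #include <kernel.h>

  /* Sketch only: grab an early-boot random word via the ISR-safe driver
   * call when the driver provides one.
   */
  static u32_t early_random_word(void)
  {
      struct device *dev = device_get_binding(CONFIG_ENTROPY_NAME);
      u32_t word = 0U;

      if (dev != NULL &&
          entropy_get_entropy_isr(dev, (u8_t *)&word, sizeof(word), 0) ==
          sizeof(word)) {
          return word;
      }

      /* No ISR-safe entropy available: fall back to a weak source. */
      return k_cycle_get_32();
  }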
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
We generalize querying the entropy driver directly with
a new internal API, which is now used by CONFIG_STACK_RANDOM
and stack canary initialization.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Some sys_rand32_get() implementations will use shared state and protect
that using some synchronization primitive such as a mutex or a
semaphore. It's too early in the boot process to use any of them,
which causes some issues.
Use the entropy API directly to set up the stack canaries.
This doesn't completely solve the problem, as some drivers will use the
same synchronization primitives anyway. Some drivers (e.g. the NRF5
entropy driver) provide an API to be used by ISRs that might be
suitable here, but not all drivers do that.
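A rough sketch of the canary path being described (the helper and its
caller are illustrative, not the exact kernel code):

  #include <device.h>
  #include <entropy.h>
  #include <stdint.h>

  extern void *__stack_chk_guard;   /* canary read by -fstack-protector code */

  static void canary_seed(struct device *entropy_dev)
  {
      uintptr_t canary = 0;

      /* A blocking driver call is tolerable only because no other thread
       * exists yet; drivers that lock internally are the remaining gap
       * noted above.
       */
      (void)entropy_get_entropy(entropy_dev, (u8_t *)&canary, sizeof(canary));
      __stack_chk_guard = (void *)canary;
  }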
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
This was in prepare_multithreading(), which was moved
to run after driver initialization rather than before it.
The function now really just prepares system threads.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
STACK_ALIGN has somewhat different semantics across our arches,
particularly ARC.
These checks are unnecessary; _new_thread() is required
to properly align stack sizes anyway.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Very simple implementation of deadline scheduling. Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.
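An illustrative use of the new call (assumes CONFIG_SCHED_DEADLINE;
do_work() is a placeholder). Two threads at equal priority are ordered by
whichever declared the earlier deadline:

  #include <kernel.h>

  extern void do_work(void);    /* placeholder workload */

  void worker(void *p1, void *p2, void *p3)
  {
      while (1) {
          /* Deadline is a delta from "now", in hardware cycles. */
          k_thread_deadline_set(k_current_get(), 2000);
          do_work();
          k_yield();    /* equal-priority peers re-sorted by deadline */
      }
  }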
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This patch adds a set of priorities at the (numerically) lowest end of
the range which have "meta-irq" behavior. Runnable threads at these
priorities will always be scheduled before threads at lower
priorities, EVEN IF those threads are otherwise cooperative and/or
have taken a scheduler lock.
Making such a thread runnable in any way thus has the effect of
"interrupting" the current task and running the meta-irq thread
synchronously, like an exception or system call. The intent is to use
these priorities to implement "interrupt bottom half" or "tasklet"
behavior, allowing driver subsystems to return from interrupt context
but be guaranteed that user code will not be executed (on the current
CPU) until the remaining work is finished.
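A sketch of that bottom-half usage (finish_deferred_work() and my_isr()
are placeholders; assumes CONFIG_NUM_METAIRQ_PRIORITIES >= 1, which makes
the numerically lowest priority a meta-irq level):

  #include <kernel.h>

  extern void finish_deferred_work(void);   /* placeholder bottom-half body */

  K_SEM_DEFINE(bh_sem, 0, 1);

  static void bottom_half(void *p1, void *p2, void *p3)
  {
      while (1) {
          k_sem_take(&bh_sem, K_FOREVER);
          /* Runs before any other thread, even cooperative ones. */
          finish_deferred_work();
      }
  }

  /* ISR: just signal and return; the heavy lifting happens in bottom_half(). */
  void my_isr(void *arg)
  {
      k_sem_give(&bh_sem);
  }

  K_THREAD_DEFINE(bh_tid, 1024, bottom_half, NULL, NULL, NULL,
                  K_HIGHEST_THREAD_PRIORITY, 0, 0);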
As this breaks the "promise" of non-preemptibility granted by the
current API for cooperative threads, this tool probably shouldn't be
used from application code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The scheduler rewrite added a regression in uniprocessor mode where
cooperative threads would be unexpectedly preempted, because nothing
was checking the preemption status of _current at the point where the
next-thread cache pointer was being updated.
Note that update_cache() needs a little more context: spots like
k_yield() that leave _current runnable need to be able to tell it that
"yes, preemption is OK here even though the thread is cooperative".
So it has a "preempt_ok" argument now.
Interestingly this didn't get caught because we don't test that. We
have lots and lots of tests of the converse cases (i.e. making sure
that threads get preempted when we expect them to), but nothing that
explicitly tries to jump in front of a cooperative thread.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
prepare_multithreading() was done very early as it had a call
to initialize the interrupt subsystem. This was causing problems
with stack pointer randomization, as HW-based entropy drivers
had not yet been initialized.
Move the call to initialize the interrupt system out of
prepare_multithreading(), which now really does just prepare
the system to start threads. This is now done after the PRE_KERNEL
phases.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Leading/trailing whitespace in prompts requires ugly workarounds in
genrest.py, as e.g. *prompt * is invalid RST. strip() all prompts in
Kconfiglib and get rid of the genrest.py workarounds. Add a warning too.
The Kconfiglib update has some unrelated cleanups and fixes (that won't
affect Zephyr).
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
These assertions snuck through in crossed pull requests. There's a
specific API for _wait_q_t now, you can't hit the list directly
(because it might be a tree).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one. Behavior as to thread
selection does not change. New features:
+ Unifies SMP and uniprocessor selection code (with the sole
exception of the "cache" trick not being possible in SMP).
+ The old static multi-queue implementation is gone and has been
replaced with a build-time choice of either a "dumb" list
implementation (faster and significantly smaller for apps with only
a few threads) or a balanced tree queue which scales well to
arbitrary numbers of threads and priority levels. This is
controlled via the CONFIG_SCHED_DUMB kconfig variable.
+ The balanced tree implementation is usable symmetrically for the
wait_q abstraction, fixing a scalability glitch Zephyr had when many
threads were waiting on a single object. This can be selected via
CONFIG_WAITQ_FAST.
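The "dumb" variant is conceptually just a linear scan; a simplified sketch
of the idea (helper and field names are approximations, not the actual
sched.c code):

  /* Pick the best runnable thread from a single dlist of ready threads. */
  static struct k_thread *next_up_dumb(sys_dlist_t *runq)
  {
      struct k_thread *best = NULL;
      struct k_thread *t;

      SYS_DLIST_FOR_EACH_CONTAINER(runq, t, base.qnode_dlist) {
          if (best == NULL || _is_t1_higher_prio_than_t2(t, best)) {
              best = t;
          }
      }

      return best;
  }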
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The SMP testing missed the case where _Swap() decides to return
into _current. Obviously there is no valid switch handle for the
running thread into which we can restore, and everything blows up.
(What happened is that the new scheduler code opened up a spot where
k_thread_priority_set() does a _reschedule() unconditionally and
doesn't check to see whether or not it's needed, as the old code did).
But that isn't incorrect! It's entirely possible that _Swap() may
find that no thread is runnable except _current (due, for example, to
another CPU racing the other thread you expected off to sleep or
something). Don't blow up; check for this case and return as a no-op.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Rescheduling was called unconditionally at the end of the k_mem_slab_free()
call. It is necessary only when a thread is pending in the wait queue.
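A rough sketch of the resulting shape (the helpers are internal scheduler
APIs of this era; the real function differs in detail):

  void k_mem_slab_free(struct k_mem_slab *slab, void **mem)
  {
      unsigned int key = irq_lock();
      struct k_thread *waiter = _unpend_first_thread(&slab->wait_q);

      if (waiter != NULL) {
          /* Hand the freed block straight to the waiter, and only then
           * consider a context switch.
           */
          _set_thread_return_value_with_data(waiter, 0, *mem);
          _ready_thread(waiter);
          _reschedule(key);
          return;
      }

      /* No waiter: push the block back on the free list, no reschedule. */
      **(char ***)mem = slab->free_list;
      slab->free_list = *(char **)mem;
      irq_unlock(key);
  }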
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs. Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages. Now replacement of wait_q with a different
data structure is much cleaner.
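A sketch of what the wrapper looks like (simplified; names approximate the
new internal API):

  #include <misc/dlist.h>

  /* Opaque wait queue: callers go through the wrapper, never the dlist. */
  typedef struct {
      sys_dlist_t waitq;
  } _wait_q_t;

  #define _WAIT_Q_INIT(wq) { SYS_DLIST_STATIC_INIT(&(wq)->waitq) }

  static inline void _waitq_init(_wait_q_t *w)
  {
      sys_dlist_init(&w->waitq);
  }

  static inline struct k_thread *_waitq_head(_wait_q_t *w)
  {
      return (struct k_thread *)sys_dlist_peek_head(&w->waitq);
  }

  #define _WAIT_Q_FOR_EACH(wq, thread_ptr) \
      SYS_DLIST_FOR_EACH_CONTAINER(&(wq)->waitq, thread_ptr, \
                                   base.qnode_dlist)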
Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro. While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Refactoring. Mempool wants to unpend all threads at once. It's
cleaner to do this in the scheduler instead of the IPC code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
k_poll is now accessible from user mode. A memory allocation takes place
from the caller's resource pool to copy the provided poll_events
array; this array can be large enough that allocating it on the stack
is undesirable.
k_poll_signal structures are now proper kernel objects. Two APIs have been added,
one to reset the signaled state and one to check the current signaled
state and result value.
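Illustrative user-mode usage of the two new calls (assumes the thread was
granted access to the signal object and has a resource pool assigned):

  #include <kernel.h>

  static struct k_poll_signal sig;

  void wait_for_signal(void)
  {
      struct k_poll_event evt;
      unsigned int signaled;
      int result;

      k_poll_signal_init(&sig);
      k_poll_event_init(&evt, K_POLL_TYPE_SIGNAL,
                        K_POLL_MODE_NOTIFY_ONLY, &sig);

      if (k_poll(&evt, 1, K_FOREVER) == 0) {
          k_poll_signal_check(&sig, &signaled, &result);
          if (signaled) {
              k_poll_signal_reset(&sig);   /* re-arm for the next round */
          }
      }
  }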
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The various macros that perform checks in system call handlers would all
implicitly generate a kernel oops if a check failed.
This is undesirable for a few reasons:
* System call handlers that acquire resources in the handler
have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
to the caller instead of just killing the calling thread,
even though the base API doesn't do these checks.
These macros now all return a value; if nonzero is returned,
the check failed. K_OOPS() now wraps these calls to generate
a kernel oops.
At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.
The macros now use the Z_ notation for private APIs.
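A sketch of the resulting handler pattern (K_OOPS and the _impl_ naming
come from this series; the specific check macros and the handler body below
are illustrative):

  #include <errno.h>
  #include <kernel.h>
  #include <string.h>
  #include <syscall_handler.h>

  extern int do_write(void *obj, void *buf);   /* placeholder backend */

  static int handler_example_write(void *obj, const void *user_buf, size_t size)
  {
      /* A bad object reference is still fatal, exactly as before. */
      K_OOPS(Z_SYSCALL_OBJ(obj, K_OBJ_SEM));

      void *copy = k_malloc(size);

      if (copy == NULL) {
          return -ENOMEM;
      }

      /* This check *returns* nonzero on failure, so the handler can release
       * the allocation and report an error instead of oopsing.
       */
      if (Z_SYSCALL_MEMORY_READ(user_buf, size) != 0) {
          k_free(copy);
          return -EFAULT;
      }

      memcpy(copy, user_buf, size);
      return do_write(obj, copy);
  }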
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
User mode may now use queue objects. Instead of embedding the kernel's
linked list information directly in the data item, a container struct
is allocated from the caller's resource pool and then added to
the queue. The new sflist type is now used to store a flag indicating
whether a data item needs to be freed when removed from the queue.
FIFO/LIFOs are derived from k_queues and have had allocator functions
added.
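Illustrative user-mode usage (the -ENOMEM handling assumes pool exhaustion
is the only failure mode). The item itself needs no reserved link word
because the kernel allocates the container on the caller's behalf:

  #include <kernel.h>

  struct my_msg {
      u32_t id;
      u32_t payload;
  };

  int produce(struct k_queue *q, struct my_msg *item)
  {
      /* A small container is drawn from this thread's resource pool and
       * linked into the queue, rather than writing into *item.
       */
      return k_queue_alloc_append(q, item);   /* -ENOMEM if pool is empty */
  }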
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This used to be done by hand but can easily be generated like
we do other switch statements based on object type.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Similar to what has been done with pipes and message queues,
user mode can't be trusted to provide a buffer for the kernel
to use. Remove k_stack_init() as a syscall and offer
k_stack_alloc_init() which allocates a buffer from the caller's
resource pool.
Fixes #7285
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
User mode can't be trusted to provide a memory buffer to
k_msgq_init(). Introduce k_msgq_alloc_init() which allocates
the buffer out of the calling thread's resource pool and expose
that as a system call instead.
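Illustrative usage (assumes the caller has a resource pool assigned and
access to the msgq object):

  #include <kernel.h>

  struct k_msgq my_msgq;

  int setup_msgq(void)
  {
      /* Ring buffer for sixteen 32-bit messages is carved out of the
       * calling thread's resource pool, in kernel memory.
       */
      return k_msgq_alloc_init(&my_msgq, sizeof(u32_t), 16);
  }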
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
User mode can't be trusted to provide the kernel buffers for
internal use. The syscall for k_pipe_init() has been removed
in favor of a new API to draw the buffer memory from the
calling thread's resource pool.
K_PIPE_DEFINE() now properly places the allocated buffer in
kernel memory.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Dynamic kernel object allocation is no longer hard-coded to use the kernel
heap. Instead, objects will now be drawn from the calling thread's
resource pool.
Since we now have a reference counting mechanism, if an object
loses all its references and it was dynamically allocated, it will
be automatically freed.
A parallel dlist is added for efficient iteration over the set of
all dynamic objects, allowing deletion during iteration.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.
Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.
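An illustrative setup, assuming the k_thread_resource_pool_assign() API
from this series:

  #include <kernel.h>

  /* Pool of up to four 1024-byte blocks, splittable down to 64 bytes. */
  K_MEM_POOL_DEFINE(app_pool, 64, 1024, 4, 4);

  void give_thread_a_pool(struct k_thread *thread)
  {
      /* Kernel-side allocations made on this thread's behalf now come
       * from app_pool instead of the system heap.
       */
      k_thread_resource_pool_assign(thread, &app_pool);
  }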
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
An object's set of permissions is now also used as a form
of reference counting. If an object's permission bitmap gets
completely cleared, it is now possible to specify object-type-specific
cleanup functions to be implicitly called.
Currently no objects are enabled yet. Forthcoming patches
will do this on a per object basis.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Forthcoming patches will dual-purpose an object's permission
bitfield so that it also serves as reference tracking for kernel objects, used to
handle automatic freeing of resources.
We do not want to allow user thread A to revoke thread B's access
to some object O if B is in the middle of an API call using O.
However we do want to allow threads to revoke their own access to
an object, so introduce a new API and syscall for that.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This works like k_malloc() but allows the user to designate
a specific memory pool to use instead of the kernel heap.
Test coverage provided by existing tests for k_malloc(), which is
now derived from this API.
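Illustrative usage; as with k_malloc(), the result is released with plain
k_free():

  #include <kernel.h>

  K_MEM_POOL_DEFINE(my_pool, 16, 256, 8, 4);

  void *grab_buffer(size_t size)
  {
      /* Same semantics as k_malloc(), but drawn from my_pool. */
      return k_mem_pool_malloc(&my_pool, size);
  }

  void release_buffer(void *buf)
  {
      k_free(buf);   /* works regardless of the originating pool */
  }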
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
_thread_entry() is not really a part of the kernel but a part of
Zephyr's C runtime support library. Hence, move just the
function to lib/thread_entry.c.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Normally a syscall would check the current privilege level and then
decide to go to _impl_<syscall> directly or go through a
_handler_<syscall>.
__ZEPHYR_SUPERVISOR__ is a compiler optimization flag which makes
all the system calls from kernel files link directly
to _impl_<syscall>, thereby reducing the overhead of checking
privileges.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Add k_thread_foreach API to iterate over all the threads in
the system.
This API can be used for debugging threads in a multithreaded
environment to dump and analyze various thread parameters such as
priority, state, and stack address.
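Illustrative usage:

  #include <kernel.h>
  #include <misc/printk.h>

  static void count_thread(const struct k_thread *thread, void *user_data)
  {
      int *count = user_data;

      printk("thread %p\n", thread);
      (*count)++;
  }

  void dump_threads(void)
  {
      int count = 0;

      k_thread_foreach(count_thread, &count);
      printk("%d threads total\n", count);
  }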
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
MPU devices that enforce power-of-two alignment now
specify the size of the buffer used for the newlib heap.
This buffer will be properly aligned and a pointer
exposed in a kernel header, such that it can be added
to a user thread's memory domain configuration if
necessary.
MPU devices that don't have these restrictions allocate
the heap as normal.
In all cases, if an MPU/MMU region needs to be programmed,
the z_newlib_get_heap_bounds() API will return the necessary
information.
Given how precious MPU regions are, no automatic programming
of the MPU is done; applications will need to do this as
needed in their memory domain configurations.
The x86 MMU-specific code has been moved to arch/x86
and now uses the new z_newlib_get_heap_bounds() API.
Fixes: #6814
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This was wrong in two ways, one subtle and one awful.
The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it). So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.
And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock. Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do. The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention. That this didn't break more things is amazing to me.
The rework is actually simpler than the original, thankfully. Though
there are some further subtleties:
* The lock state implied by irq_lock() allows the lock to be
implicitly released on context switch (i.e. you can _Swap() with the
lock held at a recursion level higher than 1, which needs to allow
other processes to run). So return paths into threads from _Swap()
and interrupt/exception exit need to check and restore the global
lock state, spinning as needed.
* The idle loop design specifies a k_cpu_idle() function that is on
common architectures expected to enable interrupts (for obvious
reasons), but there is no place to put non-arch code to wire it into
the global lock accounting. So on SMP, even CPU0 needs to use the
"dumb" spinning idle loop.
Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The smp_init() call was too early. Device and subsystem
initialization doesn't happen until after the main thread starts
running. Starting extra CPUs and allowing them to schedule threads
before their drivers are alive is a bad idea, even if it works in a
unit test.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Adding a new kernel object type or driver subsystem requires changes
in various different places. This patch makes it easier to create
those devices by generating as much as possible at compile time.
No behavior change.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
Driver APIs might not implement all operations, making it possible for
a user thread to get the kernel to execute a function at 0x00000000.
Perform runtime checks in all the driver handlers, checking if they're
capable of performing the requested operation.
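A sketch of the check pattern for one driver handler (the handler shown
and its open-coded check are illustrative, not the exact code added by the
patch):

  #include <errno.h>
  #include <gpio.h>
  #include <kernel.h>

  static int handler_gpio_write(struct device *dev, u32_t pin, u32_t value)
  {
      const struct gpio_driver_api *api = dev->driver_api;

      /* An unimplemented operation leaves this pointer NULL; refuse the
       * request instead of letting user mode branch the kernel to 0x0.
       */
      if (api->write == NULL) {
          return -ENOTSUP;
      }

      return api->write(dev, GPIO_ACCESS_BY_PIN, pin, value);
  }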
Fixes #6907.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
There was a ton of junk in this header. Pare it down to just the
stuff actually used by code outside of sched.c, move the needed
internal stuff into sched.c itself, and drop everything else.
Note that (other than the tiny inlines that remain here in the header)
the scheduler interface exposed to the rest of the system is now
composed of just 12 functions.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
A red-black tree is maintained containing the metadata for all
dynamically created kernel objects, which are allocated out of the
system heap.
Currently, k_object_alloc() and k_object_free() are supervisor-only.
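Illustrative (supervisor-only) usage:

  #include <kernel.h>

  void dynamic_sem_demo(struct k_thread *user_thread)
  {
      struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

      if (sem == NULL) {
          return;   /* system heap exhausted */
      }

      k_sem_init(sem, 0, 1);
      k_object_access_grant(sem, user_thread);

      /* ... use the semaphore ... */

      k_object_free(sem);
  }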
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Ensure the semaphore's initial count does not exceed its limit during
static initialization (with build assertions) and during dynamic
initialization through system calls.
If the initial count is larger than the limit, it's possible for the count
to wrap around, causing locking issues.
Expanding the BUILD_ASSERT() macros after declaring a k_sem struct in
K_SEM_DEFINE() is necessary to support cases where a semaphore is
defined statically.
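Roughly what the macro looks like after the change (simplified sketch;
section attributes abridged):

  #define K_SEM_DEFINE(name, initial_count, count_limit)                \
      struct k_sem name                                                 \
          __in_section(_k_sem, static, name) =                          \
          _K_SEM_INITIALIZER(name, initial_count, count_limit);         \
      BUILD_ASSERT(((count_limit) != 0) &&                              \
                   ((initial_count) <= (count_limit)));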
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
In order to mitigate Spectre variant 2 (branch target injection), use
retpolines for indirect jumps and calls.
The newly-added hidden CONFIG_X86_NO_SPECTRE flag, which defaults to
disabled, must be set by an x86 SoC if its CPU does not perform speculative
execution. Most targets supported by Zephyr do not, so they set it to "y".
A new setting, CONFIG_RETPOLINE, has been added to the "Security
Options" sections, and that will be enabled by default if
CONFIG_X86_NO_SPECTRE is disabled.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
The POSIX layer had a simple ready_one_thread() utility. Move this to
the scheduler API (with a prepended underscore -- it's an internal
API) so that it can be synchronized along with the rest of the
scheduler.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons. The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.
So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout(), along with identical changes for
_unpend_first_thread(). This saves code bytes and simplifies the scheduler
surface area for future synchronization work.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Recent changes to the scheduler API means we can simplify this
further: move the assignment to mutex->owner outside the if(), which
removes the need to have an else clause (which just set that field to
NULL when the new_owner was already NULL); and we can likewise move
the irq_unlock() outside the block.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Recent changes have eliminated most use of _Swap() in favor of higher
level scheduler abstractions. We can remove the header too.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps. Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().
And some places were just calling _reschedule_threads(), which did all
this already.
Unify all this madness. The old _reschedule_threads() function has
been split into two variants: _reschedule_yield() and
_reschedule_noyield(). The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor. I'm not at all convinced it
should exist...
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>