Add runtime error checking to k_pipe_cleanup and k_pipe_get and remove
asserts.
Adapted the test that was expecting a fault so that it handles errors instead.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Remove static helper functions used only once and integrate them into
calling functions.
In k_sem_take, return at the end.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Check for errors at runtime and stop depending on ASSERTs.
This changes the API for:
- k_sem_init, which now returns -EINVAL on invalid data.
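A minimal sketch of the changed contract (the call site below is illustrative, not taken from the tree):

struct k_sem my_sem;

/* With runtime error checking, invalid parameters are reported
 * through the return value instead of asserting.
 */
if (k_sem_init(&my_sem, 0, 1) != 0) {
	/* -EINVAL: e.g. a zero limit or an initial count above the limit */
}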
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
k_mutex_unlock will now perform error checking and return on failures.
If the current thread does not own the mutex, we will now return -EPERM.
In the unlikely situation where we own a lock and the lock count is
zero, we assert. This is considered undefined behaviour and should not
happen.
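A sketch of the error-checked unlock under the new behavior (my_mutex is a placeholder):

int ret = k_mutex_unlock(&my_mutex);

if (ret == -EPERM) {
	/* current thread does not own the mutex; no longer a fatal assert */
}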
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Improve positioning of tracing calls. Avoid multiple calls and missed
events caused by complex logic. Trace events where they actually
happen.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Historically, these routines were placed in thread.c and would use the
scheduler via exported, synchronized functions (e.g. "remove from
ready queue"). But those steps were very fine grained, and there were
races where the thread could be seen by other contexts (in particular
under SMP) in an intermediate state. It's not completely clear to me
that any of these were fatal bugs, but it's very hard to prove they
weren't.
At best, this is fragile. Move the z_thread_single_suspend/abort()
functions into the scheduler and do the scheduler logic in a single
critical section.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The original implementation left this function hidden in init.h which
prevented it from showing up in documentation. Move it to kernel.h,
and document it consistent with the other functions that allow caller
customization based on context.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Allow caller to supply 0 events in which case the function just
does the sleep. This is useful so that the caller does not need
to create artificial events.
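Assuming this refers to k_poll(), a minimal sketch of the new behavior:

/* No events supplied: the call simply sleeps for the timeout. */
k_poll(NULL, 0, K_MSEC(100));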
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Same deal as in commit 41713244b3 ("kconfig: Remove '# Hidden' comments
on promptless symbols"). I forgot to do a case-insensitive search.
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
We have been using thread, th and t for thread variables, making the code
less readable, especially when we use t for timeouts and other time
related variables. Just use thread where possible and keep things
consistent.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Implement thread foreach processing with limited locking,
allowing per-thread processing that may take more time, at the
cost of possibly missing some threads when the thread list is
modified during the iteration.
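A sketch of how the relaxed-locking variant might be used, assuming the k_thread_foreach_unlocked() entry point introduced here:

static void dump_thread(const struct k_thread *thread, void *user_data)
{
	ARG_UNUSED(thread);
	ARG_UNUSED(user_data);
	/* slower per-thread work is acceptable here, e.g. printing stack info */
}

/* The thread list lock is dropped between callbacks, so threads created
 * or destroyed during the walk may be skipped.
 */
k_thread_foreach_unlocked(dump_thread, NULL);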
Signed-off-by: Radoslaw Koppel <radoslaw.koppel@nordicsemi.no>
SPIN_VALIDATE is, as before, enabled by default when there are fewer
than 4 CPUs and either no flash or a flash size greater than 32kB.
Small targets, which need to have asserts enabled, can choose whether
to enable spinlock validation and thereby decide whether the added
overhead is acceptable.
Signed-off-by: Danny Oerndrup <daor@demant.com>
Fix a gap where k_sleep(K_FOREVER) could execute a code path that
would not verify that the call was not from interrupt context.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
64-bit systems generate some compiler warnings about
data type sizes, use uintptr_t where int/u32_t was being cast
to void *.
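An illustrative (hypothetical) call-site pattern for this kind of change:

uintptr_t id = 42U;

/* Round-tripping an integer through a void * argument: casting via
 * uintptr_t keeps the full width on 64-bit targets, where a u32_t
 * cast would truncate and trigger compiler warnings.
 */
void *arg = (void *)id;
uintptr_t back = (uintptr_t)arg;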
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We need a size_t and not a u32_t for partition sizes,
for 64-bit compatibility.
Additionally, app_memdomain.h was also casting the base
address to a u32_t instead of a uintptr_t.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Builds of docs with doxygen 1.8.16 have a number of warnings of the
form: 'warning: unbalanced grouping commands'. Fix those warnings by
either balancing the group command or removing it.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Device initialization may require use of generic services such as
starting up power rails, some of which may be controlled by GPIOs on
an external controller that can't be used until full kernel services
are available. Generic services can check k_is_in_isr() and mediate
their behavior that way, but currently have no way to determine that
the kernel is not available.
Provide a function that indicates whether initialization is still in
pre-kernel stages where no kernel services are available.
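A sketch of how a generic service might mediate its behavior with the new call (delay_ms is a placeholder value):

u32_t delay_ms = 5;	/* placeholder */

if (k_is_pre_kernel() || k_is_in_isr()) {
	/* no kernel services available: busy-wait instead of sleeping */
	k_busy_wait(delay_ms * USEC_PER_MSEC);
} else {
	k_sleep(K_MSEC(delay_ms));
}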
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Remove leading/trailing blank lines in .c, .h, .py, .rst, .yml, and
.yaml files.
Will avoid failures with the new CI test in
https://github.com/zephyrproject-rtos/ci-tools/pull/112, though it only
checks changed files.
Move the 'target-notes' target in boards/xtensa/odroid_go/doc/index.rst
to get rid of the trailing blank line there. It was probably misplaced.
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
The k_mutex is a priority-inheriting mutex, so on unlock it's possible
that a thread's priority will be lowered. Make this a reschedule
point so that reasoning about thread priorities is easier (possibly at
the cost of performance): most users are going to expect that the
priority elevation stops at exactly the moment of unlock.
Note that this also reorders the code to fix what appear to be obvious
race conditions. After the call to z_ready_thread(), that thread may
be run (e.g. by an interrupt preemption or on another SMP core), yet
the return value and mutex weren't correctly set yet. The spinlock
was also prematurely released.
Fixes #20802
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Just use printk directly instead of going over defines.
For some reason, this change lets us pass on master when running
tests/kernel/timer/timer_monotonic test. This test started failing after
rc2 was tagged, just because of the changing git version string passed
to BUILD_VERSION. This is still under investigation.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
When suspending a thread, cancel any pending timeouts which might wake
it up unexpectedly. Also, make suspending the current thread
(specifically) a schedule point, as callers are clearly going to
expect that to be synchronous.
Also fix a documentation weirdness. The phrasing in the earlier docs
for k_thread_suspend() was confusing: it could be interpreted either
as documenting the current (essentially buggy) behavior that threads
will "wake up" due to preexisting timeouts, OR as meaning that thread
timeouts will continue to be tracked so that resuming a thread that
was sleeping will continue to sleep until the timeout (something that
has never been implemented: k_sleep() is implemented on top of
suspend). Rewrite to document what we actually implement.
Fixes #20033
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
On some platforms the size of size_t can differ from 4 bytes. Use
sys_rand_get to properly fill this variable.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
STACK_CANARIES relies on a random value for the canary, so
ENTROPY_GENERATOR or TEST_RANDOM_GENERATOR needs to be
selected to get sys_rand32_get included in the build.
Fixes: #20587
Signed-off-by: David Leach <david.leach@nxp.com>
When a MetaIRQ preempts a cooperative thread, that thread would be
added back to the generic run queue. When the MetaIRQ is done, the
highest priority thread will be selected to run, which may obviously
be a cooperative thread of a higher priority than the one that was
preempted.
But that's wrong, because the original thread was promised that it
would NOT be preempted until it reached a scheduling point on its own
(that's the whole point of a cooperative thread, of course).
We need to track the thread that got preempted (one per CPU) and
return to it instead of whatever else the scheduler might have found.
Fixes #20255
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add an assert when a negative value (except K_FOREVER) is passed as a
timeout. Correct negative timeouts to 0.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Previously, passing K_FOREVER to k_sleep() would return
immediately.
Forever is a long time. Even if woken up at some point,
we still had forever to sleep, so return K_FOREVER in this
case.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Entering irq_offload() on multiple CPUs can cause
difficult to debug/reproduce crashes. Demote irq_offload()
to non-inline (it never needed to be inline anyway) and
wrap the arch call in a semaphore.
Some tests which were unnecessarily killing threads
have been fixed; these threads exit by themselves anyway
and we won't leave the semaphore dangling.
The definition of z_arch_irq_offload() moved to
arch_interface.h as it only gets called by kernel C code.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Mark the old time conversion APIs deprecated, leave compatibility
macros in place, and replace all usage with the new API.
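A sketch of the new-style conversion helpers (names as in the new API family; direction, units and rounding are explicit in the name):

u32_t ticks = k_ms_to_ticks_ceil32(100);
u64_t ms = k_ticks_to_ms_floor64(ticks);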
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.
This allows our documentation generation to create
online documentation for this set of interfaces,
and this set of interfaces is worth treating in a
more formal way anyway.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Need to release the spinlock before busy_wait, or other cores
cannot get the spinlock while the holder is busy waiting.
Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
This commit modifies the z_new_thread_init function, that was
previously declared as ALWAYS_INLINE to be a normal function.
z_new_thread_init function is only called by the z_arch_new_thread
function and, since this is not a performance-critical function, there
is no good justification for inlining it.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This commit refactors kernel and arch headers to establish a boundary
between private and public interface headers.
The refactoring strategy used in this commit is detailed in the issue
This commit introduces the following major changes:
1. Establish a clear boundary between private and public headers by
removing "kernel/include" and "arch/*/include" from the global
include paths. Ideally, only kernel/ and arch/*/ source files should
reference the headers in these directories. If these headers must be
used by a component, these include paths shall be manually added to
the CMakeLists.txt file of the component. This is intended to
discourage applications from including private kernel and arch
headers either knowingly or unknowingly.
- kernel/include/ (PRIVATE)
This directory contains the private headers that provide private
kernel definitions which should not be visible outside the kernel
and arch source code. All public kernel definitions must be added
to an appropriate header located under include/.
- arch/*/include/ (PRIVATE)
This directory contains the private headers that provide private
architecture-specific definitions which should not be visible
outside the arch and kernel source code. All public architecture-
specific definitions must be added to an appropriate header located
under include/arch/*/.
- include/ AND include/sys/ (PUBLIC)
This directory contains the public headers that provide public
kernel definitions which can be referenced by both kernel and
application code.
- include/arch/*/ (PUBLIC)
This directory contains the public headers that provide public
architecture-specific definitions which can be referenced by both
kernel and application code.
2. Split arch_interface.h into "kernel-to-arch interface" and "public
arch interface" divisions.
- kernel/include/kernel_arch_interface.h
* provides private "kernel-to-arch interface" definition.
* includes arch/*/include/kernel_arch_func.h to ensure that the
interface function implementations are always available.
* includes sys/arch_interface.h so that public arch interface
definitions are automatically included when including this file.
- arch/*/include/kernel_arch_func.h
* provides architecture-specific "kernel-to-arch interface"
implementation.
* only the functions that will be used in kernel and arch source
files are defined here.
- include/sys/arch_interface.h
* provides "public arch interface" definition.
* includes include/arch/arch_inlines.h to ensure that the
architecture-specific public inline interface function
implementations are always available.
- include/arch/arch_inlines.h
* includes architecture-specific arch_inlines.h in
include/arch/*/arch_inline.h.
- include/arch/*/arch_inline.h
* provides architecture-specific "public arch interface" inline
function implementation.
* supersedes include/sys/arch_inline.h.
3. Refactor kernel and the existing architecture implementations.
- Remove circular dependency of kernel and arch headers. The
following general rules should be observed:
* Never include any private headers from public headers
* Never include kernel_internal.h in kernel_arch_data.h
* Always include kernel_arch_data.h from kernel_arch_func.h
* Never include kernel.h from kernel_struct.h either directly or
indirectly. Only add the kernel structures that must be referenced
from public arch headers in this file.
- Relocate syscall_handler.h to include/ so it can be used in the
public code. This is necessary because much public user-mode code
references the functions defined in this header.
- Relocate kernel_arch_thread.h to include/arch/*/thread.h. This is
necessary to provide architecture-specific thread definition for
'struct k_thread' in kernel.h.
- Remove any private header dependencies from public headers using
the following methods:
* If the dependency is not required, simply omit it
* If the dependency is required,
- Relocate a portion of the required dependencies from the
private header to an appropriate public header OR
- Relocate the required private header to make it public.
This commit supersedes #20047, addresses #19666, and fixes #3056.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
We need to pass system call args using a register-width
data type and not hard-code this to u32_t.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Use this short header style in all Kconfig files:
# <description>
# <copyright>
# <license>
...
Also change all <description>s from
# Kconfig[.extension] - Foo-related options
to just
# Foo-related options
It's clear enough that it's about Kconfig.
The <description> cleanup was done with this command, along with some
manual cleanup (big letter at the start, etc.)
git ls-files '*Kconfig*' | \
xargs sed -i -E '1 s/#\s*Kconfig[\w.-]*\s*-\s*/# /'
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
Clean up space errors and use a consistent style throughout the Kconfig
files. This makes reading the Kconfig files more distraction-free, helps
with grepping, and encourages the same style getting copied around
everywhere (meaning another pass hopefully won't be needed).
Go for the most common style:
- Indent properties with a single tab, including for choices.
Properties on choices work exactly the same syntactically as
properties on symbols, so not sure how the no-indentation thing
happened.
- Indent help texts with a tab followed by two spaces
- Put a space between 'config' and the symbol name, not a tab. This
also helps when grepping for definitions.
- Do '# A comment' instead of '#A comment'
I tweaked Kconfiglib a bit to find most of the stuff.
Some help texts were reflowed to 79 columns with 'gq' in Vim as well,
though not all, because I was afraid I'd accidentally mess up
formatting.
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
In z_fatal_error() we invoke the arch-specific API that
evaluates whether we are in a nested exception. We then
use the result to log a message that the error occurred
in an ISR. In non-test mode, we unconditionally panic if
an exception has occurred in an ISR and the fatal error
handler has not returned (apart from the case of an
error in the stack sentinel check).
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Duplicate definitions elsewhere have been removed.
A couple functions which are defined by the arch interface
to be non-inline, but were implemented inline by native_posix
and intel64, have been moved to non-inline.
Some missing conditional compilation for z_arch_irq_offload()
has been fixed, as this is an optional feature.
Some massaging of native_posix headers to get everything
in the right scope.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The intel64 switch implementation doesn't actually use a switch handle
per se, just the raw thread struct pointers which get stored into the
handle field. This works fine for normally initialized threads, but
when switching out of a dummy thread at initialization, nothing has
initialized that field and the code was dumping registers into the
bottom of memory through the resulting NULL pointer.
Fix this by skipping the load of the field value and just using an
offset instead to get the struct address, which is actually slightly
faster anyway (a SUB immediate instruction vs. the load).
Actually for extra credit we could even move the switch_handle field
to the top of the thread struct and eliminate the instruction
entirely, though if we did that it's probably worth adding some
conditional code to make the switch_handle field disappear entirely.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
include/sys/arch_inlines.h will contain all architecture APIs
that are used by public inline functions and macros,
with implementations deriving from include/arch/cpu.h.
kernel/include/arch_interface.h will contain everything
else, with implementations deriving from
arch/*/include/kernel_arch_func.h.
Instances of duplicate documentation for these APIs have been
removed; implementation details have been left in place.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Re-run with updated script to convert integer literal delay arguments
to k_thread_create and K_THREAD_DEFINE to use the standard timeout
macros.
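An illustrative before/after for the conversion (identifiers are placeholders):

/* before: k_thread_create(..., PRIORITY, 0, 0); */
k_thread_create(&my_thread, my_stack, STACK_SIZE,
		my_entry, NULL, NULL, NULL,
		PRIORITY, 0, K_NO_WAIT);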
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
This maximum is implicit in the kernel support for SMP, e.g.,
kernel/init.c and kernel/smp.c assume CONFIG_MP_NUM_CPUS <= 4.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
A loop in k_mem_pool_alloc() around z_sys_mem_pool_block_alloc() assumes
the latter may return -EAGAIN with an elaborate comment about it. But
-EAGAIN is no longer returned by that function since commit 7845e1b01e
("lib/mempool: Fix spurious -ENOMEM due to agressive latency control").
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This commit adds a new k_work_poll interface. It allows submitting
a given work item to a workqueue automatically when one of the
watched pollable objects changes its state.
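A usage sketch of the new interface (the handler and semaphore are placeholders; the timeout argument's type depends on the kernel version):

static struct k_work_poll work;
static struct k_poll_event event;

k_work_poll_init(&work, work_handler);
k_poll_event_init(&event, K_POLL_TYPE_SEM_AVAILABLE,
		  K_POLL_MODE_NOTIFY_ONLY, &my_sem);

/* work_handler() is submitted to the system workqueue automatically
 * once my_sem becomes available.
 */
k_work_poll_submit(&work, &event, 1, K_FOREVER);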
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
This commit separates k_poll() infrastructure from k_poll() API
implementation, allowing other (future) API calls to use the same
framework.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
Remove FUNC_NORETURN attribute from _StackCheckHandler to address the
following warning from gcc-9.2:
kernel/compiler_stack_protect.c:62:32: error: '__stack_chk_fail'
specifies less restrictive attribute than its target
'_StackCheckHandler': 'noreturn' [-Werror=missing-attributes]
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
We now define z_is_idle_thread_object() in ksched.h,
and the repeated definitions of a function that does
the same thing have been changed to just use the common
definition.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This takes an entry point and not a thread as argument.
Rename to z_is_idle_thread_entry() to make this clearer.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The main and idle threads, and their associated stacks,
were being referenced in various parts of the kernel
with no central definition. Expose these in kernel_internal.h
and namespace with z_ appropriately.
The main and idle threads were being defined statically,
with another variable exposed to contain their pointer
value. This wastes a bit of memory and isn't accessible
to user threads anyway; just expose the actual thread
objects.
Redundant MAIN_STACK_SIZE and IDLE_STACK_SIZE defines in
init.c have been removed; just use the Kconfigs they derive
from.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
These are renamed to z_timestamp_main and z_timestamp_idle,
and now specified in kernel_internal.h.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is part of the core kernel -> architecture interface and
has been renamed z_arch_kernel_init().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
z_set_thread_return_value is part of the core kernel -> arch
interface and has been renamed to z_arch_thread_return_value_set.
z_set_thread_return_value_with_data renamed to
z_thread_return_value_set_with_data for consistency.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is part of the core kernel -> architecture interface
and is appropriately renamed z_arch_is_in_isr().
References from test cases changed to k_is_in_isr().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is part of the core kernel -> architecture interface
and should have a leading prefix z_arch_.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Global variables related to timing information have been
renamed to be prefixed with z_arch, with naming arranged
in increasing order of specificity.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
An #endif and the brace terminating a compound statement were
transposed, causing compilation errors with the above-specified
combination of configuration options.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
The callback function has been ignored in z_timeout_init() since the
timer rework in fall 2018. Passing real handlers to it in code is
distracting when they will be overridden by whatever callback is
provided in z_add_timeout().
As this function is an internal API deprecation is not necessary.
Remove the parameter and change all call sites to drop the argument.
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Corrected the define of SMP_FALLBACK to prevent llvm warning.
llvm issues a warning as the behaviour of using defined(x) inside a
macro expansion is undefined (https://reviews.llvm.org/D15866).
Signed-off-by: Jan Van Winkel <jan.van_winkel@dxplore.eu>
If an architecture declares support for IPI, we still want to use it
only when running in SMP mode.
(This also fixes a build failure on ARC, which declares
CONFIG_SCHED_IPI_SUPPORTED but doesn't actually implement
z_arch_sched_ipi() yet).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The timeout code has an optimization where it refuses to send a new
timeout to the driver unless it is sooner than one already scheduled.
This won't work on SMP, though, because the timeout value when
timeslicing is enabled depends on the current thread, and on SMP the
decision as to the next thread will not be made until later (when we
swap, or exit an interrupt).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that we have a working IPI framework, there's no reason for the
default spin loop for the SMP idle thread. Just use the default
platform idle and send an IPI when a new thread is readied.
Long term, this can be optimized if necessary (e.g. only send the IPI
to idling CPUs, or check priorities, etc...), but for a 2-cpu system
this is a very reasonable default.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Our thread struct gets initialized piecewise in a bunch of locations
(this is sort of a design flaw). The is_idle field, which was
introduced to identify idle threads in SMP (where there can be more
than one), was correctly set for idle threads but was being left
uninitialized elsewhere, and in a tiny handful of cases was turning up
nonzero.
The case in the pipes code was particularly vexing, as that isn't a
real thread at all but one of the "dummy" threads used for timeouts
(another design flaw IMHO).
Get this right everywhere.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
In uniprocessor mode, the kernel knows when a context switch "is
coming" because of the cache optimization and can use that to do
things like update time slice state. But on SMP the scheduler state
may be updated on the other CPU at any time, so we don't know that a
switch is going to happen until the last minute.
Expose reset_time_slice() as a public function and call it when needed
out of z_swap().
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The loop in thread abort on SMP where we wait for the results on an
IPI correctly handled the case where a thread running on another CPU
gets its interrupt and self-aborts, but it missed the case where the
other thread pends before receiving the interrupt.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There were two related bugs when in SMP mode:
1. Underneath z_reschedule(), the code was inexplicably checking the
swap_ok flag on the current CPU to see if it was OK to preempt the
current thread, but reschedule is the DEFINITION of a schedule
point and we always want to swap, even if the current thread is
non-preemptible.
2. With similar symptoms: in k_yield() a previous fix corrected the
queue handling for SMP, but it missed the case where a thread of
the SAME priority as _current was on the queue and would fail to
swap. Yielding must always add the current thread to the back of
the current priority.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
z_spin_lock_valid() reads a shared variable twice to do two checks. If
this variable is modified by another CPU between the two reads, the
checked value is inconsistent. This inconsistency causes an error
where CPU0 can pass the check when it doesn't hold the spinlock,
because a zeroed-out thread_cpu value is ambiguous with the CPU0 ID.
Fix the inconsistency by reading the shared variable only once and
using the local copy for both checks.
Fixes #19299.
Signed-off-by: Jim Shu <cwshu@andestech.com>
The boot time measurement sample was giving bogus values on x86: an
assumption was made that the system timer is in sync with the CPU TSC,
which is not the case on most x86 boards.
Boot time measurements are no longer permitted unless the timer source
is the local APIC. To avoid issues of TSC scaling, the startup datum
has been forced to 0, which is in line with the ARM implementation
(which is the only other platform which supports this feature).
Cleanups along the way:
As the datum is now assumed zero, some variables are removed and
calculations simplified. The global variables involved in boot time
measurements are moved to the kernel.h header rather than being
redeclared in every place they are referenced. Since none of the
measurements actually use 64-bit precision, the samples are reduced
to 32-bit quantities.
In addition, this feature has been enabled in long mode.
Fixes: #19144
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Initial thread creation and tracing information
occur with empty thread names. For better tracing information,
we need a way to get actual thread names, if they are set,
in order to better track thread names and their IDs.
Signed-off-by: Nicholas Lowell <nlowell@lexmark.com>
It was reported in the code coverage report that Z_SYSCALL_HANDLER() was
not called by other code, if we run "sanitycheck -p qemu_x86 --coverage
-T tests/kernel/device/".
The root cause is that we include "errno.h", which includes
"include/generated/syscalls/device.h". It causes that the
declare of device_get_binding() in "include/generated/syscalls/device.h"
is marked as "has been called", rather than Z_SYSCALL_HANDLER()
in device.c.
So I remove "#include <errno.h>", which is useless in device.c. Also,
"#include <sys/util.h>" is removed for the same reason.
Signed-off-by: Steven Wang <steven.l.wang@linux.intel.com>
The semi-automated API changes weren't checkpatch aware. Fix up
whitespace warnings that snuck into the previous patches. Really this
should be squashed, but that's somewhat difficult given the structure
of the series.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These calls are not accessible in CI test, nor do they get built on
common platforms (in at least one case I found a typo which proved the
code was truly unused). These changes are blind, so live in a
separate commit. But the nature of the port is mechanical, all other
syscalls in the system work fine, and any errors should be easily
corrected.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These calls are buildable on common sanitycheck platforms, but are not
invoked at runtime in any tests accessible to CI. The changes are
mostly mechanical, so the risk is low, but this commit is separated
from the main API change to allow for more careful review.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.
Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value, that code gets automatically generated.
This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
1) Dump time since last scheduler call
Could be handy for tickless kernel debug.
Will indicate when no rtc irq is called
2) Dump current timeout of each thread
Could be used to find out when a thread will wake up
3) Dump human friendly thread state
4) Use shell_print instead of shell_fprintf
Signed-off-by: Pavlo Hamov <pavlo_hamov@jabil.com>
The current implementation does not return the low 32 bits of
k_uptime_get() as suggested by its documentation; it returns the number
of milliseconds represented by the low 32-bits of the underlying system
clock. The truncation before translation results in discontinuities at
every point where the system clock increments bit 33.
Reimplement it using the full-precision value, and update the
documentation to note that this variant has little value for
long-running applications.
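Conceptually, the fix truncates after conversion rather than before (a sketch, not verbatim kernel code):

u32_t low_ms = (u32_t)k_uptime_get();	/* full 64-bit milliseconds, then truncated */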
Closes #18739.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
The reasons we decided to ignore it in code coverage:
1. No test case can cover the function for code coverage.
2. Even if we added a test, it would be marked as
"never called by other code" because the function halts the
CPU and cannot return.
Signed-off-by: Peng Su <peng.su@intel.com>
The mutex locking was written to use k_sched_lock(), which doesn't
work as a synchronization primitive if there is another CPU running
(it prevents the current CPU from preempting the thread, it says
nothing about what the others are doing).
Use the pre-existing spinlock for all synchronization. One wrinkle is
that the priority code was needing to call z_thread_priority_set(),
which is a rescheduling call that cannot be called with a lock held.
So that got split out with a low level utility that can update the
schedule state but allow the caller to defer yielding until later.
Fixes #17584
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This needs further design work due to problems with logging
C strings. Just send always to printk() for now until this
is resolved.
Fixes: #18052
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The scheduler lock is a nestable lock. Unlocking a lock that is
still nested (still held at an outer level) shouldn't preempt the
current thread.
k_sched_lock();
k_sched_lock();
k_sched_unlock(); /* <--- this shouldn't be a scheduling point */
k_sched_unlock(); /* <--- this is a scheduling point */
This commit changes the preempt_ok argument from 1 to 0. This lets
should_preempt() check whether it should preempt at that point or not.
This fixes #17869.
Signed-off-by: Yasushi SHOJI <y-shoji@ispace-inc.com>
The current API was assuming too much, in that it expected that
arch-specific memory domain configuration is only maintained
in some global area, and updates to domains that are not currently
active have no effect.
This was true when all memory domain state was tracked in page
tables or MPU registers, but no longer works when arch-specific
memory management information is stored in thread-specific areas.
This is needed for: #13441, #13074, #15135
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Some options like stack canaries use more stack space,
and on x86 this is not quite enough for ztest's main
thread stack to be 512 bytes.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Populate thread->stack_obj earlier in the thread initialization
process such that it is set when z_new_thread() is called.
There was nothing specific about its position, or the rest of
the code in that CONFIG_USERSPACE block, so just move it all up.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
With the upcoming riscv64 support, it is best to use "riscv" as the
subdirectory name and common symbols as riscv32 and riscv64 support
code is almost identical. Then later decide whether 32-bit or 64-bit
compilation is wanted.
Redirects for the web documentation are also included.
Then zephyrbot complained about this:
"
New files added that are not covered in CODEOWNERS:
dts/riscv/microsemi-miv.dtsi
dts/riscv/riscv32-fe310.dtsi
Please add one or more entries in the CODEOWNERS file to cover
those files
"
So I assigned them to those who created them. Feel free to readjust
as necessary.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The `next_timeout()` function used to call the `elapsed()` function
directly in the `MAX` macro call. This caused the `elapsed()` function
to be executed twice, with possibly different results, if the system
clock incremented its value in the meantime.
As a result, the whole `MAX(0, to->dticks - elapsed())` expression could
return an incorrect value of -1, which represents the K_FOREVER timeout.
This led to a stall in devices running tickless kernel (as observed on
nRF52840).
Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
Any fatal error will print "ZEPHYR FATAL ERROR" now, so
we don't have to maintain a set of strings in the
sanitycheck harness.py
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is now called z_arch_esf_t, conforming to our naming
convention.
This needs to remain a typedef due to how our offset generation
header mechanism works.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We introduce a new z_fatal_print() API and replace all
occurrences of exception handling code to use it.
This routes messages to the logging subsystem if enabled.
Otherwise, messages are sent to printk().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
* z_NanoFatalErrorHandler() is now moved to common kernel code
and renamed z_fatal_error(). Arches dump arch-specific info
before calling.
* z_SysFatalErrorHandler() is now moved to common kernel code
and renamed k_sys_fatal_error_handler(). It is now much simpler;
the default policy is simply to lock interrupts and halt the system.
If an implementation of this function returns, then the currently
running thread is aborted.
* New arch-specific APIs introduced:
- z_arch_system_halt() simply powers off or halts the system.
* We now have a standard set of fatal exception reason codes,
namespaced under K_ERR_*
* CONFIG_SIMPLE_FATAL_ERROR_HANDLER deleted
* LOG_PANIC() calls moved to k_sys_fatal_error_handler()
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Zero slice_ticks when time slicing cannot be done, so that
next_timeout will ignore slice_ticks of _current_cpu and the system
can stay in low power state for a longer time.
Fixes: #17368.
Signed-off-by: Wentong Wu <wentong.wu@intel.com>
This commit adds a DTCM (Data Tightly Coupled Memory) section for
Cortex-M7 MCUs. The address and length are defined in the
corresponding device tree file.
Signed-off-by: Alexander Wachter <alexander.wachter@student.tugraz.at>
Hitting -Wundef in kernel_structs.h; switching to match other instances
where #ifdef is used instead of #if.
Signed-off-by: Nicholas Lowell <nlowell@lexmark.com>
On SMP systems, currently scheduled threads are not in the run queue
and can't be unconditionally removed/added.
Fixes #17170
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The implementation of z_impl_float_disable was misplaced
inside the #ifdef SPIN_VALIDATE. Fixing it.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The accounting data stored at the beginning of a memory block used by
malloc must push the returned memory address to a word boundary. This
is already the case on 32-bit systems, but not on 64-bit systems where
e.g. struct k_mem_block_id still has a size of 4.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The scheduler API has always allowed setting a zero slice size as a
way to disable timeslicing. But the workaround introduced for
CONFIG_SWAP_NONATOMIC forgot that convention, and was calling
reset_time_slice() with that zero value (i.e. requesting an immediate
interrupt) in circumstances where z_swap() had been interrupted
nonatomically.
In practice, this never happened. And if it did, it was a single
spurious no-op interrupt that no one cared about. Until it did,
anyway...
Now that ticks on nRF devices are at full 32 kHz speed, we can get
into a situation where the rapidly triggering timeslice interrupts are
interrupting z_swap() calls, and the process feeds back on itself and
becomes self-sustaining.
Put that test into the time slice code itself to prevent this kind of
mistake in the future.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When tickless is available, all existing devices can handle much
higher timing precision than 10ms. A 10kHz default seems acceptable
without introducing too much range limitation (rollover for a signed
time delta will happen at 2.5 days). Leave the 100 Hz default in
place for ticked configurations, as those are going to be special
purpose usages where the user probably actually cares about interrupt
rate.
Note that the defaulting logic interacts with an obscure trick:
setting the tick rate to zero would indicate "no clock exists" to the
configuration (some platforms use this to drop code from the build).
But now that becomes a kconfig cycle, so to break it we expose
CONFIG_SYS_CLOCK_EXISTS as an app-defined tunable and not a derived
value from the tick rate. Only one test actually did this.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
System call arguments are indexed from 1 to 6, so arg0
is corrected to arg1 on two occasions. In addition, the
ARM function for system calls is now called z_arm_do_syscall,
so we update the inline comment in __svc handler.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This mechanism had multiple problems:
- Missing parameter documentation strings.
- Multiple calls to k_thread_name_set() from user
mode would leak memory, since the copied string was never
freed
- k_thread_name_get() returns memory to user mode
with no guarantees on whether user mode can actually
read it; in the case where the string was in thread
resource pool memory (which happens when k_thread_name_set()
is called from user mode) it would never be readable.
- There was no test case coverage for these functions
from user mode.
To properly fix this, thread objects now have a buffer region
reserved specifically for the thread name. Setting the thread
name copies the string into the buffer. Getting the thread name
with k_thread_name_get() still returns a pointer, but the
system call has been removed. A new API k_thread_name_copy()
is introduced to copy the thread name into a destination buffer,
and a system call has been provided for that instead.
We now have full test case coverage for these APIs in both user
and supervisor mode.
Some of the code has been cleaned up to place system call
handler functions in proximity with their implementations.
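A sketch of the replacement user-mode flow (buffer sized via the existing Kconfig):

char name[CONFIG_THREAD_MAX_NAME_LEN];

/* Copy into a caller-owned buffer instead of returning a kernel pointer. */
if (k_thread_name_copy(k_current_get(), name, sizeof(name)) == 0) {
	printk("current thread: %s\n", name);
}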
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
There's no need for a system call for this; futexes live in
user memory and the initialization bit is ignored.
It's sufficient to just do an atomic_set().
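A sketch, assuming the k_futex value member is named val:

struct k_futex my_futex;

/* Initialization is just an atomic store in user memory. */
atomic_set(&my_futex.val, 0);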
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is an oddball API. It's untested. In fact testing its proper
behavior requires very elaborate automation (you need a device outside
the Zephyr hardware to measure real world time, and a mechanism for
getting the device into and out of idle without using the timer
driver). And this makes for needless difficulty managing code
coverage metrics.
It was always just a hint anyway. Mark the old API deprecated and
replace it with a kconfig tunable. The effect of that is just to
change the timeout value passed to the timer driver, where we can
manage code coverage metrics more easily (only one driver cares to
actually support this feature anyway).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
move misc/stack.h to debug/stack.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/gcov.h to debug/gcov.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/util.h to sys/util.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/slist.h to sys/slist.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/sflist.h to sys/sflist.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/rb.h to sys/rb.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/printk.h to sys/printk.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/mutex.h to sys/mutex.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/math_extras.h to sys/math_extras.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/libc-hooks.h to sys/libc-hooks.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/dlist.h to sys/dlist.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/__assert.h to sys/__assert.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move entropy.h to drivers/entropy.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move sys_io.h to sys/sys_io.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move power.h to power/power.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move atomic.h to sys/atomic.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move tracing.h to debug/tracing.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The first word is used as a pointer, meaning it is 64 bits on 64-bit
systems. To reserve it, it has to be either a pointer, a long, or an
intptr_t. Not an int nor a u32_t.
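A sketch of the reservation pattern for items placed on a k_fifo/k_queue (the field name follows the common documentation convention):

struct my_msg {
	void *fifo_reserved;	/* first word reserved for kernel use */
	u32_t payload;
};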
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
While fixing the ASSERT expressions in mem_domain.c to use
%lx instead of %x for uintptr_t variables, commit
f32330b22c has overlooked
one ASSERT expression specific to ARMv8-M. This causes
printk compilation warnings for ARMv8-M builds, so we
provide a fix here.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Compilers (at least gcc and clang) already provide definitions to
create standard types and their range. For example, __INT16_TYPE__ is
normally defined as a short to be used with the int16_t typedef, and
__INT16_MAX__ is defined as 32767. So it makes sense to rely on them
rather than hardcoding our own, especially for the fast types where
the compiler itself knows what basic type is best.
Using compiler provided definitions makes even more sense when dealing
with 64-bit targets where some types such as intptr_t and size_t must
have a different size and range. Those definitions are then adjusted
by the compiler directly.
However there are two cases for which we should override those
definitions:
* The __INT32_TYPE__ definition on 32-bit targets varies between an int
and a long int depending on the architecture and configuration.
Notably, all compilers shipped with the Zephyr SDK, except for the
i586-zephyr-elfiamcu variant, define __INT32_TYPE__ to a long int.
Whereas, all Linux configurations for gcc, both 32-bit and 64-bit,
always define __INT32_TYPE__ as an int. Having variability here is
not welcome as pointers to a long int and to an int are not deemed
compatible by the compiler, and printing an int32_t defined with a
long using %d makes the compiler complain, even if they're the
same size on 32-bit targets. Given that an int is always 32 bits
on all targets we might care about, and given that Zephyr hardcoded
int32_t to an int before, then we just redefine __INT32_TYPE__ and
derivatives to an int to keep the peace in the code.
* The confusion also exists with __INTPTR_TYPE__. Looking again at the
Zephyr SDK, it is defined as an int, even when __INT32_TYPE__ is
initially a long int. One notable exception is i586-zephyr-elf where
__INTPTR_TYPE__ is a long int even when using -m32. On 64-bit targets
this is always a long int. So let's redefine __INTPTR_TYPE__ to always
be a long int on Zephyr which simplifies the code, works for both
32-bit and 64-bit targets, and mimics what the Linux kernel does.
Only a few print format strings needed adjustment.
In those two cases, there is a safeguard to ensure the type we're
enforcing has the right size and fail the build otherwise.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Move internal and architecture specific headers from include/drivers to
subfolder for timer:
include/drivers/timer
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
A k_futex is a lightweight mutual exclusion primitive designed
to minimize kernel involvement. Uncontended operation relies
only on atomic access to shared memory. The k_futex structure lives
in application memory, and when using futexes, the majority of
the synchronization operations are performed in user mode. A
user-mode thread employs the futex wait system call only when
it is likely that the program has to block for a longer time
until the condition becomes true. When the condition comes true,
the futex wake operation will be used to wake up one or more threads
waiting on that futex.
This patch implements two futex operations: k_futex_wait and
k_futex_wake. For k_futex_wait, the comparison with the expected
value, and starting to sleep are performed atomically to prevent
lost wake-ups. If a different context changed the futex's value after
the calling user-mode thread decided to block itself based on
the old value, the comparison will observe the value change and
the thread will not start to sleep. For k_futex_wake, it
will wake at most num_waiters of the waiters that are sleeping
on that futex. No guarantees are made on which threads are
woken; that means scheduling priority is not taken into
consideration.
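A usage sketch (argument names and the timeout type may differ slightly from the merged API):

struct k_futex futex;	/* lives in application memory */

/* waiter: blocks only if the value still equals the expected 0 */
if (k_futex_wait(&futex, 0, K_FOREVER) == 0) {
	/* woken by k_futex_wake() */
}

/* waker, after updating the value: */
atomic_set(&futex.val, 1);
k_futex_wake(&futex, false);	/* wake a single waiter */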
Fixes: #14493.
Signed-off-by: Wentong Wu <wentong.wu@intel.com>
The block alignment must be enforced for statically allocated slabs
as well as runtime initialized ones. It is best to implement this
check only once in create_free_list() which is invoked by both
k_mem_slab_init() and init_mem_slab_module(), where pointers are about
to be set for the first time. It is then unnecessary to perform this
test on every slab allocation as the alignment won't change at that
point.
And not only the block size needs to be aligned, but the buffer
as well.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Found a few annoying typos and figured I'd better run the script and
fix anything it can find; here are the results...
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Zephyr has two unrelated build _VERSIONs: KERNEL_VERSION and
BUILD_VERSION. Prefix them slightly differently in BOOT_BANNER so anyone
can instantly zoom in on which one is being used without having to
compare the implementation details of both.
Signed-off-by: Marc Herbert <marc.herbert@intel.com>