The syscall exception frame was stored on the CPU struct during
syscall execution, but that's not right. System calls might "feel
like" exceptions, but they're actually perfectly normal kernel mode
code and can be preempted and migrated between CPUs at any time.
Put the field on the thread struct.
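A rough sketch of the intent (the field name here is hypothetical);
the saved frame now travels with the thread:

    struct k_thread {
        /* ... */
        /* Saved syscall exception frame. Lives on the thread, not
         * the CPU, because the thread can migrate mid-syscall.
         */
        void *syscall_frame;
    };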
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The code underneath z_fatal_error() (which is usually run in an
exception context, but is not required to be) was running with
interrupts enabled, which is a little surprising.
The only bug present currently is that the CPU ID extracted for
logging is subject to a race (i.e. it's possible but very unlikely
that such a handler might migrate to another CPU after the error is
flagged and log the wrong CPU ID), but in general users with custom
error handlers are likely to be surprised when their dying threads
get preempted by other code before they can abort.
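A sketch of the resulting shape, assuming the standard arch_irq_lock()
API (handler body abbreviated):

    void z_fatal_error(unsigned int reason, const z_arch_esf_t *esf)
    {
        /* Mask interrupts so the dying thread cannot be preempted
         * or migrated while we read the CPU ID and run the user's
         * error handler.
         */
        unsigned int key = arch_irq_lock();

        /* ... log reason/esf and the (now stable) CPU ID ... */
        k_sys_fatal_error_handler(reason, esf);

        arch_irq_unlock(key);
    }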
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Use of the _current_cpu pointer cannot be done safely in a preemptible
context. If a thread is preempted and migrates to another CPU, the
old CPU record will be wrong.
Add a validation assert to the expression that catches incorrect
usages, and fix up the spots where it was wrong (the most important
being a few uses of _current outside of locks, and the arch_is_in_isr()
implementation).
Note that the resulting _current expression now requires locking and
is going to be somewhat slower. Longer term it's going to be better
to augment the arch API to allow SMP architectures to implement a
faster "get current thread pointer" action than this default.
Note also that this change means that "_current" is no longer
expressible as an lvalue (long ago, it was just a static variable), so
the places where it gets assigned now assign to _current_cpu->current
instead.
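The shape of the checked expression, as a sketch; the
migration-possible test is a hypothetical stand-in for whatever the
architecture provides:

    /* Only valid when the caller cannot migrate: in an ISR, with
     * interrupts locked, or while holding a spinlock.
     */
    #define _current_cpu ({ \
            __ASSERT_NO_MSG(!cpu_migration_possible()); \
            arch_curr_cpu(); \
    })
    #define _current (_current_cpu->current)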
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The existing stack_analyze APIs had some problems:
1. Not properly namespaced
2. Accepted the stack object as a parameter, yet the stack object
does not contain the necessary information to get the associated
buffer region; the thread object is needed for this
3. Caused a crash on certain platforms that do not allow inspection
of unused stack space for the currently running thread
4. No user mode access
5. Separately passed in thread name
We deprecate these functions and add a new API
k_thread_stack_space_get() which addresses all of these issues.
A helper API log_stack_usage() is also added, which resembles
STACK_ANALYZE() in functionality.
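Usage of the new API looks roughly like this:

    size_t unused;

    if (k_thread_stack_space_get(k_current_get(), &unused) == 0) {
        printk("unused stack: %u bytes\n", (unsigned int)unused);
    }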
Fixes: #17852
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
These arch_timing_ defines get used in certain timer
drivers and need to be in the public include space,
and not the private kernel headers.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
These got dropped by an earlier patch, but are required on SMP systems
to synchronously notify other CPUs of changed scheduler state.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This got clobbered by commit adac4cbafa in what I think was a rebase
mistake. Without it, on SMP systems it's possible to select a new
_current thread and try to return into it before another CPU has
actually finished switching away from it.
Interestingly: the frequency with which this bug got caught once it
was reintroduced was much, much higher than it was when it was fixed
the first time due to the instruction pointer poisoning introduced in
the interim. Incompletely saved threads now have deliberately broken
state when assertions are enabled and will panic synchronously.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Similar to the suspend refactoring earlier, this really needs to be
done in an atomic block. There were two confirmable races here,
though it's not completely clear either was being hit in practice:
1. The bit operations in z_mark_thread_as_started() aren't atomic, so
they need to be protected.
2. The intermediate state in z_ready_thread() could result in a dead
or suspended thread being added to the ready queue if another
context tried a simultaneous abort or suspend.
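A sketch of the pattern, with an illustrative lock and helper names:

    static struct k_spinlock sched_spinlock;

    void z_ready_thread(struct k_thread *thread)
    {
        k_spinlock_key_t key = k_spin_lock(&sched_spinlock);

        /* Flip the state bits and enqueue in one atomic step so no
         * other CPU can observe the intermediate state.
         */
        z_mark_thread_as_started(thread);
        if (z_is_thread_ready(thread)) {
            add_to_ready_q(thread);  /* illustrative helper */
        }

        k_spin_unlock(&sched_spinlock, key);
    }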
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Kernel wait_q's and the thread pended_on backpointer are scheduler
state and need to be modified under the scheduler lock. There was one
spot in pend() where they were not.
Also unpack z_remove_thread_from_ready_q() into an unsynchronized
utility so that it can be called by this process in a single lock
block.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This had the same race that the queue code did: you have to be 100%
done with state management before calling z_ready_thread(), because
another CPU can pick up the thread before the return value is set.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Under SMP, when a thread is marked aborting, this thread may still
be running on another CPU. However, if there is only one thread
available to run, this thread may be selected to run again due to
next_up() not checking for the aborting state. Moreover, when there
is no IPI to signal to other CPUs that k_thread_abort() has been
called, the k_thread_abort() target thread is marked dead after a new
thread is selected to run. This causes the original thread calling
k_thread_abort() to mistakenly conclude that the target thread is no
longer running, and to return.
Note that, with working IPI, z_sched_ipi() is called as an ISR
to mark the target thread dead. A new thread is then selected to
run, so that the target thread would not be selected due to it
being dead.
This moves the code that marks a thread dead into next_up(), where
the next best thread is selected and the current thread is swapped
out. z_sched_ipi() now becomes an empty function, and
calls to it are removed.
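A sketch of where the check lands (helper names illustrative):

    static struct k_thread *next_up(void)
    {
        struct k_thread *thread = best_ready_thread(); /* illustrative */

        /* The outgoing thread is being swapped out right here, so
         * this is the one safe place to finish its abort.
         */
        if (_current->base.thread_state & _THREAD_ABORTING) {
            mark_thread_dead(_current);  /* illustrative helper */
        }

        return thread;
    }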
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Only dump data when we are interested in analyzing the coverage. By
default, just collect the data.
CONFIG_COVERAGE_DUMP is used to control this behaviour.
This will help speed up sanitycheck and will avoid lots of noise in the
log when some tests with coverage enabled fail. Dumping data to the
console is also suspected to be one of the reasons why qemu hangs in CI.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
_THREAD_PRESTART means the thread was not started yet and is being
set up, for example this is the case when starting a thread with a
timeout. We do not have a 'restart' thread state.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Before C sources can be compiled any generated header that they
include must be generated. Currently, the target 'offsets_h' happens
to depend directly or indirectly on all generated headers.
This means that to compile safely, one can simply depend on
'offsets_h'. But this is coincidental and might not be true in the
future.
To be able to safely depend on a target that represents all generated
headers being ready we introduce the target
'zephyr_generated_headers'.
Any third-party build scripts can now safely depend on
'zephyr_generated_headers' and be protected from any internal changes
to the build system, like the removal of offsets_h.
Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
Fixes an issue where calling z_thread_malloc() would
borrow the resource pool of whatever thread happened
to be interrupted at the time.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The device config structure is placed in the rom section, but there
was no const prefix used. The lack of a prefix suggested that the
structure is in ram (ram_report is also fooled by this). Add the
const prefix to state explicitly that it goes to rom.
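A minimal example of the difference (structure and values hypothetical):

    #include <stdint.h>

    struct dev_config {            /* hypothetical config struct */
        uint32_t base_addr;
    };

    /* const places the object in rom and lets tools like ram_report
     * classify it correctly; without it, it looks like a ram object.
     */
    static const struct dev_config dev_cfg = {
        .base_addr = 0x40000000,
    };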
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Calling z_ready_thread() means the thread is now ready and can wake up
at any moment on another CPU. But we weren't finished setting the
return value! So the other side could wake up with a spurious "error"
condition if it ran too soon. Note that on systems with a working
IPI, that wakeup can happen much faster than you might think.
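The safe ordering, as a sketch (assuming the
arch_thread_return_value_set() helper):

    /* Finish ALL bookkeeping first, including the swap return value. */
    arch_thread_return_value_set(thread, 0);

    /* Only now make the thread runnable; another CPU may start
     * executing it the moment this call completes.
     */
    z_ready_thread(thread);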
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
On SMP, there is an inherent race when swapping: the old thread adds
itself back to the run queue before calling into the arch layer to do
the context switch. The former is properly synchronized under the
scheduler lock, and the latter operates with interrupts locally
disabled. But until somewhere in the middle of arch_switch(), the old
thread (that is in the run queue!) does not have complete saved state
that can be restored.
So it's possible for another CPU to grab a thread before it is saved
and try to restore its unsaved register contents (which are garbage --
typically whatever state it had at the last interrupt).
Fix this by leveraging the "swapped_from" pointer already passed to
arch_switch() as a synchronization primitive. When the switch
implementation writes the new handle value, we know the switch is
complete. Then we can wait for that in z_swap() and at interrupt
exit.
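The waiting side then reduces to something like this sketch:

    static inline void wait_for_switch(struct k_thread *thread)
    {
        /* The arch layer stores the new handle as its final step,
         * so a non-NULL handle means the old state is fully saved.
         */
        volatile void **shp = (void *)&thread->switch_handle;

        while (*shp == NULL) {
            /* spin: the other CPU is mid-switch */
        }
    }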
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
It's possible for a thread to abort itself simultaneously with an
external abort from another thread. In fact in our test suite this is
a common thing, as ztest will abort its own spawned threads at the end
of a test, as they tend to be exiting on their own.
When that happens, the thread marks itself DEAD and does all its
scheduler bookkeeping, but it is STILL RUNNING on its own stack until
it makes its way to its final swap. The external context would see
that "dead" metadata and return from k_thread_abort(), allowing the
next test to reuse and spawn the same thread struct while the old
context was still running. Obviously that's bad.
Unfortunately, this is impossible to address completely without
modifying every SMP architecture to add an API-visible hook to every
swap that signals completion. In practice the best we can do is add a
delay. But note the optimization: almost always, the scheduler IPI
catches the running thread and kills it from interrupt context
(i.e. on a different stack). When that happens, we know that the
interrupted thread will never be resumed (because it's dead) and can
elide the delay. We only pay the cost when we actually detect a race.
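The resulting logic in the external-abort path, sketched (the
aborting test is an illustrative name):

    /* If the IPI already killed the thread from interrupt context,
     * it can never be resumed and the wait is skipped entirely.
     */
    while (thread_is_aborting(thread)) {  /* illustrative test */
        k_busy_wait(100);  /* short delay; the race is rare */
    }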
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These two spots were calling z_sched_ipi() (the IPI handler run under
the ISR, which is a noop here because obviously the current thread
isn't DEAD) and not arch_sched_ipi() (which triggers an IPI on other
CPUs to inform them of scheduling state changes), presumably because
of a typo.
Apparently we don't have tests for k_wakeup() and
k_thread_priority_set() that are sensitive to latency in SMP
contexts...
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The original intent was that the output handle be written through the
pointer in the second argument, though not all architectures used that
scheme. As it turns out, that write is becoming a synchronization
signal, so it's no longer optional.
Clarify the documentation in arch_switch() about this requirement, and
add an instruction to the x86_64 context switch to implement it as
originally envisioned.
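The contract, sketched against the existing signature:

    /* Switch away from the current thread and resume 'switch_to'.
     * The implementation must write the outgoing thread's handle
     * through 'switched_from' only after all of its state is saved;
     * that store now doubles as the "switch complete" signal.
     */
    static inline void arch_switch(void *switch_to, void **switched_from);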
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Toggling this symbol probably doesn't make sense, because the
architecture is already known when Kconfig runs.
SCHED_IPI_SUPPORTED is enabled through being selected by the ARC_CONNECT
(maybe that one shouldn't be configurable either) and X86_64 symbols.
Note that it's not possible to disable the symbol when it's being
selected, so trying to turn it off on e.g. X86_64 won't work either.
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
Add runtime error handling for k_msgq_cleanup. We now return 0 on
success and -EAGAIN when cleanup is not possible.
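Callers can now check the result, e.g.:

    struct k_msgq msgq;
    char buf[10 * sizeof(int)];

    k_msgq_init(&msgq, buf, sizeof(int), 10);
    /* ... */
    if (k_msgq_cleanup(&msgq) == -EAGAIN) {
        /* cleanup not possible right now, e.g. waiters present */
    }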
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Add runtime error checking to k_pipe_cleanup and k_pipe_get and remove
asserts.
Adapted the test, which was expecting a fault, to handle errors instead.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Remove static helper functions used only once and integrate them into
calling functions.
In k_sem_take, return at the end.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Check for errors at runtime and stop depending on ASSERTs.
This changes the API for:
- k_sem_init, which now returns -EINVAL on invalid data.
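For example:

    struct k_sem sem;

    if (k_sem_init(&sem, 2, 1) != 0) {
        /* -EINVAL: invalid data, e.g. initial count above the limit */
    }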
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
k_mutex_unlock will now perform error checking and return on failures.
If the current thread does not own the mutex, we will now return -EPERM.
In the unlikely situation where we own a lock and the lock count is
zero, we assert. This is considered undefined behaviour and should
not happen.
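For example:

    struct k_mutex lock;

    k_mutex_init(&lock);

    /* Unlocking a mutex we never locked now fails cleanly: */
    if (k_mutex_unlock(&lock) == -EPERM) {
        /* current thread does not own the mutex */
    }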
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Improve positioning of tracing calls. Avoid multiple calls and missing
events because of complex logic. Trace events where things actually
happen.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Historically, these routines were placed in thread.c and would use the
scheduler via exported, synchronized functions (e.g. "remove from
ready queue"). But those steps were very fine grained, and there were
races where the thread could be seen by other contexts (in particular
under SMP) in an intermediate state. It's not completely clear to me
that any of these were fatal bugs, but it's very hard to prove they
weren't.
At best, this is fragile. Move the z_thread_single_suspend/abort()
functions into the scheduler and do the scheduler logic in a single
critical section.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The original implementation left this function hidden in init.h which
prevented it from showing up in documentation. Move it to kernel.h,
and document it consistently with the other functions that allow caller
customization based on context.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Allow the caller to supply 0 events, in which case the function just
does the sleep. This is useful so that the caller does not need
to create artificial events.
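For example, a plain timed sleep via k_poll:

    /* No events: just sleep for the timeout. */
    k_poll(NULL, 0, K_MSEC(100));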
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Same deal as in commit 41713244b3 ("kconfig: Remove '# Hidden' comments
on promptless symbols"). I forgot to do a case-insensitive search.
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
We have been using thread, th and t for thread variables making the code
less readable, especially when we use t for timeouts and other time
related variables. Just use thread where possible and keep things
consistent.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Implement thread foreach processing with limited locking. This allows
thread processing that may take more time, at the cost of possibly
missing some threads when the thread list is modified concurrently.
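Usage mirrors k_thread_foreach(), as a sketch:

    static void count_cb(const struct k_thread *thread, void *user_data)
    {
        int *count = user_data;

        (*count)++;
    }

    void count_threads(void)
    {
        int count = 0;

        /* Limited locking: the list lock is not held across the
         * callback, so threads added or removed concurrently may
         * be missed.
         */
        k_thread_foreach_unlocked(count_cb, &count);
    }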
Signed-off-by: Radoslaw Koppel <radoslaw.koppel@nordicsemi.no>
SPIN_VALIDATE is, as before, enabled by default when there are fewer
than 4 CPUs and either no flash or a flash size greater than 32kB.
Small targets that need asserts enabled can now choose whether to
enable spinlock validation, and thereby decide whether the added
overhead is acceptable.
Signed-off-by: Danny Oerndrup <daor@demant.com>
Fix a gap where k_sleep(K_FOREVER) could execute a code path that
would not verify that the call was not from interrupt context.
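A sketch of the check that now runs unconditionally at the top of the
sleep path, including for K_FOREVER:

    __ASSERT(!arch_is_in_isr(), "");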
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
64-bit systems generate some compiler warnings about
data type sizes, use uintptr_t where int/u32_t was being cast
to void *.
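For example:

    /* 64-bit safe: a pointer-sized integer survives the round trip */
    uintptr_t id = 42;
    void *arg = (void *)id;
    uintptr_t back = (uintptr_t)arg;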
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We need a size_t and not a u32_t for partition sizes,
for 64-bit compatibility.
Additionally, app_memdomain.h was also casting the base
address to a u32_t instead of a uintptr_t.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>