Move to more generic tracing hooks that can be implemented in different
ways and do not interfere with the kernel.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Define a generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.
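A rough sketch of the shape this takes (hook names and signatures here
are illustrative placeholders, not the final API): each hook can be
mapped by a tracing backend to its own implementation, or compiled away
entirely so instrumented kernel paths cost nothing.

    /* hypothetical hook header */
    struct k_thread;

    #ifdef CONFIG_TRACING
    void sys_trace_thread_switched_in(struct k_thread *thread);
    void sys_trace_isr_enter(int irq);
    #else
    /* no backend selected: hooks compile away entirely */
    #define sys_trace_thread_switched_in(thread) do { } while (0)
    #define sys_trace_isr_enter(irq)             do { } while (0)
    #endif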
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This patch provides the support needed to get timing-related
information from Xtensa-based SoCs.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
This enables reserving a little space at the top of the stack to store
data local to the thread when CONFIG_USERSPACE is enabled. The first customer
of this is errno.
Note that ARC, due to how it lays out the user stack and
privilege stack, sets the pointer itself rather than
relying on the generic mechanism.
Fixes: #9067
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
memcpy always returns a pointer to dest, which can safely be ignored.
Make the discard explicit so compilers and analyzers never raise
warnings/errors about it.
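The pattern, with an invented helper for illustration:

    #include <string.h>

    void fill_report(char *dst, const char *src, size_t len)
    {
            /* memcpy always returns dst; the cast documents the
             * intentional discard of the return value
             */
            (void)memcpy(dst, src, len);
    }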
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
queue_insert always returns 0 when no memory is allocated, so just
explicitly mark that we are ignoring the return value in these cases.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Bitwise operators should be used only with unsigned integer operands
because the result of bitwise operations on signed integers is
implementation-defined.
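For example (illustrative helper):

    #include <stdint.h>

    /* With unsigned operands the shift and OR are well-defined for
     * every bit position, whereas the signed form (1 << 31) overflows
     * a 32-bit int.
     */
    uint32_t set_bit(uint32_t flags, unsigned int bit)
    {
            return flags | (1U << bit);
    }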
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
irq_lock returns an unsigned int, yet several places were storing it
in a signed int. This commit fixes that.
To avoid this error being reintroduced, a Coccinelle script was added
that can be used to check for violations.
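The corrected pattern (illustrative helper):

    #include <kernel.h>

    void bump_counter(volatile int *counter)
    {
            /* irq_lock() returns unsigned int; keep the key in a
             * matching type
             */
            unsigned int key = irq_lock();

            (*counter)++;
            irq_unlock(key);
    }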
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
There exist two symbols that became equivalent when PR #9383 was
merged: _SYSCALL_LIMIT and K_SYSCALL_LIMIT. This patch deprecates the
redundant _SYSCALL_LIMIT symbol.
_SYSCALL_LIMIT was initially introduced because, before PR #9383 was
merged, K_SYSCALL_LIMIT was an enum, which couldn't be included into
assembly files. PR #9383 converted it into a define, which can be
included into assembly files, making _SYSCALL_LIMIT redundant.
Likewise for _SYSCALL_BAD.
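A sketch of the resulting shim (the exact deprecation mechanism shown
here, __DEPRECATED_MACRO, is an assumption):

    /* old names kept as deprecated aliases of the K_ variants */
    #define _SYSCALL_BAD   __DEPRECATED_MACRO K_SYSCALL_BAD
    #define _SYSCALL_LIMIT __DEPRECATED_MACRO K_SYSCALL_LIMIT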
Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
Consistently use

    config FOO
            bool/int/hex/string "Prompt text"

instead of

    config FOO
            bool/int/hex/string
            prompt "Prompt text"

(...and a bunch of other variations that e.g. swapped the order of the
type and the 'prompt', or put other properties between them).
The shorthand is fully equivalent to using 'prompt'. It saves lines and
avoids tricking people into thinking there is some semantic difference.
Most of the grunt work was done by a modified version of
https://unix.stackexchange.com/questions/26284/how-can-i-use-sed-to-replace-a-multi-line-string/26290#26290, but some
of the rarer variations had to be converted manually.
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
The time slicing settings were kept in milliseconds while all related
operations were based on ticks. Continuous back-and-forth conversion
between ticks and milliseconds introduced an accumulating error due
to rounding in _ms_to_ticks() and __ticks_to_ms(). As a result, the
configured time slice duration was not achieved.
This commit removes the excessive ticks <-> ms conversion by using
ticks as the time unit for all operations related to time slicing.
Also, it fixes #8896 as well as #8897.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
The _update_time_slice_before_swap() function directly compared
_time_slice_duration (expressed in ms) with the value returned by
_get_remaining_program_time(), which used ticks as its time unit.
Moreover, _time_slice_duration was also used as an argument
for _set_time(), which expects time expressed in ticks.
This commit ensures that the same unit (ticks) is used in the
comparison and in the timer adjustments.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
Socket APIs pass pointers to these disguised as file descriptors.
This lets us effectively validate them.
Kernel objects now can have Kconfig dependencies specified, in case
certain structs are not available in all configurations.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Up until now, Zephyr has patched Kconfig to use the last 'default' with
a satisfied condition, instead of the first one. I'm not sure why the
patch was added (it predates Kconfiglib), but I suspect it's related to
Kconfig.defconfig files.
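For reference, standard Kconfig uses the first 'default' whose
condition is satisfied (symbols below are invented for illustration):

    config FOO
            int "Foo value"
            default 2 if BAR
            default 1

Here FOO defaults to 2 when BAR=y and to 1 otherwise; with the Zephyr
patch the last matching 'default' won instead, which is what let
later-sourced Kconfig.defconfig overrides take effect.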
There are at least three problems with the patch:
1. It's inconsistent with how Kconfig works in other projects, which
might confuse newcomers.
2. Due to oversights, earlier 'range' properties are still preferred,
as well as earlier 'default' properties on choices.
In addition to being inconsistent, this makes it impossible to
override 'range' properties and choice 'default' properties if the
base definition of the symbol/choice already has 'range'/'default'
properties.
I've seen errors caused by the inconsistency, and I suspect there
are more.
3. A fork of Kconfiglib that adds the patch needs to be maintained.
Get rid of the patch and go back to standard Kconfig behavior, as
follows:
1. Include the Kconfig.defconfig files first instead of last in
Kconfig.zephyr.
2. Include boards/Kconfig and arch/<arch>/Kconfig first instead of
last in arch/Kconfig.
3. Include arch/<arch>/soc/*/Kconfig first instead of last in
arch/<arch>/Kconfig.
4. Swap a few other 'source's to preserve behavior for some scattered
symbols with multiple definitions.
Swap 'source's in some no-op cases too, where it might match the
intent.
5. Reverse the defaults on symbol definitions that have more than one
default.
Skip defaults that are mutually exclusive, e.g. where each default
has an 'if <some board>' condition. They are already safe.
6. Remove the prefer-later-defaults patch from Kconfiglib.
Testing was done with a Python script that lists all Kconfig
symbols/choices with multiple defaults, along with a whitelist of fixed
symbols. The script also verifies that there are no "unreachable"
defaults hidden by defaults without conditions.
As an additional test, zephyr/.config was generated before and after the
change for several samples and checked to be identical (after sorting).
This commit includes some default-related cleanups as well:
- Simplify some symbol definitions, e.g. where a default has 'if FOO'
when the symbol already has 'depends on FOO'.
- Remove some redundant 'default ""' for string symbols. This is the
implicit default.
Piggyback fixes for swapped ranges on BT_L2CAP_RX_MTU and
BT_L2CAP_TX_MTU (caused by confusing inconsistency).
Piggyback some fixes for style nits too, e.g. unindented help texts.
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
Kernel threads created at build time have unique indexes to map them
into various bitarrays. This patch extends these indexes to
dynamically created threads where the associated kernel objects are
allocated at runtime.
Fixes: #9081
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
We now have functions for handling all the details of copying
data to/from user mode, including C strings and copying data
into resource pool allocations.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Summary: revised attempt at addressing issue 6290. The
following provides an alternative to using
CONFIG_APPLICATION_MEMORY by compartmentalizing data into
Memory Domains. Dependent on MPU limitations, supports
compartmentalized Memory Domains for 1...N logical
applications. This is considered an initial attempt at
designing flexible compartmentalized Memory Domains for
multiple logical applications and, with the provided python
script and edited CMakeLists.txt, provides support for power
of 2 aligned MPU architectures.
Overview: The current patch uses qualifiers to group data into
subsections. The qualifier usage allows for dynamic subsection
creation and affords the developer a large amount of flexibility
in the grouping, naming, and size of the resulting partitions and
domains that are built on these subsections. By additional macro
calls, functions are created that help calculate the size,
address, and permissions for the subsections and enable the
developer to control application data in specified partitions and
memory domains.
Background: Initial attempts focused on creating a single
section in the linker script that then contained internally
grouped variables/data to allow MPU/MMU alignment and protection.
This did not provide additional functionality beyond
CONFIG_APPLICATION_MEMORY as we were unable to reliably group
data or determine their grouping via exported linker symbols.
Thus, the resulting decision was made to dynamically create
subsections using the current qualifier method. An attempt to
group the data by object file was tested, but found that this
broke applications such as ztest where two object files are
created: ztest and main. This also creates an issue of grouping
the two object files together in the same memory domain while
also allowing for compartmenting other data among threads.
Because it is not possible to know a) the name of the partition
and thus the symbol in the linker, b) the size of all the data
in the subsection, nor c) the overall number of partitions
created by the developer, it was not feasible to align the
subsections at compile time without using dynamically generated
linker script for MPU architectures requiring power of 2
alignment.
In order to provide support for MPU architectures that require a
power of 2 alignment, a python script is run at build prior to
when linker_priv_stacks.cmd is generated. This script scans the
built object files for all possible partitions and the names given
to them. It then generates a linker file (app_smem.ld) that is
included in the main linker.ld file. This app_smem.ld allows the
compiler and linker to then create each subsection and align to
the next power of 2.
Usage:
- Requires: app_memory/app_memdomain.h.
- _app_dmem(id) marks a variable to be placed into a data
section for memory partition id.
- _app_bmem(id) marks a variable to be placed into a bss
section for memory partition id.
- These are seen in the linker.map as "data_smem_id" and
"data_smem_idb".
- To create a k_mem_partition, call the macro
app_mem_partition(part0) where "part0" is the name then used to
refer to that partition. This macro only creates a function and
necessary data structures for the later "initialization".
- To create a memory domain for the partition, the macro
app_mem_domain(dom0) is called where "dom0" is the name then
used for the memory domain.
- To initialize the partition (effectively adding the partition
to a linked list), init_part_part0() is called. This is followed
by init_app_memory(), which walks all partitions in the linked
list and calculates the sizes for each partition.
- Once the partition is initialized, the domain can be
initialized with init_domain_dom0(part0) which initializes the
domain with partition part0.
- After the domain has been initialized, the current thread
can be added using add_thread_dom0(k_current_get()).
- The code used in ztests and kernel/init has been added under
a conditional #ifdef to isolate the code from other tests.
The userspace test CMakeLists.txt file has commands to insert
the CONFIG_APP_SHARED_MEM definition into the required build
targets.
Example:
    /* create partition at top of file outside functions */
    app_mem_partition(part0);
    /* create domain */
    app_mem_domain(dom0);

    _app_dmem(dom0) int var1;
    _app_bmem(dom0) static volatile int var2;

    int main()
    {
            init_part_part0();
            init_app_memory();
            init_domain_dom0(part0);
            add_thread_dom0(k_current_get());
            ...
    }
- If multiple partitions are being created, a variadic
preprocessor macro can be used as provided in
app_macro_support.h:

    FOR_EACH(app_mem_partition, part0, part1, part2);

or, for multiple domains, similarly:

    FOR_EACH(app_mem_domain, dom0, dom1);

Similarly, the init_part_* can also be used in the macro:

    FOR_EACH(init_part, part0, part1, part2);
Testing:
- This has been successfully tested on qemu_x86 and the
ARM frdm_k64f board. It compiles and builds power of 2
aligned subsections for the linker script on the 96b_carbon
boards. These power of 2 alignments have been checked by
hand and are viewable in the zephyr.map file that is
produced during build. However, due to a shortage of
available MPU regions on the 96b_carbon board, we are unable
to test this.
- When run on the 96b_carbon board, the test suite will
enter execution, but each individual test will fail due to
an MPU FAULT. This is expected as the required number of
MPU regions exceeds the number allowed due to the static
allocation. As the MPU driver does not detect this issue,
the fault occurs because the data being accessed has been
placed outside the active MPU region.
- This now compiles successfully for the ARC boards
em_starterkit_em7d and em_starterkit_em7d_v22. However,
as we lack ARC hardware to run this build on, we are unable
to test this build.
Current known issues:
1) While the script and edited CMakeLists.txt creates the
ability to align to the next power of 2, this does not
address the shortage of available MPU regions on certain
devices (e.g. 96b_carbon). In testing the APB and PPB
regions were commented out.
2) checkpatch.pl lists several issues regarding the
following:
a) Complex macros. The FOR_EACH macros as defined in
app_macro_support.h are listed as complex macros needing
parentheses. Adding parentheses breaks their
functionality, and we have otherwise been unable to
resolve the reported error.
b) __aligned() preferred. The _app_dmem_pad() and
_app_bmem_pad() macros give warnings that __aligned()
is preferred. Prior iterations had this implementation,
which resulted in errors due to "complex macros".
c) Trailing semicolon. The macro init_part(name) has
a trailing semicolon as the semicolon is needed for the
inlined macro call that is generated when this macro
expands.
Update: updated to be an alternative to CONFIG_APPLICATION_MEMORY.
Added config option CONFIG_APP_SHARED_MEM to enable a new section
app_smem to contain the shared memory component. This commit
separates the Kconfig definition from the definition used for the
conditional code. The change is in response to changes in the
way the build system treats definitions. The python script used
to generate a linker script for app_smem was also modified to
simplify the alignment directives. A default linker script
app_smem.ld was added to remove the conditional includes dependency
on CONFIG_APP_SHARED_MEM. By adding the default linker script,
the prebuild stages link properly prior to the python script running.
Signed-off-by: Joshua Domagalski <jedomag@tycho.nsa.gov>
Signed-off-by: Shawn Mosley <smmosle@tycho.nsa.gov>
The errno "variable" is required to be thread-specific.
It gets defined to a macro which dereferences a pointer
returned by a kernel function.
In user mode, we cannot simply read/write the thread struct.
We do not have a thread-local storage mechanism, so for now
use the lowest address of the thread stack to store this
value, since that is guaranteed to be readable and writable by
a user thread.
The downside of this approach is potential stack corruption
if the stack pointer goes down this far but does not exceed
the location, since a fault won't be generated in this case.
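Conceptually (the struct and helper names below are placeholders, not
the exact kernel definitions):

    /* a small per-thread block sits at the lowest, always
     * user-accessible address of the thread stack; errno dereferences
     * a pointer into that block
     */
    struct _thread_userspace_local_data {
            int errno_var;
    };

    int *z_thread_errno_location(void);        /* assumed helper */
    #define errno (*z_thread_errno_location())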
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Define _sys_soc_resume() only if CONFIG_SYS_POWER_LOW_POWER_STATE
is enabled.
Define _sys_soc_resume_from_deep_sleep() only if
CONFIG_SYS_POWER_DEEP_SLEEP is enabled.
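In sketch form (void signatures assumed):

    #ifdef CONFIG_SYS_POWER_LOW_POWER_STATE
    void _sys_soc_resume(void)
    {
            /* SoC-specific wake-up handling from a low power state */
    }
    #endif

    #ifdef CONFIG_SYS_POWER_DEEP_SLEEP
    void _sys_soc_resume_from_deep_sleep(void)
    {
            /* SoC-specific wake-up handling from deep sleep */
    }
    #endif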
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
Simplify k_thread_foreach API conditional inclusion by putting
the whole logic under CONFIG_THREAD_MONITOR config option.
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
Minor improvement in the help text description of Kconfig option
SYS_CLOCK_HW_CYCLES_PER_SEC, clarifying that the option can be
defined in either the SoC or the board Kconfig file.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The log API can be used before the user explicitly initializes the
logger. In order to ensure that the logger core is ready to buffer log
messages, it must be initialized as early as possible. Initialization
does not include initialization of the default backend, since the
driver may not be ready yet and the backend is needed only when log
messages are processed.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
This commit moves all implementations of _ms_to_ticks() into a
single file. Also, the function is now inline even if
_NEED_PRECISE_TICK_MS_CONVERSION is defined.
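For reference, the conversion has roughly this shape (a sketch, not the
verbatim Zephyr code); rounding up keeps a non-zero millisecond value
from collapsing to zero ticks:

    #include <stdint.h>

    static inline int32_t ms_to_ticks_sketch(int32_t ms)
    {
            /* ceiling division: never round a short timeout down to 0 */
            return (int32_t)(((int64_t)ms * CONFIG_SYS_CLOCK_TICKS_PER_SEC
                              + 999) / 1000);
    }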
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
Zephyr 1.12 removed the old scheduler and replaced it with the choice
of a "dumb" list or a balanced tree. But the old multi-queue
algorithm is still useful in the space between these two (applications
with large-ish numbers of runnable threads, but that don't need fancy
features like EDF or SMP affinity). So add it as a
CONFIG_SCHED_MULTIQ option.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Make these "choice" items instead of a single boolean whose disabled
state implicitly selects the alternative.
Also renames WAITQ_FAST to WAITQ_SCALABLE, as the rbtree is really
only "fast" for large queue sizes (its constant factor overhead is
bigger than a list's!)
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Clang uncovered some functions that are only used conditionally, so
guard them to make them available only when those conditions are met.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Bool symbols implicitly default to 'n'.
A 'default n' can make sense e.g. in a Kconfig.defconfig file, if you
want to override a 'default y' on the base definition of the symbol. It
isn't used like that on any of these symbols though.
Also simplify the definitions of COOP_ENABLED, PREEMPT_ENABLED, and
SYS_CLOCK_EXISTS. 'default' (and def_bool) can take any expression, not
just a fixed value.
(It would work without the parentheses around the comparisons too.)
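For example (invented symbols), the comparison can simply be the
expression given to def_bool:

    config FOO_ENABLED
            def_bool (NUM_FOO_PRIORITIES != 0)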
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
The prepare_multithreading()/switch_to_main_thread() steps were being
done unconditionally, when with multithreading disabled we want to jump
straight into the main thread on the existing stack.
Needless to say, that doesn't work well. Fixes #8361.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The _THREAD_POLLING bit in thread_state was never actually a
legitimate thread "state". It is a clever synchronization trick
introduced to allow the thread to release the irq_lock while looping
over the input event array without dropping events.
Instead, make that flag a word in the "poller" struct that lives on
the stack of the thread calling k_poll. The disadvantage is the 4
bytes of thread space needed. Advantages:
+ Cleaner API, it's now internal to poll instead of being globally
visible.
+ The thread_state bit space is just one byte, and was almost full
already.
+ Smaller code to write/test a full word and not a bitfield
+ Words are atomic, so no need for one of the irq lock/unlock pairs.
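A sketch of the resulting shape (field names are illustrative):

    struct k_thread;

    struct _poller {
            struct k_thread *thread;
            volatile int is_polling;  /* full word: written and tested
                                       * atomically, no bitfield games
                                       */
    };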
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The queue loop when CONFIG_POLL is in use has an inherent race
between the return of k_poll() and the inspection of the list where no
lock can be held. Other contending readers of the same queue can
sneak in and steal the item out of the list before the current thread
gets to the sys_sflist_get() call, and the current loop will (if it
has a timeout) spuriously return NULL before the timeout expires.
It's not even a hard race to exercise. Consider three threads at
different priorities: High (which can be an ISR too), Mid, and Low:
1. Mid and Low both enter k_queue_get() and sleep inside k_poll() on
an empty queue.
2. High comes along and calls k_queue_insert(). The queue code then
wakes up Mid, and reschedules, but because High is still running Mid
doesn't get to run yet.
3. High inserts a SECOND item. The queue then unpends the next thread
in the list (Low), and readies it to run. But as before, it won't
be scheduled yet.
4. Now High sleeps (or if it's an interrupt, exits), and Mid gets to
run. It dequeues and returns the item it was delivered normally.
5. But Mid is still running! So it re-enters the loop it's sitting in
and calls k_queue_get() again, which sees and returns the second
item in the queue synchronously. Then it calls it a third time and
goes to sleep because the queue is empty.
6. Finally, Low wakes up to find an empty queue, and returns NULL
despite the fact that the timeout hadn't expired.
The fix is simple enough: check the timeout expiration inside the loop
so we don't return early.
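A sketch of the fixed loop (abridged; helper setup and names are not
the verbatim Zephyr code): the remaining timeout is recomputed on every
wake-up, so a stolen item just sends us back into k_poll() with
whatever time is left.

    #include <kernel.h>

    static void *queue_get_poll_sketch(struct k_queue *queue, s32_t timeout)
    {
            struct k_poll_event event;
            s64_t end = k_uptime_get() + timeout;
            void *val = NULL;

            k_poll_event_init(&event, K_POLL_TYPE_FIFO_DATA_AVAILABLE,
                              K_POLL_MODE_NOTIFY_ONLY, queue);

            do {
                    if (k_poll(&event, 1, timeout) != 0) {
                            break;          /* the real timeout expired */
                    }
                    event.state = K_POLL_STATE_NOT_READY;

                    unsigned int key = irq_lock();
                    /* may lose the race to another reader: NULL */
                    val = sys_sflist_get(&queue->data_q);
                    irq_unlock(key);

                    if (timeout != K_FOREVER) {
                            /* shrink the budget, don't reset it */
                            timeout = (s32_t)(end - k_uptime_get());
                    }
            } while ((val == NULL) &&
                     (timeout == K_FOREVER || timeout > 0));

            return val;
    }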
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The default value of CONFIG_SYSTEM_WORKQUEUE_PRIORITY is -1, which
means it is run by a cooperative thread. Explicitly mention (in the
Kconfig help) that this means any work handler submitted to this
default queue won't be preempted by some other thread (which is
generally good, but worth documenting explicitly).
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.
This is problematic:
- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.
Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.
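A sketch of the added storage (field names assumed for illustration):

    /* as in kernel.h */
    typedef void (*k_thread_entry_t)(void *p1, void *p2, void *p3);

    struct __thread_entry {
            k_thread_entry_t pEntry;
            void *parameter1;
            void *parameter2;
            void *parameter3;
    };

    /* ...and inside struct k_thread: */
    #ifdef CONFIG_THREAD_MONITOR
            struct __thread_entry entry;  /* captured at thread setup time */
    #endif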
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We are using _is_thread_prevented_from_running() to see if the
_current thread can be preempted in should_preempt(). The idea
being that even if the _current thread is a high priority coop
thread, we can still preempt it when it's pending, suspended,
etc.
This does not take into account if the thread is sleeping.
k_sleep() merely removes the thread from the ready_q and calls
Swap(). The scheduler will swap away from the thread temporarily
and then on the next cycle get stuck on the sleeping thread for
however long the sleep timeout is, doing exactly nothing because
other functions like _ready_thread() use _is_thread_ready() as a
check before proceeding.
We should use !_is_thread_ready() to take into account when threads
are waiting on a timer, and let other threads run in the meantime.
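In sketch form (not the verbatim diff):

    /* a _current thread that is not ready -- pended, suspended, or
     * sleeping on a timeout -- can always be swapped away from,
     * whatever its priority
     */
    if (!_is_thread_ready(_current)) {
            return 1;
    }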
Signed-off-by: Michael Scott <michael@opensourcefoundries.com>
All other checks of thread_state use a bitwise & operator in case
there are other flags attached to the thread_state. Let's fix
the only outlier in _check_stack_sentinel() to be the same.
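That is, prefer the masked form (state bit chosen for illustration):

    if ((thread->base.thread_state & _THREAD_DUMMY) != 0) {
            /* still matches when other state bits are set alongside it */
    }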
Signed-off-by: Michael Scott <michael@opensourcefoundries.com>
The should_preempt() code was catching some of the "unrunnable" cases
but not all of them, opening the possibility of failing to preempt a
just-pended thread and thus waking it up synchronously. There are
reports of this causing spin loops over k_poll() in the network stack
work queues (see #8049).
Note that the previous _is_dummy() call is folded into (the somewhat
verbosely named) _is_thread_prevented_from_running(), and that the
order of tests has been changed/optimized to hopefully catch common
cases earlier.
Suggested-by: Michael Scott <michael@opensourcefoundries.com>
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Recent changes post-scheduler-rewrite broke scheduling on SMP:
The "preempt_ok" feature added to isolate preemption points wasn't
honored in SMP mode. Fix this by adding a "swap_ok" field to the CPU
record (not the thread) which is set at the same time out of
update_cache().
The "queued" flag wasn't being maintained correctly when swapping away
from _current (it was added back to the queue, but the flag wasn't
set).
Abstract out a "should_preempt()" predicate so SMP and uniprocessor
paths share the same logic, which is distressingly subtle.
There were two places where _Swap() was predicated on
_get_next_ready_thread() != _current. That's no longer a benign
optimization in SMP, where the former function REMOVES the next thread
from the queue. Just call _Swap() directly in SMP, which has a
unified C implementation that does this test already. Don't change
other architectures in case it exposes bugs with _Swap() switching
back to the same thread (it should work, I just don't want to break
anything).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The sys_mem_pool implementation has a subtle error case where it
detected a simultaneous allocation after having released the lock, in
which case exactly one of the racing allocators will return with
-EAGAIN (the other one succeeds, of course).
I documented this condition at the lower level, but forgot to actually
handle it at the k_mem_pool level where we want to retry once before
going to sleep, as it doesn't generally represent an empty heap. It
got caught by code auditing in:
https://github.com/zephyrproject-rtos/zephyr/issues/6757
(Full disclosure: I tested this by whiteboxing the first failure. I
wasn't able to put together a rig to reliably exercise the actual
race.)
This patch also fixes a no-op thinko in the return logic in the same
function, which contained:

    (ret == -EAGAIN) || (ret && ret != -ENOMEM)

The first term is needless and implied by the second: -EAGAIN is
non-zero and distinct from -ENOMEM.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The metairq feature exposed the fact that all of our arch code (and a
few mistaken spots in the scheduler too) was trying to interpret
"preemptible" threads independently.
As of the scheduler rewrite, that logic is entirely within sched.c and
doing it externally is redundant. And now that "cooperative" threads
can be preempted, it's wrong and produces test failures when used with
metairq threads.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
During the early boot process, in prepare_multithreading(), the kernel
structures and scheduler are not ready yet. In order to obtain entropy
for early work such as stack randomization, optionally use, when
present, the ISR-specific function that some drivers provide.
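A sketch of the early-boot path (device lookup and fallback abridged;
illustrative only): try the ISR-safe, busy-wait entropy call, since
blocking APIs are not usable yet.

    #include <errno.h>
    #include <entropy.h>   /* entropy_get_entropy_isr(), ENTROPY_BUSYWAIT */

    /* Returns 0 on success; the caller falls back to a weaker source
     * (e.g. sys_rand32_get()) on error.
     */
    static int early_boot_random_get(struct device *entropy_dev,
                                     u8_t *buf, u16_t len)
    {
            int ret;

            if (entropy_dev == NULL) {
                    return -ENODEV;
            }
            /* ISR-safe variant: busy-waits instead of blocking, so it is
             * usable before kernel structures and the scheduler are ready
             */
            ret = entropy_get_entropy_isr(entropy_dev, buf, len,
                                          ENTROPY_BUSYWAIT);
            return (ret < 0) ? ret : 0;
    }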
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>