The alignment fix on struct device definitions should be applied to all
such linker list tricks. Let's abstract the declaration plus alignment
with a macro and apply it to all concerned cases.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The fifo/lifo API is implemented on top of the queue API with macros
that blindly force a cast to struct k_queue. Providing a reference to
the _queue member from the k_fifo structure is much cleaner, as it lets
the compiler perform pointer type checking. Generated code is identical.
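For illustration, the change is of this shape (a sketch; the exact
macro bodies in the tree may differ):

    /* before: blind cast, no type checking */
    #define k_fifo_put(fifo, data) \
            k_queue_append((struct k_queue *)(fifo), data)

    /* after: reference the _queue member, so the compiler checks
     * that 'fifo' really is a struct k_fifo pointer */
    #define k_fifo_put(fifo, data) \
            k_queue_append(&(fifo)->_queue, data)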
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Architectures that lack implementations of synchronous traps (via
Z_ARCH_EXCEPT()) end up using a z_except_reason() implementation that
doesn't actually trap at all. It just invokes
z_NanoFatalErrorHandler() in the current thread context.
That has two problems:
First, it was just blindly assuming that the error handling invoked
would abort the current thread, swap away, and never return. But that
can be application code in z_SysFatalErrorHandler that we can't
control.
Second, it was too broad with this assumption and stuffed a
CODE_UNREACHABLE hint in for the compiler. But in fact
z_except_reason() may be invoked in interrupt context (for example the
stackprot check) where it may NOT swap away and WILL return
synchronously from the call. This doesn't seem to have caused a
miscompilation in production code, but it made a total voodoo hash out
of my debugging around this macro for an hour or so until I figured
out why my logging was being optimized out.
Do the abort unconditionally instead of relying on the app, and remove
the incorrect compiler hint.
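A minimal sketch of the resulting shape (the handler and esf names are
assumptions, not verified against the tree):

    #define z_except_reason(reason) do { \
            z_NanoFatalErrorHandler(reason, &_default_esf); \
            k_thread_abort(k_current_get()); \
            /* no CODE_UNREACHABLE: in ISR context we return here */ \
        } while (false)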
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add a k_usleep() API, analogous to k_sleep(), except that the argument
is in microseconds rather than milliseconds.
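Typical usage:

    /* give the peripheral 50 microseconds to settle */
    k_usleep(50);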
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
k_poll_signal_raise() returns an error code to indicate that the raise
was too late to notify an expiring poll. Make clear that this does not
mean that the signal was lost: a subsequent poll will find it and expire
immediately.
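A sketch of how a caller might treat the late-raise case (the signal
object name is hypothetical):

    if (k_poll_signal_raise(&my_signal, 0) != 0) {
        /* an expiring poller missed the raise, but the signal is
         * not lost: the next k_poll() on it expires immediately */
    }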
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
The struct _caller_saved is not used. Most architectures automatically
push these registers onto the stack; in the other architectures the
exception code does it.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This macro is slated for complete removal, as it's not possible
on arches with an MPU stack guard to know the true buffer bounds
without also knowing the runtime state of its associated thread.
As removing this completely would be invasive to where we are
in the 1.14 release, demote to a private kernel Z_ API instead.
The current way that the macro is being used internally will
not cause any undue harm, we just don't want any external code
depending on it.
The final work to remove this (and overhaul stack specification in
general) will take place in 1.15 in the context of #14269.
Fixes: #14766
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Rename reserved function names in arch/ subdirectory. The Python
script gen_priv_stacks.py was updated to follow the 'z_' prefix
naming.
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
Rename reserved function names in drivers/ subdirectory. Update
function macros concatenating function names with '##'. As
there is a conflict between the existing gpio_sch_manage_callback()
and _gpio_sch_manage_callback() names, leave the latter unmodified.
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
This is used to have each arch canonically state how much
room in the stack object is reserved for non-thread use.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is a trivial change to satisfy C++, which requires that designated
initializers appear in the same order as the members they initialize.
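A hypothetical illustration of the constraint:

    struct point { int x; int y; };

    struct point p = { .y = 2, .x = 1 };  /* fine in C, rejected by C++ */
    struct point q = { .x = 1, .y = 2 };  /* accepted by both */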
Fixes: #14540
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
There was a user-error check in the code whereby racing insertions of
k_delayed_work items into different queues would be detected and
flagged as an error (honestly I don't see much value there -- Zephyr
doesn't as a general rule protect against errors like this, and
work_q's are inherently kernel things that don't require
userspace-style checking).
This got broken with spinlockification, where each work_q object got
its own lock, so the single lock wouldn't protect against the other
insert function any more. As it happens, that was needless. The core
synchronization on a work_q is in the internal k_queue object anyway
-- the lock in this file was only ever used for (very fast,
noncontending) delayed work insertion. So go back to a global lock to
preserve the original behavior.
Fixes #14104
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Update reserved function names starting with one underscore, replacing
them as follows:
'_k_' with 'z_'
'_K_' with 'Z_'
'_handler_' with 'z_handl_'
'_Cstart' with 'z_cstart'
'_Swap' with 'z_swap'
This renaming is done on both global and those static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.
Function names starting with two or three leading underscores are not
automatically renamed since these names would collide with the variants
with two or three leading underscores.
Various generator scripts have also been updated as well as perf,
linker and usb files. These are
drivers/serial/uart_handlers.c
include/linker/kobject-text.ld
kernel/include/syscall_handler.h
scripts/gen_kobject_list.py
scripts/gen_syscall_header.py
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
You can't cancel what hasn't been submitted. Clarification added
following minor bike-shedding on GitHub.
Fixes #14105
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Nothing in the code actually returns -EINPROGRESS, and in the case of
k_work_init() I don't see how that can even be done in a reliable way.
Don't claim we do what we don't. Fixes #14109.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
In some circumstances (e.g., a tickless kernel), k_timer_remaining_get()
would not account for time passed that didn't involve clock interrupts.
This adds a simple fix for that, and adds a test case. In addition, the
return value of k_timer_remaining_get() is clamped at 0 in the case of
overdue timers and the API description is adjusted to reflect this.
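With the clamping, callers need no special case for overdue timers
(the timer name is hypothetical):

    u32_t ms_left = k_timer_remaining_get(&my_timer);

    if (ms_left == 0) {
        /* expired or overdue; never a wrapped/negative value */
    }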
Fixes: #13353
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
One spinlock per pipe object. Also removed some vestigial locking
around _ready_thread(). That call is internally synchronized now.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Straightforward port. Each struct k_queue object gets a spinlock to
control obvious data ownership.
Note that this port actually discovered a preexisting bug: the -ENOMEM
case in queue_insert() was failing to release the lock. But because
the tests that hit that path didn't rely on other threads being
scheduled, they ran to successful completion even with interrupts
disabled. The spinlock API detects that as a recursive lock when
asserts are enabled.
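The corrected error path looks roughly like this (a sketch; the
allocation helper name is an assumption):

    anode = z_thread_malloc(sizeof(*anode));
    if (anode == NULL) {
        k_spin_unlock(&queue->lock, key);  /* previously missing */
        return -ENOMEM;
    }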
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Each work_q object gets a separate spinlock to synchronize access
instead of the global lock. Note that there was a recursive lock
condition in k_delayed_work_cancel(), so that's been split out into an
internal unlocked version and the API entry point that wraps it with a
lock.
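The split follows the usual locked/unlocked pattern, roughly (a
sketch; the internal helper name is an assumption):

    /* internal: caller must hold the work_q lock */
    static int work_cancel(struct k_delayed_work *work);

    int k_delayed_work_cancel(struct k_delayed_work *work)
    {
        k_spinlock_key_t key = k_spin_lock(&work->work_q->lock);
        int ret = work_cancel(work);

        k_spin_unlock(&work->work_q->lock, key);
        return ret;
    }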
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This was never a long-term solution, more of a gross hack
to get test cases working until we could figure out a good
end-to-end solution for memory domains that generated
appropriate linker sections. Now that we have this with
the app shared memory feature, and have converted all tests
to remove it, delete this feature.
To date all userspace APIs have been tagged as 'experimental'
which sidesteps deprecation policies.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
While k_uptime_get() and k_uptime_get32() return time in
milliseconds, they don't need to have millisecond resolution.
The resolution with default Zephyr settings is 10 ms.
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
This adds a simple implementation of SMP CPU affinity to Zephyr. The
API is simple and doesn't try to invent abstractions like "cpu sets".
Each thread has an enable/disable flag associated with each CPU in the
system, and the bits can be turned on and off (for threads that are
not currently runnable, of course) using an easy three-function API.
Because the implementation picked requires enumerating runnable
threads in priority order, looking for one that matches the current CPU,
this is not a good fit for the SCALABLE or MULTIQ scheduler backends,
so it currently can be enabled only for SCHED_DUMB (which is the
default anyway). Fancier algorithms do exist, but even the best of
them scale as O(N_CPUS), so aren't quite constant time and often
require significant memory overhead to keep separate lists for
different cpus/sets.
The intended use here is for apps that want to "pin" threads to
specific CPUs for latency control, or conversely to prevent certain
threads from taking time on specific CPUs to leave them free for fast
response.
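Typical usage to pin a thread to CPU 0, assuming the
k_thread_cpu_mask_*() names (the thread must not be runnable while its
mask is changed):

    k_thread_cpu_mask_clear(tid);      /* detach from every CPU */
    k_thread_cpu_mask_enable(tid, 0);  /* allow CPU 0 only */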
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Added the cpu_idle APIs to a doxygen group; otherwise they were missing
from the project documentation.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Whether a timeout is linked into the timeout queue can be determined
from the corresponding sys_dnode_t linked state. This removes the need
to use a special flag value in dticks to determine that the timeout is
inactive.
Update _abort_timeout to return an error code, rather than the flag
value, when the timeout to be aborted was not active.
Remove the _INACTIVE flag value, and replace its external uses with an
internal API function that checks whether a timeout is inactive.
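Conceptually, the inactive check reduces to the dnode's linked state
(a sketch; the internal names are assumptions):

    static inline bool timeout_is_inactive(const struct _timeout *t)
    {
        return !sys_dnode_is_linked(&t->node);
    }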
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Although sys_dnode_t and sys_dlist_t are aliases, their roles are
different and they appear in different positions in dlist API calls.
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
A zero-length array is a GNU extension that works as a header for a
variable-length object. The portable solution for this is a flexible
array member, but that can only be used at the end of a struct
declaration, and this violates MISRA-C rule 18.8.
The easiest way to get rid of this is to make the macro expand to
nothing, but then we would be left with a trailing semicolon, which is
not allowed in C99. So the macro was changed to automatically add the
semicolon when needed. This may break code indentation in some editors,
but it is a fair price to pay for portability and compliance.
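A sketch of the approach, using the object-tracing macro as the
example:

    #ifdef CONFIG_OBJECT_TRACING
    #define _OBJECT_TRACING_NEXT_PTR(type) struct type *__next;
    #else
    #define _OBJECT_TRACING_NEXT_PTR(type)
    #endif

    struct k_stack {
            _OBJECT_TRACING_NEXT_PTR(k_stack)  /* no semicolon here */
            u8_t flags;
    };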
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This API was using a variable number of arguments, which is not
allowed according to the MISRA-C guidelines (Rule 17.1). Hence this
API is turned into a macro, using the util macro FOR_EACH_FIXED_ARG
to get the same functionality.
There is one deviation from the old function: the last argument
shouldn't be NULL.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Fix misspellings in documentation (.rst, Kconfig help text, and .h
doxygen API comments), missed during regular reviews.
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
C90 introduced function prototypes, which allow argument types to be
checked against parameter types, though it is not necessary to specify
names for the parameters. MISRA-C requires names for function
prototype parameters; it claims that names can provide useful
information regarding the function interface.
MISRA-C rule 8.2
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This commit exposes k_mem_partition_attr_t outside User Mode, so
we can use struct k_mem_partition for defining memory partitions
outside the scope of user space (for example, to describe thread
stack guards or non-cacheable MPU regions). A requirement is that
the Zephyr build supports Memory protection. To signify this, a
new hidden, all-architecture Kconfig symbol is defined (MPU). In
the wake of exposing k_mem_partition_attr_t, the commit exposes
the MPU architecture-specific access permission attribute macros
outside the User space context (for all ARCHs), so they can be
used in a more generic way.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
MEM_PARTITION_ENTRY is problematic, as it assumes that
struct k_mem_partition contains a k_mem_partition_attr_t
field, which is only true if Memory Protection is supported.
Additionally, it only works when k_mem_partition_attr_t is a
single-element object (a scalar or a single-element structure).
This commit removes the macro function and updates macro
K_MEM_PARTITION_DEFINE() (MEM_PARTITION_ENTRY has only been
used in that macro function definition).
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This allows for workqueues to be started in user mode.
No additional kernel objects or system calls are defined
other than starting the workqueue in user mode; for
permission purposes the embedded queue and thread objects
are sufficient.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
There's no current need for this and it makes work items
declared with K_WORK_DEFINE() inaccessible to user mode.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
k_work and k_work_q are not kernel objects, nor will they
be. k_work_q contains some kernel objects which are tracked
independently.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add an API to peek into a message queue and read the first message
without removing the message from the queue.
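Usage sketch (the queue and message type are hypothetical):

    struct sensor_msg msg;

    if (k_msgq_peek(&sensor_q, &msg) == 0) {
        /* msg is a copy of the head message; queue is unchanged */
    }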
Signed-off-by: Sathish Kuttan <sathish.k.kuttan@intel.com>
If we had just the kernel's implementation, we could simply move this
to lib/, but possible arch-specific implementations dictate that we
make this a syscall.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
k_poll_signal was being used by both a struct and a function. Besides
being extremely error prone, this is also a MISRA-C violation.
Change the function to contain a verb, since it performs an action,
while the struct remains a noun. This pattern must be formalized and
followed across the project.
MISRA-C rules 5.7 and 5.9
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
struct k_thread already has the pointer type k_tid_t; there is no need
for the tcs definition.
Fewer symbols/names make the code cleaner and more readable.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This patch fixes a few issues in queue.c. It also changes
the return type of k_queue_alloc_append and k_queue_alloc_prepend
from int to s32_t.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
This commit introduces k_sleep() return value, which provides
information about the actual sleep time. If the returned value is
non-zero, the thread slept for a shorter time than requested, which is
only possible if the thread was woken up by a k_wakeup() call.
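A sketch of detecting an early wakeup:

    s32_t remaining = k_sleep(1000);  /* request 1000 ms */

    if (remaining != 0) {
        /* woken early by k_wakeup(); 'remaining' ms were left */
    }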
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
- Can choose the C++ standard (C++98/11/14/17/2a)
- Can link with the standard C++ library (libstdc++)
- Add support for C++ exceptions
- Add support for C++ RTTI
- Add C++ options to subsys/cpp/Kconfig
- Implement new and delete using k_malloc and k_free
  if CONFIG_HEAP_MEM_POOL_SIZE is defined
Signed-off-by: Benoit Leforestier <benoit.leforestier@gmail.com>
This patch removes the typecast (void *). This can be better
handled by typecasting to the actual typedef. This fixes MISRA
rule 11.6 for alert.
Part of GH-10042.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
I was pretty careful, but these snuck in. Most of them are due to
overbroad string replacements in comments. The pull request is very
large, and I'm too lazy to find exactly where to back-merge all of
these.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that the API has been fixed up, replace the existing timeout queue
with a much smaller version. The basic algorithm is unchanged:
timeouts are stored in a sorted dlist with each node holding a delta
time from the previous node in the list; the announce call just walks
this list pulling off the heads as needed. Advantages:
* Properly spinlocked and SMP-aware. The earlier timer implementation
relied on only CPU 0 doing timeout work, and on an irq_lock() being
taken before entry (something that was violated in a few spots).
Now any CPU can wake up for an event (or all of them) and everything
works correctly.
* The *_thread_timeout() API is now expressible as a clean wrapping
(just one-liners) around the lower-level interface based on function
pointer callbacks. As a result the timeout objects no longer need
to store backpointers to the thread and wait_q and have shrunk by
33%.
* MUCH smaller, to the tune of hundreds of lines of code removed.
* Future proof, in that all operations on the queue are now fronted by
just two entry points (_add_timeout() and z_clock_announce()) which
can easily be augmented with fancier data structures.
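The core of the announce walk, conceptually (a simplified sketch; the
real code adds locking and corner cases):

    void z_clock_announce(s32_t ticks)
    {
        struct _timeout *t;

        while ((t = first()) != NULL && t->dticks <= ticks) {
            ticks -= t->dticks;
            remove_timeout(t);
            t->fn(t);               /* function-pointer callback */
        }

        if ((t = first()) != NULL) {
            t->dticks -= ticks;     /* head absorbs announced time */
        }
    }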
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
_timeout_remaining_get() was a function on a struct _timeout, doing
iteration on the timeout list, but it was defined in timer.c (the
higher level abstraction).
Move it to where it belongs. Also have it return ticks instead of ms
to conform to the scheme used in the rest of the timeout API, and
rename it to a more standard Zephyr name.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The existing timeout API wants to store a wait_q on which the thread
is waiting, but it only uses that value in one spot (and there only as
a boolean flag indicating "this thread is waiting on a wait_q").
As it happens threads can already store their own backpointers to a
wait_q (needed for the SCALABLE scheduler backend), so we should use
that instead.
This patch doesn't actually perform that unification yet. It
reorganizes things such that the pended_on field is always set at the
point of timeout interaction, and adds a bunch of asserts to make 100%
sure the logic is correct. The next patch will modify the API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This flag is an indication to the timer driver that the OS doesn't
care about rollover conditions of the tick count while idling, so the
system doesn't need to wake up once per counter flip[1]. Obviously in
that circumstance values returned from k_uptime_get_32() are going to
be wrong, so the implementation had an assert to check for misuse.
But no one understood that from the docs, so the only places these APIs
were used in practice were as "guards" around code that needed to call
k_uptime_get_32(), even though that's 100% wrong per docs!
Clarify the docs. Remove the incorrect guards. Change the flag to
initialize to true so that uptime isn't broken-by-default in tickless
mode. Also move the implementations of the functions out of the
header, as there's no good reason for these to need to be inlined.
[1] Which can be significant. A 100MHz ARM using the 24 bit SysTick
counter rolls over at about 6 Hz, and if it had to come out of
idle at that rate it would be a significant power issue that would
swamp the gains from tickless. Obviously systems with slow
counters like nRF or 64 bit ones like RISC-V or x86's TSC aren't
as affected.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The kernel.h file had a bunch of internal APIs for timeout/clock
handling mixed in. Move these to sys_clock.h, which it always
included (in a weird location, so move THAT to kernel_includes.h with
everything else).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
k_queue has a k_queue_append API which does not check whether the
element's address already exists in the queue. This creates a problem
when the same element address is appended twice: it forms a circular
list, causing unintended behaviour for the application using the
queue. The proposed API k_queue_find_and_append checks whether the
element already exists before appending. This API is complementary to
k_queue_remove, which checks if the queue element is present before
removing.
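A hypothetical usage sketch (the return convention shown is an
assumption, not taken from this commit):

    if (!k_queue_find_and_append(&my_queue, &elem)) {
        /* elem was already on the queue; nothing was appended */
    }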
Signed-off-by: Dhananjay Gundapu Jayakrishnan <dhananjay.jayakrishnan@proglove.de>
Macro _OBJECT_TRACING_NEXT_PTR expands to a member or to nothing.
Macro _OBJECT_TRACING_NEXT_PTR is used in a number of places, like:
    struct k_stack {
            .. omitted ..
            _OBJECT_TRACING_NEXT_PTR(k_stack);
            u8_t flags;
    };
When the macro expands to nothing, a lonesome semi would remain. This is
illegal in C99, but permitted in GCC with GNU extensions.
Rather than expand to empty, we now expand to a zero-length array.
This means we can retain the trailing semis across structs wherein the
macro is used.
Note that zero-length array (foo[0]) != flexible array member (foo[]):
* zero-length array: Is GNU+Clang extension. Anywhere in struct.
* flexible array member: Is C99. Only in end of struct.
Thus we have really only traded one portability issue for another,
more acceptable one.
Signed-off-by: Mark Ruvald Pedersen <mped@oticon.com>
Change APIs that essentially return a boolean expression - 0 for
false and 1 for true - to return a bool.
MISRA-C rule 14.4
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Some minor style fixes and rewording of the documentation
for ARM MPU region types.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Added k_thread_name_set() and enabled thread name setting when declaring
static threads. This is enabled only when THREAD_MONITOR is used. System
threads get a name by default.
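Usage sketch (requires CONFIG_THREAD_MONITOR; the thread id is
hypothetical):

    k_thread_name_set(my_tid, "sensor_poll");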
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Any identifier starting with an underscore followed by an uppercase
letter or by a second underscore is reserved according to C99.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The following 2 improvements are contained in this patch:
- When converting from ms to ticks, instead of using hardware cycles
per tick, use hardware cycles per second. This ensures that the
multiplication is done before the division, increasing precision.
- When converting from ticks to ms, instead of using cycles per tick
and cycles per sec, use ticks per sec. This too increases the
precision.
The concept is to make the dividend as large as possible compared to the
divisor in order to lose as little precision as possible.
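Schematically, with placeholder names (the upper-case identifiers are
illustrative, not the actual symbols):

    /* ms -> ticks: scale by cycles per second, multiply first */
    ticks = ((u64_t)ms * CYCLES_PER_SEC) /
            ((u64_t)MSEC_PER_SEC * CYCLES_PER_TICK);

    /* ticks -> ms: use ticks per second directly */
    ms = ((u64_t)ticks * MSEC_PER_SEC) / TICKS_PER_SEC;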
Fixes #8898
Fixes #9459
Fixes #9466
Fixes #9468
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Previously (as introduced in 48fadfe62), if k_poll() waited on a
queue (or a subclass like fifo), and the wait was cancelled on the
queue's side using k_queue_cancel_wait(), k_poll() returned -EINTR.
But it did not set the event->state field (to anything other than
K_POLL_STATE_NOT_READY), so in the case of waiting on multiple queues,
it was not possible to differentiate which of them was cancelled.
This in particular broke detection of network socket EOF conditions
in POSIX poll() implementation.
This situation is now resolved with the introduction of an explicit
K_POLL_STATE_CANCELLED state, which is now set for a cancelled queue
(the -EINTR return remains the same).
This change also elaborates the docstrings of the functions mentioned,
to document this behavior.
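A caller can now tell which object was cancelled, along these lines:

    int rc = k_poll(events, nevents, K_FOREVER);

    if (rc == -EINTR) {
        for (int i = 0; i < nevents; i++) {
            if (events[i].state == K_POLL_STATE_CANCELLED) {
                /* this queue's wait was cancelled (e.g. EOF) */
            }
        }
    }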
Fixes: #9032
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This enables reserving a little space at the top of the stack to store
data local to the thread when CONFIG_USERSPACE is enabled. The first
customer of this is errno.
Note that ARC, due to how it lays out the user stack and
privilege stack, sets the pointer itself rather than
relying on the common way.
Fixes: #9067
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Bitwise operators should be used only with unsigned integer operands
because the result of bitwise operations on signed integers is
implementation-defined.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Commit 2b8cf4c98e ("include: kernel: Fix documentation for
TICKLESS_KERNEL API's") defined a macro to fix documentation when
TICKLESS_KERNEL is not available, but this macro does not return the
same value the functions return, so its use may result in a
compilation error.
Another point to consider is that if one is using this function
without the feature enabled, it is better to return a proper error
like -ENOTSUP, explicitly saying that this is not supported.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The value of sys_clock_ticks_per_sec is obtained using simple integer
division with rounding toward zero. As a result, using this variable
in _ms_to_ticks() introduces some error.
This commit eliminates sys_clock_ticks_per_sec from the equation used
in _ms_to_ticks(), removing the introduced error.
Also, this commit fixes #8895.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
Because errno.h is defined in terms of a syscall, we can get into
trouble when one syscall/<FOO.h> ends up including another
syscall/<BAR.h>. Moving errno.h from kernel_includes.h to kernel.h
breaks the possible inclusion issue on some ARM platforms (where
arm_mpu.h ends up including soc.h, which ends up including
kernel_includes.h, which would include errno.h).
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This commit enables accurate (based on 64-bit math) tick <-> ms
conversion if system clock rate is determined at runtime.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
The errno "variable" is required to be thread-specific.
It gets defined to a macro which dereferences a pointer
returned by a kernel function.
In user mode, we cannot simply read/write the thread struct.
We do not have a thread-local storage mechanism, so for now
use the lowest address of the thread stack to store this
value, since this is guaranteed to be read/writable by
a user thread.
The downside of this approach is potential stack corruption
if the stack pointer goes down this far but does not exceed
the location, since a fault won't be generated in this case.
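The mechanism is roughly this (a sketch; the helper name is an
assumption):

    /* errno expands to a dereference of a per-thread location */
    int *z_errno(void);     /* returns an address the thread can use */
    #define errno (*z_errno())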
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This commit moves the _ms_to_ticks() and __ticks_to_ms() functions
close to each other in order to improve code readability.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
The kernel incorrectly assumed that the system timer frequency is
always divisible without remainder by a couple of "natural" tick rates
(like 100). As a result, on some SoCs time calculations were not
correct, producing strange effects (invalid sleep times, incorrect
k_uptime_get(), etc.).
This commit enables accurate, but costly (using 64-bit math),
tick <-> ms conversion if the selected tick interval is not exact due
to hardware limitations.
Also, this commit fixes tests in which the removed _ms_per_tick was used.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
This commit removes the fixed list of "good" sys_clock_ticks_per_sec
values whose use results in an integer _ms_per_tick value.
Instead of using the list, simply check whether MSEC_PER_SEC can be
divided without remainder by sys_clock_ticks_per_sec.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
This commit moves all implementations of _ms_to_ticks() into a
single file. Also, the function is now inline even if
_NEED_PRECISE_TICK_MS_CONVERSION is defined.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
Make these "choice" items instead of a single boolean that implies the
element unset.
Also rename WAITQ_FAST to WAITQ_SCALABLE, as the rbtree is really
only "fast" for large queue sizes (its constant-factor overhead is
bigger than a list's!)
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This is a public macro which calculates the size to be allocated for
stacks inside a stack array. This is necessitated because of some
internal padding (e.g. for MPU scenarios). This is particularly
useful when a reference to K_THREAD_STACK_ARRAY_DEFINE needs to be
made from within a struct.
Signed-off-by: Rajavardhan Gundi <rajavardhan.gundi@intel.com>
This commit creates a new header file (kernel_includes.h) that
contains all the header files to be included by kernel.h.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The _THREAD_POLLING bit in thread_state was never actually a
legitimate thread "state". It is a clever synchronization trick
introduced to allow the thread to release the irq_lock while looping
over the input event array without dropping events.
Instead, make that flag a word in the "poller" struct that lives on
the stack of the thread calling k_poll. The disadvantage is the 4
bytes of thread space needed. Advantages:
+ Cleaner API, it's now internal to poll instead of being globally
visible.
+ The thread_state bit space is just one byte, and was almost full
already.
+ Smaller code to write/test a full word instead of a bitfield
+ Words are atomic, so there's no need for an irq lock/unlock pair.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.
This is problematic:
- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.
Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
k_work_init() was not initializing all fields in the k_work struct.
Mainly, the atomic_clear_bit() function call was reading a possibly
uninitialized value, clearing a bit, and assigning it back to the
`flags` member. The `_reserved` member was never initialized.
With the struct now initialized with the _K_WORK_INITIALIZER() macro,
initialization is consistent regardless of how a `struct k_work` is
initialized.
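The net effect is roughly (a sketch):

    static inline void k_work_init(struct k_work *work,
                                   k_work_handler_t handler)
    {
        *work = (struct k_work)_K_WORK_INITIALIZER(handler);
    }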
This fixes the Valgrind issues found in #7478.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
It's not possible to enforce that K_THREAD_STACK_SIZEOF()
returns the original number passed to K_THREAD_STACK_DEFINE().
Some arches need to round this number up in order to satisfy
alignment constraints.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>