Some tick frequencies lend themselves to optimized conversions from ms
to ticks and vice-versa.
- 1000Hz, which does not need any conversion
- 500Hz, 250Hz and 125Hz, where the division/multiplication is a straight
shift, since they are power-of-two factors of 1000.
In addition, some more generally used values are made to use optimized
conversion equations rather than the generic one, which uses 64-bit math
and often results in calling compiler intrinsics.
These values are: 100Hz, 50Hz, 25Hz, 20Hz, 10Hz, 1Hz (the last one used
in some testing).
In addition to increased performance, avoiding the 64-bit math intrinsics
has the benefit of using significantly less stack space: 52 bytes on ARM
Cortex-M and 80 bytes on x86.
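A minimal sketch of this kind of specialization, assuming a hypothetical
_ms_to_ticks() helper guarded by the tick rate option; at 250Hz each tick
is 4ms, so the conversion collapses to a shift:
    #if (CONFIG_SYS_CLOCK_TICKS_PER_SEC == 250)
    static inline int32_t _ms_to_ticks(int32_t ms)
    {
        /* 1000/250 == 4 is a power of two: ms / 4 == ms >> 2, no 64-bit math */
        return ms >> 2;
    }
    #endif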
Change-Id: I080eb338a2637d6b1c6838c119af1a9fa37fe869
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This limits the execution contexts that will go over the loop in
_unpend_first_thread() to only ISRs of very high priority that are
preempting the system clock timer ISR, and only during the time it is
handling timeouts.
Change-Id: Iaf0500d28a2de5e077c9cf9861a5a70244127d58
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Use a short name for this option: CONFIG_OBJECT_TRACING.
Change-Id: Id27de7ef9ca299492b6b7d2324d9f5bcf8059a31
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.
Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Reorganise and clean up the kernel Kconfig options, grouping options of the
same area under menus to ease readability and give a better structure when
using menuconfig.
Change-Id: Ic6b39730297861367abd345ede35e41c046c099d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Move those into a separate Kconfig file and include them instead.
Change-Id: Ifa25d6ec92937080ad5970af7ca5c3f07ddec961
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Rename NANOKERNEL_TICKLESS_IDLE_SUPPORTED to
TICKLESS_IDLE_SUPPORTED and remove nanokernel occurrences in Kconfig
files.
Make TICKLESS_IDLE depend on hardware that supports it.
Change-Id: I6a2e4fb0f7cf4b45475b48e71823ea089ee98759
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Also remove some old cflags referencing directories that do not exist
anymore.
Also replace references to legacy APIs in doxygen documentation of
various functions.
Change-Id: I8fce3d1fe0f4defc44e6eb0ae09a4863e33a39db
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Also remove the now-obsolete ARCH_HAS_TASK_ABORT.
ARC does not need the option either.
Change-Id: Ie52d63178a367ce12b911dacfe2d389f4f75ed2d
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The system work queue spawns a coop thread to handle the work items. If
it is spawned before the kernel is up and the initialization dummy
thread's priority is lower, there will be a context switch into the
system work queue's thread at that time, before the kernel is ready to
handle this.
Change-Id: I879659ab58231c5a5cfaa34f2f65c2eccab99142
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Legacy applications still need that, otherwise kernel objects are not
configured correctly. Will be removed later.
Change-Id: I22df10e4adcc11f035f9813bea8c93dd1a560a1d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
For very constrained systems, like bootloaders.
Only the main thread is available, so a main() function must be
provided. Kernel objects where pending is in play will not behave as
expected, since the main thread cannot pend, it being the only thread in
the system. Usage of objects should be limited to using K_NO_WAIT as the
timeout parameter, effectively polling on the object.
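For example, a sketch of the intended polling pattern (the semaphore is just
a placeholder):
    static struct k_sem my_sem;    /* assume k_sem_init(&my_sem, 0, 1) was done */

    void poll_example(void)
    {
        if (k_sem_take(&my_sem, K_NO_WAIT) == 0) {
            /* resource available, proceed */
        } else {
            /* would have had to pend: try again later instead */
        }
    }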
Change-Id: Iae0261daa98bff388dc482797cde69f94e2e95cc
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
A thread cannot have a coop priority in this case. It turns out a
priority is not needed when a thread is not inserted in the ready queue,
which is the case with the dummy thread.
The comment was also out-of-date, since it referred to a nanokernel
concept.
Change-Id: Id117501164bd72383d53f3df13030cf95dadc38b
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Some kernel operations, like scheduler locking, can be optimized out,
since coop threads lock the scheduler by their very nature. Also, the
interrupt exit path for all architectures does not have to do any
rescheduling, again by the nature of non-preemptible threads.
Change-Id: I270e926df3ce46e11d77270330f2f4b463971763
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
nano_cpu_idle/nano_cpu_atomic_idle were not ported to the unified
kernel, and only the old APIs were available. There was no real impact
since, in the unified kernel, only the idle thread should really be
doing power management. However, with a single-threaded kernel, these
functions can be useful again.
The kernel internals now make use of these APIs instead of the legacy
ones.
Change-Id: Ie8a6396ba378d3ddda27b8dd32fa4711bf53eb36
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The number of timeouts that expire on a given tick is arbitrary. When
handling them, interrupts were locked, which prevented higher-priority
interrupts from preempting the system clock timer handler.
Instead of looping over the list of timeouts, which requires interrupts to
be locked since the list can be manipulated at interrupt level, timeouts
are dequeued one by one, unlocking interrupts between each, and put on a
local 'expired' queue that is subsequently drained with interrupts
unlocked. This scheme relies on the fact that no timeout can be prepended
onto the timeout queue while expired timeouts are being removed from
it, since adding a timeout of 0 is prohibited.
Timer handlers now run with interrupts unlocked: the previous behaviour
added potentially horrible non-determinism to the handling of timeouts.
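A rough sketch of the resulting pattern (the queue accessors
timeout_has_expired() and run_timeout_handler() are illustrative helpers
assumed to exist, not the actual kernel symbols):
    static void handle_expired_timeouts(sys_dlist_t *timeout_q)
    {
        sys_dlist_t expired;
        sys_dnode_t *node;

        sys_dlist_init(&expired);

        unsigned int key = irq_lock();
        /* move expired timeouts, one at a time, onto the local queue */
        while ((node = sys_dlist_peek_head(timeout_q)) != NULL &&
               timeout_has_expired(node)) {
            sys_dlist_remove(node);
            sys_dlist_append(&expired, node);
            irq_unlock(key);        /* window for higher-priority ISRs */
            key = irq_lock();
        }
        irq_unlock(key);

        /* run the handlers with interrupts unlocked */
        while ((node = sys_dlist_get(&expired)) != NULL) {
            run_timeout_handler(node);
        }
    }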
Change-Id: I709085134029ea2ad73e167dc915b956114e14c2
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
These tick computations can take a significant amount of time, and there
is no reason to do them with interrupts locked.
Change-Id: I2d8803ec6025b827e9450fa493084bbf8be98bad
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Use _INACTIVE instead of hardcoding -1.
_EXPIRED is defined as -2 and will be used for an improvement so that
interrupts are not locked for a non-deterministic amount of time while
handling expired timeouts.
_abort_timeout/_abort_thread_timeout return _INACTIVE instead of -1 if
the timeout has already been disabled.
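For reference, the symbolic values described above:
    #define _INACTIVE (-1)    /* timeout not in use */
    #define _EXPIRED  (-2)    /* reserved for the upcoming improvement */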
Change-Id: If99226ff316a62c27b2a2e4e874388c3c44a8aeb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The way the ready thread cache was implemented caused it to not always
be "hot", i.e. there could be some misses, which happened when the
cached thread was taken out of the ready queue. When that happened, it
was not replaced immediately, since doing so could mean that the
replacement might not run because the flow could be interrupted and
another thread could take its place. This was the more conservative
approach that ensured that moving a thread to the cache would never be
wasted.
However, this caused two problems:
1. The cache could not be refilled until another thread context-switched
in, since there was no thread in the cache to compare priorities
against.
2. Interrupt exit code would always have to call into C to find what
thread to run when the current thread was not coop and did not have the
scheduler locked. Furthermore, it was possible for this code path to
encounter a cold cache and then it had to find out what thread to run
the long way.
To fix this, filling the cache is now more aggressive, i.e. the next
thread to put in the cache is found even when the current cached
thread is context-switched out. This ensures the interrupt exit code is
much faster on the slow path. In addition, since finding the next thread
to run is now always "get it from the cache", which is a simple fetch
from memory (_kernel.ready_q.cache), there is no need to call the more
complex C code.
On the ARM FRDM K64F board, this improvement is seen:
Before:
1- Measure time to switch from ISR back to interrupted task
switching time is 215 tcs = 1791 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 315 tcs = 2625 nsec
After:
1- Measure time to switch from ISR back to interrupted task
switching time is 130 tcs = 1083 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 225 tcs = 1875 nsec
These are the most dramatic improvements, but most of the numbers
generated by the latency_measure test are improved.
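On the fast path, picking the next thread now reduces to a sketch like the
following (the helper name is illustrative; _kernel.ready_q.cache is the
field mentioned above):
    static inline struct k_thread *next_ready_thread(void)
    {
        /* cache is always hot: a plain load, no scan of the ready queue */
        return _kernel.ready_q.cache;
    }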
Fixes ZEP-1401.
Change-Id: I2eaac147048b1ec71a93bd0a285e743a39533973
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Disable MDEF option and set it only in legacy projects.
Change-Id: I2e1f011eb1f876af929140e36f71f0efb5e955c1
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The ARG_UNUSED macro is added to avoid compiler warnings.
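Typical usage, with a placeholder init function:
    static int my_driver_init(struct device *dev)
    {
        ARG_UNUSED(dev);    /* parameter required by the API but not used */
        return 0;
    }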
Change-Id: Ie9b72c94191318c1d667d7929eb029098c62e993
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
Use uint32_t for counters instead of int to avoid compiler warnings.
Change-Id: Ie96dfaca650b5f91562c0740c18610fc40968be6
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
mem_pool structures use uint32_t for counters and size_t
to specify sizes; however, some routines in mem_pool.c
make use of int for similar purposes. This commit fixes
that situation by updating some variables to match
mem_pool data types.
Change-Id: I0aa01c27e512d06d40432e8091ed8fd9d959970c
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
Factor out the code for evaluating the remaining time for _timeout
structs so that it can also be used for other objects besides k_timer
structs (like k_delayed_work, coming in a subsequent patch).
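A sketch of the factored-out helper and how k_timer can delegate to it (the
names and the embedded timeout field are assumptions based on the
description):
    /* shared helper: remaining time, in ms, of any object embedding a
     * struct _timeout */
    int32_t _timeout_remaining_get(struct _timeout *timeout);

    int32_t k_timer_remaining_get(struct k_timer *timer)
    {
        return _timeout_remaining_get(&timer->timeout);
    }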
Change-Id: I243a7b29fb2831f06e95086a31f0d3a6c37dad67
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Use the SYS_INIT() mechanism to invoke the sys_rand32_init() function
in random drivers that require an initializer. Remove all empty
sys_rand32_init() instances.
The existing explicit sys_rand32_init() function runs immediately after
PRE_KERNEL_2, before stack canaries are initialized. In order to get
equivalent behaviour with sys_rand32_init(), we set SYS_INIT() to
initialize the random drivers at the lowest priority of PRE_KERNEL_2.
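A sketch of what a converted driver looks like (the init function is a
placeholder and the exact priority value is an assumption; the point is the
level and a late priority):
    static int rand32_init(struct device *dev)
    {
        ARG_UNUSED(dev);
        /* seed/configure the RNG hardware here */
        return 0;
    }

    SYS_INIT(rand32_init, PRE_KERNEL_2, 99);    /* latest priority in the level */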
Change-Id: I4521e44daac806bc4eef01ce7fdf2ba5367e0587
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
To guarantee that the compiler does not reorder the execution of
irq_lock() with preceding operations, a volatile qualifier is
placed before the declaration of the ticks variable, which then
ensures that irq_lock() is executed after the tick calculation but
before accessing the ready and timeout queues.
Without the volatile keyword, interrupts would be disabled during the
calculation of the ticks, which would increase interrupt latency
significantly.
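The shape of the fix, with illustrative names (only the volatile qualifier
and the ordering matter; enqueue_timeout() stands in for the real queue
manipulation):
    void add_timeout_example(int32_t timeout_ms)
    {
        /* volatile keeps the compiler from deferring this computation
         * until after irq_lock(), so it runs with interrupts enabled */
        volatile int32_t ticks = _ms_to_ticks(timeout_ms);

        unsigned int key = irq_lock();
        enqueue_timeout(ticks);     /* hypothetical queue manipulation */
        irq_unlock(key);
    }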
Change-Id: I2da82a1282e344f3b8d69e9457b36a4cb1d9ec18
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Use standard library calls like memset/memcpy for setting up the BSS and DATA
sections during system initialization; this helps take advantage of
architecture-specific optimizations from the standard library.
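For instance, zeroing the BSS can become a single library call (the linker
symbol names follow common conventions and are assumptions here):
    #include <string.h>

    extern char __bss_start[], __bss_end[];

    static void bss_zero(void)
    {
        (void)memset(__bss_start, 0, __bss_end - __bss_start);
    }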
Change-Id: Ia72b42aa65b44d1df7c22dd1fbc39a44fa001be9
Signed-off-by: Mahavir Jain <mjain@marvell.com>
Make boot banner more informative by adding kernel version string
Change-Id: I21865ea3a001fba2c30fe58e6e052aae59fef3e2
Signed-off-by: Mahavir Jain <mjain@marvell.com>
If the delayed work has already been submitted or has completed, a
subsequent cancel should return -EINVAL.
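Callers can now check for this case, roughly as follows ('dwork' is a
placeholder):
    void cancel_example(struct k_delayed_work *dwork)
    {
        int err = k_delayed_work_cancel(dwork);

        if (err == -EINVAL) {
            /* already submitted (or completed): too late to cancel */
        }
    }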
Fixes ZEP-1373.
Change-Id: I16bbacca7e31a5a5d8e5a89e729d70302ada6223
Signed-off-by: Mahavir Jain <mjain@marvell.com>
Interrupts must be locked before inserting a timeout in the timeout
queue.
Change-Id: Iab0bf01f393e66a6403d2f85e899dbf737da4afc
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The fact that a thread is timing out was tracked via two flags: the
K_TIMING thread flag bit, and the thread's timeout's
delta_ticks_from_prev being -1 or not. This duplication could
potentially cause discrepancies if the two flags got out-of-sync, and
there was no benefit to having both.
Since timeouts that are not part of a thread rely on the value of
delta_ticks_from_prev, standardize on it.
Since the K_TIMING bit is removed from the thread's flags, K_READY would
not reflect the reality anymore. It is removed and replaced by
_is_thread_prevented_from_running(), which looks at the state flags that
are relevant. A thread that is ready now is not prevented from running
and does not have an active timeout.
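A sketch of the resulting check (the timeout test shown is the
delta_ticks_from_prev comparison described above; the helper names and field
layout are assumptions):
    static inline int _is_thread_ready(struct k_thread *thread)
    {
        return !_is_thread_prevented_from_running(thread) &&
               (thread->base.timeout.delta_ticks_from_prev == _INACTIVE);
    }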
Change-Id: I902ef9fb7801b00626df491f5108971817750daa
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Also remove NO_METRIC, which is not referenced anywhere anymore.
Change-Id: Ieaedf075af070a13aa3d975fee9b6b332203bfec
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Now that we're out of the unified kernel development phase, turn off
that debugging option.
Change-Id: I89decbdf445b1ba111a829edf2c8a36846419586
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
It was possible for a dummy thread to be marked as not timing while its
timeout.delta_ticks_from_prev was not -1 at the same time, which is a big
no-no.
Use _init_thread_base() to do a full initialization of the dummy thread.
Fixes ZEP-1312.
Change-Id: I16a2373be3329c142cf26f5dca6bfdbe6014ac5e
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Move _thread_base initialization to _init_thread_base(), remove mention
of "nano" in timeouts init and move timeout init to _init_thread_base().
Initialize all base fields via the _init_thread_base in semaphore groups
code.
Change-Id: I05b70b06261f4776bda6d67f358190428d4a954a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Renamed main_stack and idle_stack to _main_stack and
_idle_stack, respectively, and made them globals. This does
not affect performance. They are still kept as kernel-private
symbols and are not part of the kernel API.
This will allow these symbols to be referenced in calls to
stack_analyse misc functions to profile stack usage in
applications.
Change-id: Id6b746c5cfda617c26901c6e62c3e17114471f57
Signed-off-by: Vinayak Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
It's possible that an architecture needs a custom way of switching to
the main() task, rather than using _Swap() with a dummy thread.
Change-Id: I14e9bc67be35174ff16209bcea27b18a069ff754
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Artifact from microkernel, for handling multiple pending tasks on
nanokernel objects.
Change-Id: I3c2959ea2b87f568736384e6534ce8e275f1098f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
In addition to more priorities taking more memory to host them, finding
the next thread to run when it is not cached is slower since each extra
set of 32 priorities maps to a loop iteration. That loop is removed
entirely when the number of priorities is less than 32 (31 + the idle
thread).
Fixes ZEP-1303.
Change-Id: I3205df90d379a0f4456ff1d7f1aaa67ad2cddf15
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Rewrites the timestamping logic to always generate timestamps
via a function pointer that is initialized to sys_cycle_get_32(),
but can be changed to point to a user-supplied function. This
eliminates the need for an if/then/else construct in every place
that a timestamp is generated.
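The mechanism boils down to something like this (the variable and setter
names are illustrative):
    /* default timestamp source; applications may override it */
    static uint32_t (*timestamp_get)(void) = sys_cycle_get_32;

    void event_logger_set_timestamp_func(uint32_t (*func)(void))
    {
        timestamp_get = func;
    }

    /* every event then records timestamp_get() with no if/then/else */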
Change-Id: Id11f8c41b193a93cece16565978a525056010f0e
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Prepares the kernel event logger APIs for inclusion in the
API guide. Also corrects a couple of other issues:
* Gets rid of obsolete thread monitor code.
* Renames "timer_func" global variable to "_sys_k_timer_func"
to align it with kernel naming conventions.
Change-Id: I93d403f83ae44ff45dda489c2ead7bfec6ce1fa3
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Event logger APIs still express timeout delays in ticks;
need to convert to milliseconds when using unified kernel APIs.
Change-Id: I5fab66be660621cd2029417eaff3758e3ef4ba2c
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
When moving arch-specific thread structure to arch-agnostic, some field
accesses were missed when used in K_DEBUG statements, which are turned
off by default.
Change-Id: Ife0f49b8185a0db468deab73555f7034f20ca3e8
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Prio should be an int, since values are small integers, not a fixed-size
int32_t. It aligns with the prio parameters of the other APIs.
Stack size should be size_t.
Change-Id: Id29751b86c4ad7a7c2a7ffe446c2a96ae83c77bf
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
NANOKERNEL is obsolete and this kernel service was still using it, causing
deprecation warnings. Move it to POST_KERNEL.
Change-Id: I17fabd080645f93a8599f4ea25da844e1ec5f4bb
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Replaces confusing (and excessively long) configuration option
names with more intuitive names. Also enhances the description
of each option to clarify its use.
Change-Id: If4d4541407627482b1e90302cfc9df3bc8130d44
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
SYS_DLIST_FOR_EACH_NODE() is marked as non-safe when an item is removed
from the list while looping over it. This is not true per-se, since the
item, when removed, keeps its next and prev pointers intact; however, it
is true if the item is then put into a list, be it a different one or
the same one. To prevent this, SYS_DLIST_FOR_EACH_NODE_SAFE() must be
used.
_mbox_message_put() can remove items from the rx queue and then put them
in the ready queue: this would cause the loop to start processing other
ready threads as if they were items in the rx queue.
k_mbox_get() also removes items from the tx queue, but does not seem to
add them to another list; however, it now uses the safe version as well,
since that is the proper usage.
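Illustrative use of the safe iterator ('rx_queue' is a placeholder list):
the next pointer is captured before the body runs, so the current node may
be removed and even appended to another list:
    static void drain_example(sys_dlist_t *rx_queue)
    {
        sys_dnode_t *node, *next_node;

        SYS_DLIST_FOR_EACH_NODE_SAFE(rx_queue, node, next_node) {
            sys_dlist_remove(node);
            /* node may now safely be appended to another list */
        }
    }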
Change-Id: Ieccbff238fc8a036c0d53d873eaaf55f4f5a14af
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
There was a lot of duplication between architectures for the definition
of threads and the "nanokernel" guts. These have been consolidated.
Now, a common file kernel/unified/include/kernel_structs.h holds the
common definitions. Architectures provide two files to complement it:
kernel_arch_data.h and kernel_arch_func.h. The first one contains at
least the struct _thread_arch and struct _kernel_arch data structures,
as well as the struct _callee_saved and struct _caller_saved register
layouts. The second file contains anything that needs what is provided
by the common stuff in kernel_structs.h. Those two files are only meant
to be included in kernel_structs.h in very specific locations.
The thread data structure has been separated into three major parts:
common struct _thread_base and struct k_thread, and arch-specific struct
_thread_arch. The first and third ones are included in the second.
The struct s_NANO data structure has been split into two: common struct
_kernel and arch-specific struct _kernel_arch. The latter is included in
the former.
Offsets files have also changed: nano_offsets.h has been renamed
kernel_offsets.h and is still included by the arch-specific offsets.c.
Also, since the thread and kernel data structures are now made of
sub-structures, offsets have to be added to make up the full offset.
Some of these additions have been consolidated in shorter symbols,
available from kernel/unified/include/offsets_short.h, which includes an
arch-specific offsets_arch_short.h. Most of the code include
offsets_short.h now instead of offsets.h.
Change-Id: I084645cb7e6db8db69aeaaf162963fe157045d5a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The unified kernel is now the only supported kernel, so this
option is unnecessary. Eliminating this option also enables
the removal of some legacy code that is no longer required.
Change-Id: Ibfc339d643c8de16a2ed2009c9b468848b8b4972
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
k_alert_init() needs to set the "flags" field of its associated
work item to zero, indicating that the work item has not yet
been submitted to the system workqueue. Using the standard work
item initializer macro ensures this is done correctly.
Change-Id: I0001a5920f20fb1d8dc182191e6a549c5bf89be5
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
The API to disable _sys_soc_resume notification is currently
called _sys_soc_disable_wake_event_notification. This is
misleading because it is possible that the ISR from which
_sys_soc_resume is called could be from a different interrupt
with higher priority that happened before interrupts were
enabled. More accurately, it is a notification of exit from
kernel idling after pm operations.
Jira: ZEP-1271
Change-Id: I83747f2cacac1bc17f135d12f4aa4478970fc02d
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
The _sys_soc_resume hook is overloaded to handle two different
scenarios. It is primarily called to notify exit of kernel idling
after PM operations. It is also used to notify exit from deep sleep.
This is very confusing and also makes the implementation of the
hook function very difficult because of very different conditions
involved in the 2 different use cases. Further, users may not require
either or both use cases depending on their custom boot flow and
power state handling. To simplify, create a separate hook for the
purpose of deep sleep exit notification. Use the existing one to
only notify kernel idling exit after PM operations.
Jira: ZEP-1256
Change-Id: I96350199a0fd37f16590c8ee5302a94a3d71b8ba
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
There was no check to see if the current context was running an ISR when
taking a decision whether to do a context switch or not.
Change-Id: Ib9c426de8c0893b3d9383290bb59f6e0e41e9f52
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Useful for finding out if the current thread is protected against
preemption when using non-preemption to protect data structures.
Change-Id: Ib545a3609af3646ba49eeeb5a2c50dc51af010d4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Oversight. These functions are used extensively in the kernel guts, but
are also supposed to be an API.
k_sched_lock used to be implemented as a static inline. However, until
the header files are cleaned up and everything, including applications,
gets access to the kernel internal data structures, it must be
implemented as a function. To reduce the cost to the internals of the
kernel, the new internal _sched_lock() contains the same implementation,
but is inlined.
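A sketch of the described split, assuming the internal inline already
exists:
    /* public API: a real function until the headers are cleaned up */
    void k_sched_lock(void)
    {
        _sched_lock();    /* same implementation, inlined for kernel internals */
    }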
Change-Id: If2f61d7714f87d81ddbeed69fedd111b8ce01376
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
An application-supplied main() routine is now considered to be
essential to system operation. Thus, if main() experiences an
error that aborts the main thread, a fatal system error is raised.
Note: If main() completes its work and does a standard return to
its caller, the main thread terminates normally.
Change-Id: Icc9499f13578299244a856a246ad2a7d34a72f54
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
A thread defined via a legacy MDEF that belongs to the FPU or
SSE task group must set the thread option bits for FP or SSE
register use prior to being spawned.
If this is not done, and the kernel is configured for SSE support,
the kernel will auto-enable the thread's use of floating point
so that the thread saves SSE register context info even if it
belongs to just the FPU task group, which could cause the thread
to overflow its stack.
Note that this change only increases footprint for x86-based
applications that enable floating point register sharing.
Change-Id: Idfe4d20bcd7bc42b4cee6ac40ad7987e2a45ccf6
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
PRIMARY, SECONDARY, NANOKERNEL, MICROKERNEL init levels are now
deprecated.
New init levels introduced: PRE_KERNEL_1, PRE_KERNEL_2, POST_KERNEL
to replace them.
In most existing code, instances of PRIMARY are replaced with PRE_KERNEL_1
and SECONDARY with POST_KERNEL, since SECONDARY had a longstanding bug:
the documentation specified that SECONDARY ran before the kernel started
up, but it actually ran afterwards.
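A migration sketch for one init call (the driver init function is a
placeholder):
    static int my_driver_init(struct device *dev)
    {
        ARG_UNUSED(dev);
        return 0;
    }

    /* was: SYS_INIT(my_driver_init, SECONDARY, 50); */
    SYS_INIT(my_driver_init, POST_KERNEL, 50);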
Change-Id: I771bc634e9caf7f17dbf214a270bc9967eed7d32
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Verify the thread priorities are within the bounds when starting a new
thread and when changing the priority of a thread.
Change-Id: I007b3b249e4b80235b6439cbee44cad2f31973bb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Since lower-numbered thread priorities are higher, the code can be
misleading when comparing priorities, and often requires the same type of
comments. Instead, use utility inline functions that do the
comparisons.
_is_prio_higher already existed, but add comparisons for "lower than",
"higher than or equal to" and "lower than or equal to".
Change-Id: I8b58fe9a3dd0eb70e224e970fe851a2575ad468b
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
- Add missing irq_lock() before invoking power management.
- Only yield if the idle thread is a coop thread (in coop-only
configurations).
Change-Id: I030795e782590b3023f1d7883bbd058da2c45f4f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Add an assertion against unlocking mutex that is not locked.
Change-Id: I1032fb904e364015b486502c035529c8fe31de7a
Signed-off-by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
include/ will be cleaned up in a subsequent patch.
Change-Id: If3609f5fc8562ec4a6fec4592aefeec155599cfb
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Making a reference to the common work queue code should not necessarily
drag in the system workqueue, since it is possible to use a workqueue
that is not the system workqueue. This is done by moving the system
workqueue into its own code module.
Moving the system workqueue to its own code module allows removing the
NANO_WORKQUEUE and SYSTEM_WORKQUEUE kconfig options, and compiling the
common workqueue code and system workqueue all the time. They are only
linked in the final image if a reference to them exists, same as the
other kernel modules.
Change-Id: I6f48d2542bda24f4702e7c2e317818dd082b3c11
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
It is now possible to specify the expiry and stop functions
of a statically-defined timer, just as can be done for a
dynamically-defined timer.
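Usage sketch (the handler names are placeholders; the macro arguments follow
the description of expiry and stop functions):
    static void my_expiry(struct k_timer *timer)
    {
        ARG_UNUSED(timer);
    }

    static void my_stop(struct k_timer *timer)
    {
        ARG_UNUSED(timer);
    }

    K_TIMER_DEFINE(my_timer, my_expiry, my_stop);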
[Part of fix to ZEP-1186]
Change-Id: Ibb9096f3fdafdc6c904184587f86ecd52accdd66
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Adds standard prefix to symbolic option that flags a thread
as essential to system operation.
Change-Id: Ia904a81ce343fdd1cd44caaaeae641d822777f9b
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Port the power management test app to use unified kernel.
Change-Id: I2f10748be5ca7d9792f6e97c35f5f2aabab769e7
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
QMSI 1.3 natively supports restoring the SoC and peripherals
after sleep.
The Zephyr Power Management shim layer is updated
in order to support QMSI functions.
The following functions have been added:
void _sys_soc_set_power_state(enum power_state);
void _sys_soc_power_state_post_ops(void);
In order to fully support deep sleep, the function
_sys_soc_set_power_state now supports saving and
restoring CPU context and returns to the application.
The _sys_soc_set_power_state function also abstracts
QMSI CPU states and enables the application to choose
between the C1/C2 or C2LP states.
The QMSI power states are mapped as follows:
SYS_SOC_POWER_STATE_CPU_LPS -> power_cpu_c2lp
SYS_SOC_POWER_STATE_CPU_LPS_1 -> power_cpu_c2
SYS_SOC_POWER_STATE_CPU_LPS_2 -> power_cpu_c1
SYS_SOC_POWER_STATE_DEEP_SLEEP -> power_soc_deep_sleep
SYS_SOC_POWER_STATE_DEEP_SLEEP_1 -> power_soc_sleep
The following functions have been removed:
void _sys_soc_set_power_policy(uint32_t pm_policy);
int _sys_soc_get_power_policy(void);
FUNC_NORETURN void _sys_soc_put_deep_sleep(void);
void _sys_soc_put_low_power_state(void);
void _sys_soc_deep_sleep_post_ops(void);
Those changes are propagated to the samples.
All calls to QMSI are removed.
Jira: ZEP-1045, ZEP-993, ZEP-1047
Change-Id: I26822727985b63be0a310cc3590a3e71b8e72c8c
Signed-off-by: Julien Delayen <julien.delayen@intel.com>
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
Defines an object tracing list for each kernel object type
that supports object tracing, and ensures that both statically
and dynamically defined objects are added to the appropriate list.
Ensure that each static kernel object is grouped together with
the other static objects of the same type. Revise the initialization
function for each kernel type (or create it, if needed) so that
each static object is added to the object tracing list for its
associated type.
Note 1: Threads are handled a bit differently than other kernel
object types. A statically-defined thread is added to the thread
list when the thread is started, not when the kernel initializes.
Also, a thread is removed from the thread list when the thread
terminates or aborts, unlike other types of kernel objects which
are never removed from an object tracing list. (Such support would
require the creation of APIs to "uninitialize" the kernel object.)
Note 2: The list head variables for all kernel object types
are now explicitly defined. However, the list head variable for
the ring buffer type continues to be implicitly defined for the
time being, since it isn't considered to be a core kernel object
type.
Change-Id: Ie24d41023e05b3598dc6b344e6871a9692bba02d
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Allows event objects to pend signals in a cumulative way using
the semaphore in a non-binary way.
Jira: ZEP-928
Change-Id: I3ce8a075ef89309118596ec5781c15d4f3289d34
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
Enables boot time timestamps for unified kernel.
Also splits the source code into microkernel and nanokernel versions
instead of having common code. Not only does this make the code for
each project easier to read, but it also easily allows the nanokernel
version to link against the correct version of main().
Change-Id: Ie0afa2272c3347ebdacc0e3daeebbfe9583fe596
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
Before, the kernel would run the main() function twice; first
as an entry in k_task_list, and then again from _main(). The
_main() invocation would be using a potentially insufficient stack
size.
Now if an MDEF file declares a main() thread, invoke it from
_main(), but honor the desired priority and stack size.
Issue: ZEP-1145
Change-Id: I1abf38fc038e270059589b11d96fae1b3f265208
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Event is such an overloaded and generic term (event logger, *kernel*
event logger, "protocol" events in other subsystems, etc.), that it is
confusing as the name of an object. Events are kinda like signals, but not
exactly, so we chose not to name them 'signals' to prevent further
confusion. "Alerts" felt like a good fit, since they are used to "alert"
an application that something of significance should be addressed and
because an "alert handler" can be proactively registered with an alert.
Change-Id: Ibfeb5eaf0e6e62702ac3fec281d17f8a63145fa1
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This better aligns with the actual functionality of the object.
Change-Id: I70abf54f994e92abd7367251089ea4f735d273fe
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Fix the error in thread rescheduling:
Fix the Fast IRQ exit routine error where it rescheduled threads if
(prio >= 0) || (sched_locked == 0) || (next_thread == _current),
while the correct condition for thread rescheduling is
(prio >= 0) && (sched_locked == 0) && (next_thread != _current).
Fix the regular IRQ exit routine error where it rescheduled
threads when (next_thread == _current) instead of
(next_thread != _current).
Increased IDLE_STACK_SIZE for ARC architecture, to hold saved
registers.
Change-Id: I1d87a968e231e13822844b7564567e6ca310cde2
Signed-off-by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
They were the same, standardize on the lowercase one.
Change-Id: I8bca080e45f3e0970697d4451e468b9081f96f5f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Some changes that went into k_idle.c were missing in idle.c,
causing errors while building power management code for the
unified kernel. Added the missing changes.
Tested with power_mgr app built for unified kernel.
Jira: ZEP-1139
Change-Id: I9fe005544f7ee69d3cb3ff10c649be28037fcf15
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
Adapting to unified kernel naming of 'coop thread'.
Change-Id: I66cb766c2269acf0867e434bc21f633ea1111f89
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Needed by the kernel event logger when it records a context switch.
The kernel event logger releases a semaphore when a new event is
available in the log so that a thread can consume the event. However,
giving that semaphore must not itself add a context switch event to the
log, or the logger would be caught in an infinite loop.
Change-Id: I571a4aa0d302775e09cdc2d654a6b61f8b2e42c7
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Idle thread may need a bigger stack depending on extra work it has to
do, like power management or kernel event logging.
Change-Id: Iff691d7838036d602bad79799820b68ad55ad00f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Eliminates references to "fibers" and "tasks". Eliminates unnecessary
doxygen tags for internal routines. Miscellaneous other corrections
and improvements.
Change-Id: I0272fa477773c075799b67138bad5debcfd6b01e
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Existing code wasn't removing a thread from the kernel's list
of active threads if the thread terminated or aborted. (It did
remove it if the delayed starting of a thread was cancelled.)
Change-Id: Icc97917e33765696480d0e9bf31e882ef555d095
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
This is needed because some thread termination paths can be
invoked with no guarantee that thread preemption won't happen.
(It also aligns with the approach taken by the thread monitoring
initialization code.)
Change-Id: I28a384e051775390eb047498cb23fed22910e4df
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Renames _thread_exit() to _thread_monitoring_exit() to make
its purpose clearer. Revises the associated comments and
removes unnecessary doxygen tags.
Change-Id: I010a328d35d2d79d2a29b9d0b6c02097bb655989
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
The new kernel doesn't support the thread abort handler concept,
so only the legacy API for this capability is needed.
Change-Id: Ie809092e73b784504c3d298911d216bed8dd8993
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Fleshes out the prototype heap memory pool support
to make it fully operational. Noteworthy changes are
listed below:
Tweaks arguments to k_malloc() and k_free() to be more like
malloc() and free(). Similarly, modifies k_free() to take
no action when passed a NULL pointer.
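Usage now mirrors the C library, e.g.:
    static void heap_example(void)
    {
        void *buf = k_malloc(64);

        if (buf != NULL) {
            /* use the buffer */
            k_free(buf);
        }

        k_free(NULL);    /* explicitly a no-op */
    }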
Now stores the complete block descriptor at the start
of any block allocated from the heap memory pool. This
increases memory overhead by 4 bytes per block, but
streamlines the allocation and freeing algorithms. It also
ensures that the routines will work if the block descriptor
internals are changed in the future.
Now allows the heap memory pool to be defined using the
HEAP_MEM_POOL_SIZE configuration option. This will be the
official configuration approach in the unified kernel.
Also allows the heap memory pool to be defined using the
(undocumented) HEAP_SIZE entry in the MDEF. This is provided
for legacy reasons only.
Co-locates the memory pool initialization code so that the line that
triggers memory pool initialization during booting sits right next to
the routine that performs the initialization.
Change-Id: Ifea9d88142fb434d4bea38bb1fcc4856a3853d8d
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Reworks k_work_q_start() so that it accepts its 3 configuration
settings directly, rather than forcing the caller to pass in a
configuration data structure.
Change-Id: Ic0bd1b94f1a1c8e0f8a84b3bd3677d59d0708734
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
The PM control function is used only by the PM subsystem. Update the
documentation to make this clear and name the relevant structures and
functions with _pm_ in the name.
Jira: ZEP-1044
Change-Id: I29e5b7690db34a228ed30a24a2e912e1360a0090
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
Needed to resolve various undeclared symbols when SYS_POWER_MANAGEMENT
is enabled.
Jira: ZEP-1073
Change-Id: I21db2580efb15c80d84d9163fe9e8245d6dc0391
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
Moves the source code for ring buffers to the 'misc' area, since
it isn't really a central component of the kernel. (This also
aligns the ring buffer source code with its include file, which
is already under 'include/misc'.)
Change-Id: I765a383a05f51fa67d154446f412496e689f9702
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Add 'legacy_' prefix, as per the revised naming convention.
Change-Id: I0eaff33a561523ad11621b3104862c574930556e
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Since the unified kernel's build system doesn't properly handle
a file in the 'legacy' directory if it contains an initialization
function, some legacy code can't be located there. To avoid confusion,
the revised convention for legacy code is to keep any file that
contains only legacy code in the main kernel directory, and to give
it a "legacy_" prefix.
Change-Id: I019adc8f36611d4481bdcf31dde66597d4cf54ae
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Ensures that all APIs which accept a timeout value wait for at least
the specified amount of time, and do not time out prematurely.
* The kernel now waits for the next system clock tick to occur before
the timeout interval is considered to have started. (That is, the only
way to ensure a delay of N tick intervals is to wait for N+1 ticks
to occur.)
* Gets rid of ticks -> milliseconds -> ticks conversion in task_sleep()
and fiber_sleep() legacy APIs, since this introduces rounding that
-- coupled with the previous change -- can alter the number of ticks
being requested during the sleep operation.
* Corrects work queue API that was incorrectly shown to use a delay
measured in ticks, rather than milliseconds.
Change-Id: I8b04467237b24fb0364c8f344d872457418c18da
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Provides users with a more compact and intuitive API for kernel
timers.
Provides legacy support for microkernel timers and nanokernel
timers by building on the new kernel timer infrastructure.
Each timer type requires only a small amount of additional
wrapper code, as well as the addition of a single pointer
field to the underlying timer structure, all of which will be
easily removed when support for the legacy APIs is discontinued.
Change-Id: I282dfaf1ed08681703baabf21e4dbc3516ee7463
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
- Reorders parameters where necessary
- Adds alignment parameter to K_MSGQ_DEFINE() for buffer alignment
- Renames parameters where necessary so they are more intuitive
Change-Id: I0b53105c04109127897bf4790e6908082f82da4e
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
Now invokes any microkernel-level init functions used by
legacy applications.
Change-Id: I8f68ddba764f13d037a679b74121713983f4aaba
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
K_THREAD_DEFINE() can no longer specify a thread group. However, it now
accepts a 'delay' parameter just as k_thread_spawn() does.
To create a statically defined thread that may belong to one or more thread
groups, the new internal _MDEF_THREAD_DEFINE() macro is used. It is only used
for legacy purposes.
Threads cannot both have a delayed start AND belong to a thread group.
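Usage sketch with the new delay parameter (entry function, sizes and values
are placeholders):
    static void my_entry(void *p1, void *p2, void *p3)
    {
        ARG_UNUSED(p1);
        ARG_UNUSED(p2);
        ARG_UNUSED(p3);
    }

    K_THREAD_DEFINE(my_thread, 1024, my_entry, NULL, NULL, NULL,
                    5 /* priority */, 0 /* options */, 500 /* delay, ms */);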
Jira: ZEP-916
Change-Id: Ia6e59ddcb4fc68f1f60f9c6b0f4f227f161ad1bb
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
Tweak mailbox API parameters so that not only are their descriptions
correct, but their names match across header file and C file.
Change-Id: Ieeb3a40fb7c535a5eac2e06533d01d13aaf69181
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
- Reorders parameters where necessary
- Adds alignment parameter to K_PIPE_DEFINE()
- Renames parameters where necessary so they are sync'd
between header and source files
Change-Id: I4f2367abc28aff646cc90beb9f08bb266e143b0c
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
Reverts a change that was made to the defragmentation routine
when memory pool support was ported from the microkernel to the
unified kernel.
The change was intended to improve the readability of the algorithm,
but introduced a subtle change in behavior. For example, when
k and i are zero and the number of block set entries is one,
the original algorithm did not execute the while loop, while the
revised algorithm executed the loop once.
Change-Id: I2b0263a8d7b80846013c459847817d314f803457
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Build breaks when enabling CONFIG_NEWLIB_LIBC because it has its own
sched.h file.
This is a bad symptom of a greater issue: the build system passes many
'-I<path>' options to the compiler, and that allows including header
files by simply specifying their names (when located somewhere other than
<zephyr>/include/), which can cause clashes when several files in different
locations have the same name, as in this case.
Fixes ZEP-1062.
Change-Id: I81d1d69ee6669a609cd0c420b1b8f870d17dcb67
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Gets rid of official support for dynamic timer allocation
in the unified kernel, since users can easily define and
initialize timers at any time. Legacy support for dynamic
timers is maintained for backwards compatibility reasons
for the time being ...
Change-Id: I12b3e25914fe11e3886065bee4e96fb96f59b299
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Folds this API into k_stack_init() to provide a single API
that requires the caller to pass in the stack buffer, just
as is done for other kernel object initialization APIs
involving the use of a buffer.
Change-Id: Icad5fd6e5387d634738d1574f8dfbc5421cd642d
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
* Gets rid of k_current_priority_get(). Users can just call
k_thread_priority_get(k_current_get()) instead.
* Declares k_thread_priority_get() in kernel.h, where it
really belongs.
* Removes duplicate declaration of k_thread_priority_set().
Change-Id: I616ae6f2e06c95ecba3b92324186b3fa29162fd1
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Do not include timeout_q.h when !SYS_CLOCK_EXIST, this allows removing
_unpend_thread_timing_out() in that case.
Have _abort_thread_timeout() return 0 (success) when !SYS_CLOCK_EXIST.
With this change, the minimal footprint nanokernel project compiles for
the unified kernel.
Change-Id: Ifbf9167a82fb3ebcf6941bf3f85c105c23c9060c
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
It is always needed by the kernel, since the return codes are now
errnos. CONFIG_ERRNO is the mechanism for having a per-thread errno, not
for using errno values.
Change-Id: I4ed14896a342f4122793d91b13c41b4a6a74716d
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Timers are based off timeouts now, which can only be enabled when the
system clock is enabled. So the three are really just one setting now.
Keep the NANO_TIMERS and NANO_TIMEOUTS around for now until all
middleware that relies on them is updated. They are always enabled when
SYS_CLOCK_EXISTS is enabled.
Change-Id: Iaef1302ef9ad8fc5640542ab6d7304d67aafcfdc
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
- ensure dummy thread's stack is aligned
- rename nano_init() to prepare_multithreading
- move _Swap() to main thread into its own function
Change-Id: I6c8dbe2a4e034f3db90b55d1a5e30bc73bac3d50
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Rename remaining functions to fit with kernel naming convention for
internal interfaces. Use struct k_thread instead of struct tcs.
Change-Id: I28cd7f6f4d7ddaeb825c8d2999242d8d2dd93f31
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>