Commit Graph

2234 Commits

Author SHA1 Message Date
Andrew Boie
ad110d495e kernel: fatal: check if _current is NULL
Arches which have custom swap to main can have _current be
NULL very, very early in the boot process. Check this to
avoid an infinite loop of fatal errors.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-24 12:54:32 -04:00
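A minimal sketch of the kind of guard described above (hypothetical
names; not the verbatim kernel/fatal.c code):

void fatal_error_handler(unsigned int reason)
{
	/* On arches with a custom swap to main, _current may still be
	 * NULL this early in boot; touching it would fault again and
	 * loop forever, so just halt.
	 */
	if (_current == NULL) {
		arch_system_halt(reason);   /* assumed halt hook */
	}

	/* ... normal fatal error reporting using _current ... */
}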
Andrew Boie
f5c3fc498b kernel: add arch_mem_unmap() interface
The core kernel does not use this yet, but it will be later used
as part of infrastructure for memory-mapping stacks, as detailed
in #28899.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-23 22:02:47 -04:00
Carlo Caione
47c592cd32 mmu: Fix mapping_pos calculation
In the MMU code, mapping_pos is miscalculated when, for example,
SRAM_BASE_ADDRESS==0x40000000 and KERNEL_VM_SIZE==0xc0000000, yielding
a mapping_pos of 0x0.

The problem is that the two values must be cast to uintptr_t before
the result is computed, to avoid the rollover to 0.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-23 16:24:04 -04:00
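A self-contained illustration of the rollover being avoided (on a host
where uintptr_t is 64-bit; the constants are the ones from the commit
message):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t base = 0x40000000U;   /* SRAM_BASE_ADDRESS */
	uint32_t size = 0xc0000000U;   /* KERNEL_VM_SIZE */

	/* The 32-bit addition wraps to 0; casting the result afterwards
	 * cannot undo the rollover.
	 */
	uintptr_t wrong = (uintptr_t)(base + size);

	/* Casting each operand first keeps the full value. */
	uintptr_t right = (uintptr_t)base + (uintptr_t)size;

	printf("wrong=0x%" PRIxPTR " right=0x%" PRIxPTR "\n", wrong, right);
	return 0;
}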
Andrew Boie
933b420235 kernel: add context pointer to thread->fn_abort
For compatibility layers like CMSIS where thread objects
are drawn from a pool, provide a context pointer to the
exited thread object so it may be freed.

This is somewhat obscure and has no supporting APIs or
overview documentation and should be considered a private
kernel feature. Applications should really be using
k_thread_join() instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-22 23:32:37 -04:00
Andrew Boie
fa6db61eb2 userspace: do nothing if added to same domain
If k_mem_domain_add_thread() is called for a memory domain
the thread already belongs to, do nothing.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-22 16:47:07 -07:00
Anas Nashif
d2c71796af kernel: document k_sleep with K_FOREVER
When k_sleep() is called with K_FOREVER as the timeout value, we treat
it as a suspend call.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-10-22 07:00:15 -04:00
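Usage-level example of the documented behavior (illustrative; worker()
and the thread that resumes it are hypothetical):

void worker(void *p1, void *p2, void *p3)
{
	/* Treated as suspending ourselves: the thread does not wake up
	 * after a timeout, only when another thread resumes it, e.g.
	 * via k_thread_resume().
	 */
	k_sleep(K_FOREVER);

	/* Only reached after the thread has been explicitly resumed. */
}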
Anas Nashif
081605ee23 kernel: do not queue a thread that is already queued
Do not add a thread to the run queue if it was already added.

Fixes #29244

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-10-22 07:00:15 -04:00
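A minimal sketch of the idempotent-queueing guard described above
(hypothetical helper names, not the scheduler source):

static void ready_thread(struct k_thread *thread)
{
	/* Adding a node that is already linked into the run queue would
	 * corrupt the list, so bail out if it is already there.
	 */
	if (thread_is_queued(thread)) {
		return;
	}

	run_queue_add(thread);
	mark_thread_queued(thread);
}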
Anas Nashif
bf69afcdae kernel: only resume suspended threads
Do not try to resume a thread that was not suspended.

Fixes #28694

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-10-22 07:00:15 -04:00
Andrew Boie
52bf482471 userspace: fix k_mem_domain_init()
The synchronous arch hook wasn't being invoked when
k_mem_domain_init() was called with a partition list.
This could result in regions not being programmed in
the page tables as expected.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-21 10:20:49 -07:00
Carlo Caione
f161223637 userspace: Fix thread index type in z_thread_perms_all_clear()
The type of the thread index returned by thread_index_get() must be
cast to int when comparing with (-1). Directly using uintptr_t breaks
the ARMv8 implementation, where the check (index != -1) also passes
when no thread index is returned.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-21 08:00:35 -04:00
Andy Ross
a8d5437799 soc/xtensa: Misc. checkpatch fixups
Code style fixes.  Kept separate from the original changes to permit
easier rebasing.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-10-21 06:38:53 -04:00
Andy Ross
4a8b3d194c kernel/poll: Mark incompatibility with KERNEL_COHERENCE
The k_poll implementation places a struct _poller on the stack and
shares it with other threads, which is incompatible with the
KERNEL_COHERENCE model of cached stacks.

Make this a hard build failure instead of a kconfig dependency for
clarity.  The failures if a user actually enables both are subtle and
difficult to debug.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-10-21 06:38:53 -04:00
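The "hard build failure" can be expressed with a compile-time assertion
along these lines (a sketch, not necessarily the exact check that was
added):

#ifdef CONFIG_KERNEL_COHERENCE
/* struct _poller lives on the caller's stack and is shared with other
 * threads, which is unsafe when thread stacks are cached/incoherent.
 */
BUILD_ASSERT(!IS_ENABLED(CONFIG_POLL),
	     "CONFIG_POLL is incompatible with CONFIG_KERNEL_COHERENCE");
#endif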
Andy Ross
f6d32ab0a4 kernel: Add cache coherence management framework
Zephyr SMP kernels need to be able to run on architectures with
incoherent caches.  Naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.

Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory.  Where this is
available, place all writable data sections by default into the
coherent region.  An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.

Stack memory will be incoherent by default, as by definition it is
local to its current thread.  This requires special cache management
on context switch, so an arch API has been added for that.

Also, when enabled, add assertions to strategic places to ensure that
shared kernel data is indeed coherent.  We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks.  In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-10-21 06:38:53 -04:00
Andrew Boie
348a0fda62 userspace: make mem domain lock non-static
Strictly speaking, any access to a mem domain or its
containing partitions should be serialized on this lock.

Architecture code may need to grab this lock if it is
using this data during, for example, context switches,
especially if they support SMP, as locking interrupts
is not enough.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-20 09:37:49 -07:00
Andrew Boie
9f87deafd2 userspace: fix k_mem_domain_destroy()
This deprecated API won't be removed for one more release; ensure it
doesn't put the kernel into a bad state. It currently sets all member
threads' domain assignment to NULL, which is not what we want.

Have it reassign all member threads to the default domain.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-20 09:37:49 -07:00
Andrew Boie
b5a71f74a8 userspace: remove threads from domain on abort
When threads exited, we were leaving dangling references to
them in the domain's mem_domain_q.

z_thread_single_abort() now calls into the memory domain
code via z_mem_domain_exit_thread() to take the thread off.

The thread setup code now invokes z_mem_domain_init_thread(),
avoiding extra checks in k_mem_domain_add_thread(), since we
know the object isn't currently a member of a domain.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-20 09:37:49 -07:00
David Komel
211e5b73c0 kernel: k_busy_wait() should return immediately on zero timeout
When CONFIG_ARCH_HAS_CUSTOM_BUSY_WAIT is not defined, cycles_to_wait
is calculated using a division operation. This calculation could take a
significant amount of time (a few microseconds on some architectures,
depending on the system clock).
In the special case of a zero usec_to_wait, the function should return
immediately rather than spend time on calculations.
For example, in the SPI driver (spi_context.h, _spi_context_cs_control()),
k_busy_wait() can be called with zero delay. This can increase SPI
transaction time significantly.
Another improvement is moving the start_cycles initialization
before the cycles_to_wait calculation, so the time it takes to calculate
cycles_to_wait is taken into account.

Signed-off-by: David Komel <a8961713@gmail.com>
2020-10-14 14:52:06 -04:00
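A sketch of the two improvements described above (simplified busy-wait
shape; not the verbatim kernel code):

void busy_wait(uint32_t usec_to_wait)
{
	if (usec_to_wait == 0U) {
		/* Nothing to wait for: skip the (slow) division below. */
		return;
	}

	/* Read the start time before computing cycles_to_wait so the
	 * time spent on the division counts toward the wait.
	 */
	uint32_t start_cycles = k_cycle_get_32();
	uint32_t cycles_to_wait = usec_to_wait *
		(sys_clock_hw_cycles_per_sec() / USEC_PER_SEC);

	for (;;) {
		uint32_t current_cycles = k_cycle_get_32();

		if ((current_cycles - start_cycles) >= cycles_to_wait) {
			break;
		}
	}
}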
Peter Bigot
923caadf3b kernel: delayed_work: update k_delayed_work_cancel documentation
Although the documentation states that only work items that have not
completed the delay can be cancelled, in fact pending items can also
be removed from a work queue.  Document that behavior.

Also document the specific return value that will be obtained, based
on the state of the work item at the time cancellation is attempted,
with the current implementation.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-10-09 11:48:00 +02:00
Peter Bigot
f69f935865 kernel: delayed_work: fix race condition in k_delayed_work_cancel
The implementation checks the work queue pointer before taking a lock
and entering code that, depending on build options, may fault if that
pointer is null.  Move the check under the lock to ensure the condition
is preserved.  The check in work_cancel is then no longer necessary,
since the condition is verified under lock at all call sites.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-10-09 11:48:00 +02:00
Jennifer Williams
4d33007486 kernel: stack: fix stack_push spinlock and return
z_impl_k_stack_push() took its spinlock only after accessing
stack members, which could cause race conditions, and it used
multiple return statements. This commit
- moves the lock before the CHECKIF
- implements goto for the flow of lock, reschedule, and unlock
- uses ret for a single return at the end

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2020-10-07 17:10:36 -04:00
Peter A. Bigot
f1b86caff3 kernel: timer: update k_timer API for const correctness
API that takes k_timer structures but doesn't change data in them is
updated to const-qualify the underlying object, allowing information
to be retrieved from contexts where the containing object is
immutable.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2020-10-02 11:29:14 +02:00
Peter A. Bigot
16a4081520 kernel: timer: update _timeout API for const correctness
API that takes _timeout structures but doesn't change data in them is
updated to const-qualify the underlying object, allowing information
to be retrieved from contexts where the containing object is
immutable.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2020-10-02 11:29:14 +02:00
Aastha Grover
83b9f69755 code-guideline: Fixing code violation 10.4 Rule
Both operands of an operator in the arithmetic conversions
performed shall have the same essential type category.

The changes convert integer constants to unsigned integer
constants.

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2020-10-01 17:13:29 -04:00
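Typical shape of such a change (illustrative, not a specific hunk from
the commit; get_pending_events() is a hypothetical helper):

uint32_t events = get_pending_events();

/* Before: the signed constant 1 is mixed with an unsigned operand. */
uint32_t before = events | 1;

/* After: both operands share the unsigned essential type category. */
uint32_t after = events | 1U;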
Andrew Boie
e0ca403f4c kernel: add assert for mis-used k_thread_create()
k_thread_create() works as expected on both uninitialized memory
and threads that have completely exited.

However, horrible and difficult-to-comprehend things can happen if a
thread object is already being used by the kernel and
k_thread_create() is called on it.

Historically this has been a problem with test cases trying to be
parsimonious with thread objects and not properly cleaning up
after themselves. Add an assertion for this, which should catch
both the illegal creation of a thread that is already active and
threads racing to create the same thread object.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-30 14:11:59 -04:00
Andrew Boie
f5a7e1a108 kernel: handle thread self-aborts on idle thread
Fixes races where threads on another CPU are joining the
exiting thread, since it could still be running when
the joiners wake up on a different CPU.

Fixes problems where the thread object is still being
used by the kernel when the fn_abort() function is called,
preventing the thread object from being recycled or
freed back to a slab pool.

Fixes a race where a thread is aborted from one CPU while
it self-aborts on another CPU, which was previously worked
around with a busy-wait.

Precedent for doing this comes from FreeRTOS, which also
performs final thread cleanup in the idle thread.

Some logic in z_thread_single_abort() has been rearranged such that
when we release sched_spinlock, the thread object pointer
is never dereferenced by the kernel again; join waiters
or fn_abort() logic may free it immediately.

An assertion has been added to z_thread_single_abort() to ensure
it never gets called with thread == _current outside of an ISR.

Some logic has been added to ensure z_thread_single_abort()
tasks don't run more than once.

Fixes: #26486
Related to: #23063 #23062

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-30 14:11:59 -04:00
Carles Cufi
28cb9dab64 kernel: Deprecate CONFIG_MULTITHREADING
Deprecate the Kconfig option for the time being. Unless a contributor
volunteers to take over the work to maintain the option, it will be
removed after 2 releases.

Relates to #27415.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2020-09-23 15:50:32 -05:00
Lauren Murphy
f29a2d1ccc doc: Clarify semantics of k_msgq_put
Amend the Doxygen documentation for k_msgq_put to note that the message
content will not be modified as a result of the function call, and
constify the data parameter in the function prototype.

Fixes #22301

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2020-09-23 13:21:07 -05:00
Daniel Leung
b5c2ff9397 kernel: add kconfig CONFIG_KERNEL_MEM_POOL
Adds a kconfig CONFIG_KERNEL_MEM_POOL to decide whether
kernel memory pool related code is compiled. This option
can be disabled to shrink code size. If k_heap is not
being used at all, kheap.c will also not be compiled,
resulting in an even smaller code size.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-09-19 05:49:13 -04:00
Peter Bigot
332b7df61b kernel: avoid implementation-defined behavior in timeout calculation
When to->dticks is an int64_t it may happen that the calculated
remaining time is a value that cannot be exactly represented in the
destination int32_t, producing an implementation-defined result which
can include a signal (interrupt).  Cap the maximum delay to the
largest value supported by the int32_t result.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-09-17 22:19:35 -04:00
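A sketch of the capping described above (hypothetical helper; the
actual change lives in the timeout remaining-ticks calculation):

#include <stdint.h>

static int32_t remaining_ticks(int64_t dticks)
{
	/* A remainder larger than INT32_MAX cannot be represented in
	 * the int32_t result; converting it directly is
	 * implementation-defined (and may raise a signal), so
	 * saturate instead.
	 */
	if (dticks > (int64_t)INT32_MAX) {
		return INT32_MAX;
	}

	return (int32_t)dticks;
}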
Watson Zeng
37f75d2d1f kernel: sched: bug fix for trace and monitor
sys_trace_thread_abort and z_thread_monitor_exit in
z_thread_single_abort also need to be protected by
sched_spinlock; otherwise, if an interrupt is pending after
the spinlock is released, it will cause a reschedule on
interrupt exit and those trace and monitor functions will
never be reached.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2020-09-17 09:30:22 +02:00
Andrew Boie
a8775ab8cb sched: don't use local lock in z_tick_sleep()
We're modifying thread_state. Use sched_spinlock instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-16 13:11:12 -05:00
Andrew Boie
8e0f6a5936 sched: hold spinlock in z_time_slice()
We are checking thread->base members like thread_state and prio
and making decisions based on them; hold the sched_spinlock to
avoid potential concurrency problems if these members are modified
on another CPU or in a nested interrupt.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-16 13:11:12 -05:00
Andrew Boie
83d7770de4 sched: check if runnable in sliceable()
We need to check if a thread is runnable at all before we
contemplate putting it on the end of the priority queue;
it might not be on the queue at all if it was suspended.

This replaces the less comprehensive check of whether the
thread was pending on a timeout.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-16 13:11:12 -05:00
Andrew Boie
ffc5bdffbb sched: hold spinlock in z_thread_timeout()
We are checking and modifying members of thread->base
(in particular its waitq and thread_state) which are
nominally protected by sched_spinlock. Hold it while
doing this to avoid concurrent changes on another CPU
or ISR preemption.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-16 13:11:12 -05:00
Peter Bigot
1a7cd6ddbd kernel: device: invert sense of ready bit
The device status ready bit replaced the previous hack of clearing the
device API pointer when initialization of the device failed.  It
inadvertently changed the behavior of device_get_binding() when
invoked before the device was initialized: previously that succeeded
for uninitialized devices, after the change it failed.

Multiple driver initializations rely on being able to get a device
pointer for something they're going to depend on in their init
function, even if that device has not yet been initialized.  Although
this is wrong, and would cause faults if the device failed to
initialize before use, in practice it has been working.

It's not feasible to identify all the situations where this has
occurred, nor to add code to diagnose such cases without changing the
state associated with a device to distinguish initialized from
initialization success/failure.  Restore the previous behavior until a
more holistic solution is developed.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-09-15 18:22:38 +02:00
Anas Nashif
6e27478c3d benchmarking: remove execution benchmarking code
This code had one purpose only: feeding timing information into a test;
it was not used by anything else. The custom trace points unfortunately
were not accurate, and this test was delivering information that
conflicted with other tests we have, due to the placement of such trace
points in the architecture and kernel code.

For such measurements we are planning to use the tracing functionality
in a special mode that would be used for metrics without polluting the
architecture and kernel code with additional tracing and timing code.

Furthermore, much of the assembly code used had issues.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-09-05 13:28:38 -05:00
Daniel Leung
0ffcfa9633 timing: introduce timing functions as a generic feature
Add timing functions and APIs.  This is now used with some of the tests
we have for performance and metrics, and will be used wherever timing
information is needed, for example for tracing, profiling and other
operations where timing info is critical.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-09-05 13:28:38 -05:00
Watson Zeng
1dddbecb35 tracing: swap: bug fix and enhancement for ARC
* Move switched_in into the arch context switch assembly code,
  which will correctly record the switched_in information.

* Add switched_in/switched_out for context switch in irq exit.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2020-09-03 21:54:15 +02:00
Andrew Boie
5e0b55c30e kernel: demote k_mem_map to z_mem_map
Memory mapping, for now, will be a private kernel API
and is not intended to be application-facing at this time.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-03 14:24:38 -04:00
Andrew Boie
7d32e9f9a5 mmu: support only identity RAM mapping
We no longer plan to support a split address space with
the kernel in high memory and per-process address spaces.
Because of this, we can simplify some things. System RAM
is now always identity mapped at boot.

We no longer require any virtual-to-physical translation
for page tables, and can remove the dual-mapping logic
from the page table generation script since we won't need
to transition the instruction pointer off of physical
addresses.

CONFIG_KERNEL_VM_BASE and CONFIG_KERNEL_VM_LIMIT
have been removed. The kernel's address space always
starts at CONFIG_SRAM_BASE_ADDRESS, with a fixed size
specified by CONFIG_KERNEL_VM_SIZE.

Driver MMIOs and other uses of k_mem_map() are still
virtually mapped, and the later introduction of demand
paging will result in only a subset of system RAM being
a fixed identity mapping instead of all of it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-03 14:24:38 -04:00
Flavio Ceolin
5408f3102d debug: x86: Add gdbstub for X86
It implements the GDB remote protocol to talk with a host GDB during the
debug session. The implementation is divided into three layers:

1 - The top layer that is responsible for the GDB remote protocol.
2 - An architecture-specific layer responsible for reading/writing
    registers, setting breakpoints, handling exceptions, ...
3 - A transport layer used to communicate with the host

The communication with GDB on the host is synchronous: the system stops
execution, waits for instructions, and resumes execution after a
"continue" or "step" command. The one exception in the protocol is when
the host sends a packet to cause an interruption, usually triggered by a
Ctrl-C; this implementation ignores that packet, though.

This initial work supports only X86, using a UART as the backend.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-09-02 20:54:57 -04:00
Andrew Boie
ffc1da08f9 kernel: add z_thread_single_abort to private hdr
We shouldn't be copy-pasting extern declarations like this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-02 15:02:06 -07:00
Andrew Boie
3425c32328 kernel: move stuff into z_thread_single_abort()
The same code was being copy-pasted in k_thread_abort()
implementations; just move it into z_thread_single_abort().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-02 15:02:06 -07:00
Tomasz Bursztyka
ae5a761f85 kernel: Apply IRQ offload API change
Switching to constant parameter.

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Tomasz Bursztyka
e18fcbba5a device: Const-ify all device driver instance pointers
Now that the device_api attribute, as well as all the other attributes,
is unmodified at runtime, it is possible to switch all device driver
instances to be constant.

A coccinelle rule is used for this:

@r_const_dev_1
  disable optional_qualifier
@
@@
-struct device *
+const struct device *

@r_const_dev_2
 disable optional_qualifier
@
@@
-struct device * const
+const struct device *

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Peter Bigot
2fcf76219e userspace: update k_object API to support immutable objects
The k_object API associates mutable state structures with known kernel
objects to support userspace.  The kernel objects themselves are not
modified by the API, and in some cases (e.g. device structures) may be
const-qualified.  Update the API so that pointers to these const
kernel objects can be passed without casting away the const qualifier.

Fixes #27399

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-09-02 13:48:13 +02:00
Tomasz Bursztyka
aac9e2c5e3 device: Revise how initialization status is being handled
In order to make all device instances constant, the driver_api pointer
is no longer set to NULL if initialization failed.

Instead, a dedicated bitfield is used for this.

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Spoorthy Priya Yerabolu
9247e8bc44 code-guideline: Tag name should be a unique identifier
The following are changes to variable names that match tag names
(Rule 5.7 violations).

In kernel.h, event_type matches a tag name in
lib/os/onoff.c. Added a _ prefix to event_type and
also to the macro argument names.

In userspace.c, *dyn_obj matches the tag name
dyn_obj in the file itself. Changed it to dyn.

In device.h, device_mmio.h, init.h and init.c,
changed *device to dev, except for one occurrence in
init.h.

Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
2020-09-01 08:03:23 -04:00
Andrew Boie
00f71b0d63 kernel: add CONFIG_ARCH_MEM_DOMAIN_SYNCHRONOUS_API
Saves us a few bytes of program text on arches that don't need
these implemented, currently all uniprocessor MPU-based systems.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-26 20:35:02 -04:00
Andrew Boie
d11e3c3456 userspace: add debug logs to mem domain code
Useful when debugging stuff.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-26 20:35:02 -04:00
Andrew Boie
9bfc8d82d0 userspace: introduce default memory domain
We make a policy change here: all threads are members of a
memory domain, never NULL. We introduce a default memory domain
for threads that haven't been assigned to or inherited another one.

Primary motivation for this change is better MMU support, as
one common configuration will be to maintain page tables at
the memory domain level.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-26 20:35:02 -04:00
Andrew Boie
e433494f40 kernel: mmu: implement virtual mappings
These will grow downward from CONFIG_KERNEL_VM_LIMIT.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
Daniel Leung
49206a86ff debug/coredump: add a primitive coredump mechanism
This adds a very primitive coredump mechanism under subsys/debug
where, during a fatal error, register and memory content can be
dumped to a coredump backend. One such backend utilizing the log
module for output is included. Once the coredump log is converted
to a binary file, it can be used together with the ELF output file
as inputs to an overly simplified implementation of a GDB server.
This GDB server can be attached via the target remote command of
GDB and will serve register and memory content. This allows
using GDB to examine the stack and memory where the fatal error
occurred.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-08-24 20:28:24 -04:00
Anas Nashif
390537bf68 tracing: trace mutex/semaphore using dedicated calls
Instead of using generic trace calls, use dedicated functions for
tracing semaphores and mutexes.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Anas Nashif
d1049dc258 tracing: swap: cleanup trace points and their location
Move tracing switched_in and switched_out to the architecture code and
remove duplications. This changes swap tracing for x86, xtensa.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Anas Nashif
5c31d00a6a tracing: trace k_sleep
Trace when k_sleep is called.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Anas Nashif
3b75a08e3f Kconfig: move power management Kconfig into subsys/power
Consolidate all PM related Kconfigs in one place.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 10:24:30 +02:00
Andrew Boie
72792a5e63 userspace: cleanup mem domain assertions
The memory domain APIs document that overlapping regions are
not allowed; check for this unconditionally.

Cleanup assertion error messages. Use __ASSERT_NO_MSG for
blindingly obvious NULL checks.

We now have a `check_add_partition()` function to perform
all the necessary sanity checks on adding a partition to a
domain. This returns true or false, which will help later
when we implement #24609.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-20 13:58:54 -04:00
Andrew Boie
bc35b0b780 userspace: add optional member to memory domains
Some systems may need to associate arch-specific data to
a memory domain. Add a Kconfig and `arch` field for this,
and a new arch API to initialize it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-20 13:58:54 -04:00
Andrew Boie
282cd932c8 kernel: don't reschedule thread abort in ISR
Zephyr has a policy of invoking the scheduler on the way
out of IRQs anyway for all arches, so this is unnecessary.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-18 08:36:35 +02:00
David Leach
1ab90044f2 kernel/timeout: Fix coverity warning CID 211045
The dticks field was changed from a signed to an unsigned
value, so the assert test to ensure it isn't negative
is no longer needed.

fixes #26355

Signed-off-by: David Leach <david.leach@nxp.com>
2020-08-16 09:29:41 -04:00
Anas Nashif
379b93f0d3 kernel: do not call swap if next ready thread is current thread
Check if the next ready thread is the same as the current thread before
calling z_swap.
This avoids calling swap just to go back to the original thread.

Original code: thread 0x20000118 switches out and then in again...

>> 0x20000118 gives semaphore(signal): 0x20000104 (count: 0)
>> thread ready: 0x20000118
>> 0x20000118 switched out
>> 0x20000118 switched in
>> end call to k_sem_give
>> 0x20000118 takes semaphore(wait): 0x200000f4 (count: 0)
>> thread pend: 0x20000118
>> 0x20000118 switched out
>> 0x200001d0 switched in

with this patch:

>> 0x200001d0 gives semaphore(signal): 0x200000f4 (count: 0)
>> thread ready: 0x200001d0
>> end call to k_sem_give
>> 0x200001d0 takes semaphore(wait): 0x20000104 (count: 0)
>> thread pend: 0x200001d0
>> 0x200001d0 switched out
>> 0x20000118 switched in
>> end call to k_sem_take

The above is output from tracing with a custom format used for
debugging.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-12 15:32:29 -04:00
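A minimal sketch of the check described above (hypothetical function
shape and helper names, not the verbatim scheduler code):

static void maybe_swap(void)
{
	struct k_thread *next = next_ready_thread();   /* assumed helper */

	if (next == _current) {
		/* Nothing better to run: skip the pointless switch out
		 * of and back into the same thread.
		 */
		return;
	}

	z_swap_to(next);   /* assumed swap entry point */
}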
Tomasz Bursztyka
98d9b01322 device: Apply driver_api/data attributes rename everywhere
Via coccinelle:

@r_device_driver_api_and_data_1@
struct device *D;
@@
(
D->
-	driver_api
+	api
|
D->
-	driver_data
+	data
)

@r_device_driver_api_and_data_2@
expression E;
@@
(
net_if_get_device(E)->
-	driver_api
+	api
|
net_if_get_device(E)->
-	driver_data
+	data
)

And grep/sed rules for macros:

git grep -rlz 'dev)->driver_data' |
	xargs -0 sed -i 's/dev)->driver_data/dev)->data/g'

git grep -rlz 'dev->driver_data' |
	xargs -0 sed -i 's/dev->driver_data/dev->data/g'

git grep -rlz 'device->driver_data' |
	xargs -0 sed -i 's/device->driver_data/device->data/g'

Fixes #27397

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-08-11 19:30:53 +02:00
Ioannis Glaropoulos
60bd51aa37 kernel: init: allow custom switch to main for no-multithreading case
Certain architectures, such as ARM Cortex-M, require a custom
switch to main() when we build Zephyr without support for
multithreading (CONFIG_MULTITHREADING=n). So as to keep the
change to kernel/init.c as unintrusive as possible, we install
an ARCH-specific hook in z_cstart(), which gets called for
architectures that require this customized switch to main().

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-08-07 13:06:04 +02:00
Torsten Rasmussen
bb3946666f cmake: fix include directories to work with out-of-tree arch
Include directories for ${ARCH} are not specified correctly.
In several places in Zephyr, the include directories are specified as:
${ZEPHYR_BASE}/arch/${ARCH}/include
The correct line is:
${ARCH_DIR}/${ARCH}/include
so as to correctly support out-of-tree archs.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2020-08-05 08:06:07 -04:00
Aastha Grover
2d9b459f15 syscalls: Add system call for cache flush & invalidate
include/cache.h: System call declarations and implementation
kernel/cache_handlers.c: Definition of verification functions

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2020-08-04 17:26:45 -04:00
Joakim Andersson
ea9590448d kernel: Add k_delayed_work_pending to check if work has been submitted
Add k_delayed_work_pending, similar to k_work_pending, to check if the
delayed work item has been submitted but not yet completed.
This complements the API, since using k_work_pending or
k_delayed_work_remaining_get is not enough to check this condition:
the timeout could have run out, but the timeout handler may not yet
have been processed and put the work into the workqueue.

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2020-08-04 17:32:56 +02:00
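Example of the intended use (the k_delayed_work API shown here is the
one from this era; the surrounding code is illustrative):

static struct k_delayed_work dwork;

void maybe_submit(void)
{
	/* k_work_pending() only reports items already on the workqueue;
	 * k_delayed_work_pending() also covers the window where the
	 * timeout is still counting down.
	 */
	if (!k_delayed_work_pending(&dwork)) {
		k_delayed_work_submit(&dwork, K_MSEC(100));
	}
}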
Carles Cufi
244f826e3c cmake: remove _if_kconfig() functions
This set of functions seems to exist just for historical
reasons, stemming from Kbuild. They are non-obvious and prone to errors,
so remove them in favor of the `_ifdef()` ones with an explicit
`CONFIG_` condition.

Script used:

git grep -l _if_kconfig | xargs sed -E -i
"s/_if_kconfig\(\s*(\w*)/_ifdef(CONFIG_\U\1\E \1/g"

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2020-08-01 12:35:20 +02:00
Andrew Boie
b4ce7a77c6 kernel: sys_workq thread stack is kernel-only
The system workqueue is a kernel thread that doesn't run
in user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
0c297461c5 kernel: idle thread stacks are kernel-only
The idle thread(s) always run in supervisor mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
8b4b0d6264 kernel: z_interrupt_stacks are now kernel stacks
This will save memory on many platforms that enable
user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
8ce260d8df kernel: introduce supervisor-only stacks
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.

Two new arch defines are introduced:

- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN

New public declaration macros:

- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF

If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.

Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
e4cc84a537 kernel: update arch_switch_to_main_thread()
This now takes a stack pointer as an argument with TLS
and random offsets accounted for properly.

Based on #24467 authored by Flavio Ceolin.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
b0c155f3ca kernel: overhaul stack specification
The core kernel computes the initial stack pointer
for a thread, properly aligning it and subtracting out
any random offsets or thread-local storage areas.
arch_new_thread() no longer needs to make any calculations;
an initial stack frame may be placed at the bounds of
the new 'stack_ptr' parameter passed in. This parameter
replaces 'stack_size'.

thread->stack_info is now set before arch_new_thread()
is invoked, and z_new_thread_init() has been removed.
The values populated may need to be adjusted on arches
which carve out MPU guard space from the actual stack
buffer.

thread->stack_info now has a new member 'delta' which
indicates any offset applied for TLS or random offset.
It's used so the calculations don't need to be repeated
if the thread later drops to user mode.

CONFIG_INIT_STACKS logic is now performed inside
z_setup_new_thread(), before arch_new_thread() is called.

thread->stack_info is now defined as the canonical
user-accessible area within the stack object, including
random offsets and TLS. It will never include any
carved-out memory for MPU guards and must be updated at
runtime if guards are removed.

Available stack space is now optimized. Some arches may
need to significantly round up the buffer size to account
for page-level granularity or MPU power-of-two requirements.
This space is now accounted for and used by virtue of
the Z_THREAD_STACK_SIZE_ADJUST() call in z_setup_new_thread.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
aa464052ff arch_interface: remove CamelCase
Naming cleanup of this interface declaration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
62eb7d99dc arch_interface: remove unnecessary params
arch_new_thread() passes along the thread priority and option
flags, but these are already initialized in thread->base and
can be accessed there if needed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Daniel Leung
ac53880cef kernel: smp: avoid identifier collisions
MISRA-C Rule 5.3 states that identifiers in inner scope should
not hide identifiers in outer scope.

In the function smp_init_top(), the variable "start_flags"
collide with global variable of the same name. So rename the one
inside the function.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-07-25 21:26:15 -04:00
Daniel Leung
b907cf7694 kernel: timeout: avoid identifier collisions
MISRA-C Rule 5.3 states that identifiers in inner scope should
not hide identifiers in outer scope.

In the function z_set_timeout_expiry(), the parameter "idle"
and an inner variable "next" collide with function named idle()
and next(). So rename those variables.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-07-25 21:26:15 -04:00
Daniel Leung
2acafafae1 kernel: pipes: rename inner spinlock keys
MISRA-C Rule 5.3 states that identifiers in inner scope should
not hide identifiers in outer scope. There are a few spinlock
keys simply named "key". So rename those in inner scope to avoid
shadowing the outer ones.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-07-25 21:26:15 -04:00
David Leach
db36b146e1 kernel: Remove supported size comments from HEAP_MEM_POOL_SIZE
The mempool implementation doesn't require specific sizes and can
support arbitrary sizes up to the limit of available memory. The
Kconfig documentation on this configuration was confusing users.

Fixes #20418

Signed-off-by: David Leach <david.leach@nxp.com>
2020-07-24 10:19:47 +02:00
Andrew Boie
476fc405e7 kernel: mem_domain: centralize assertions
Later this year I hope to overhaul the memory domain APIs,
but at least for now let's at least consolidate these checks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-20 15:32:16 +02:00
Andrew Boie
06cf6d27f7 kernel: add k_mem_map() and related defines
This will be the interface for mapping memory in the kernel's
part of the address space, which is guaranteed to be persistent
regardless of what thread is scheduled.

Further code for specifically managing virtual memory will end up in
kernel/mmu.c.

Further definitions for memory management in general will end up
in sys/mem_manage.h.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
Andrew Boie
9ff148ac83 kernel: define arch_mem_map()
This is the low-level arch function to map a region into page
tables.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
Anas Nashif
568211738d doc: replace lifo/fifo with LIFO/FIFO
Replace all occurrences of lifo/fifo with LIFO/FIFO to be consistent.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-07-15 14:01:33 -04:00
Flavio Ceolin
c4f7faea10 random: Include header where it is used
Unit tests were failing to build because the random header was included
by kernel_includes.h. The problem is that rand32.h includes a generated
file that is either not generated or not included when building unit
tests. Also, it is better to limit the scope of this header to where it
is used.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-07-08 21:05:36 -04:00
Enjia Mai
7ac40aabc0 tests: adding test cases for arch-dependent SMP function
Add another test case for testing both the arch_curr_cpu() and
arch_sched_ipi() architecture layer interfaces.

Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
2020-07-02 08:42:53 -04:00
Anas Nashif
8aa3dc340d kernel: cleanup header inclusion
Remove unused header inclusion in kernel code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-06-25 16:12:36 -05:00
Anas Nashif
2c5d40437b kernel: logging: convert K_DEBUG to LOG_DBG
Move K_DEBUG to use LOG_DBG instead of plain printk.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-06-25 16:12:36 -05:00
Peter Bigot
d8b86cba3c device: add API to check whether a device is ready to use
Currently this is useful only for some internal applications that
iterate over the device table, since applications can't get access to
a device that isn't ready, and devices can't be made unready.  So it's
introduced as internal API that may be exposed as device_ready() when
those conditions change.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-06-23 13:27:14 +02:00
Peter Bigot
219a3ca96d device: provide internal access to static device array
Device objects in Zephyr are currently placed into an array by linker
scripts, making it easy to iterate over all devices if the array
address and size can be obtained.  This has applications in device
power management, but the existing API for this was available only
when that feature was enabled.  It also uses a signed type to hold the
device count.

Provide a new API that is generally available, but marked as internal
since normally applications should not iterate over all devices.  Mark
the PM API approach deprecated.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-06-23 13:27:14 +02:00
Andrew Boie
4855eaa735 kernel: document arch_printk_char_out()
Used by very early console drivers.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-17 09:20:55 +02:00
Ioannis Glaropoulos
8ada29e06c kernel: userspace: fix variable initialization
Need to initialize tidx before using it, to
suppress a compiler warning.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-06-16 10:50:27 -05:00
Kumar Gala
a1b77fd589 zephyr: replace zephyr integer types with C99 types
git grep -l 'u\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/u\(8\|16\|32\|64\)_t/uint\1_t/g"
	git grep -l 's\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/s\(8\|16\|32\|64\)_t/int\1_t/g"

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-06-08 08:23:57 -05:00
Andrew Boie
be919d3bf7 userspace: improve dynamic object allocation
We now have a low-level function z_dynamic_object_create()
which is not a system call and is used for installing
kernel objects that are not supported by k_object_alloc().

Checking for valid object type enumeration values moved
completely to the implementation function.

A few debug messages and comments were improved.

Futexes and sys_mutexes are now properly excluded from
dynamic generation.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-03 22:33:32 +02:00
Andrew Boie
378024c510 userspace: add z_is_in_user_syscall()
Certain types of system call validation may need to be pushed
deeper in the implementation and not performed in the verification
function. If such checks are only pertinent when the caller was
from user mode, we need an API to detect this situation.

This is implemented by having thread->syscall_frame be non-NULL
only while a user system call is in progress. The template for the
system call marshalling functions is changed to clear this value
on exit.

A test is added to prove that this works.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-03 22:33:32 +02:00
Andrew Boie
64c8189ab0 userspace: fix bad ssf pointer on bad/no syscall
This was passing along _current->ssf, but these types of bad
syscalls do not go through the z_mrsh mechanism and so it was
passing stale data.

We already have the syscall stack frame as an argument;
propagate that so it works properly.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-03 22:33:32 +02:00
Andy Ross
99c2d2d047 kernel/queue: Remove interior use of k_poll()
The k_queue data structure, when CONFIG_POLL was enabled, would
inexplicably use k_poll() as its blocking mechanism instead of the
original wait_q/pend() code.  This was actually racy, see commit
b173e4353f.  The code was structured as a condition variable: using
a spinlock around the queue data before deciding to block.  But unlike
pend_current_thread(), k_poll() cannot atomically release a lock.

A workaround had been in place for this, and was then accidentally
reverted (both by me!) because the code looked "wrong".

This is just fragile; there's no reason to have two implementations of
k_queue_get().  Remove.

Note that this also removes a test case in the work_queue test where
(when CONFIG_POLL was enabled, but not otherwise) it was checking for
the ability to immediately cancel a delayed work item that was
submitted with a timeout of K_NO_WAIT (i.e. "queue it immediately").
This DOES NOT work with the original/non-poll queue backend, and has
never been a documented behavior of k_delayed_work_submit_to_queue()
under any circumstances.  I don't know why we were testing this.

Fixes #25904

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-06-03 01:47:41 +02:00
Andy Ross
a343cf9480 kernel/timer: Handle K_FOREVER in k_timer_start()
The possibility of passing K_FOREVER as the initial duration argument
to k_timer_start() wasn't being handled, with the result that the
computed value became a zero timeout (effectively treating it as
K_NO_WAIT and firing at the next tick).

This is obviously pathological, but it should still do what the code
says it should and wait forever.

Make k_timer_start(..., K_FOREVER, ...) a noop.

Fixes #25820

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-05-29 19:59:14 +02:00
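The resulting behavior from the caller's point of view (illustrative;
expiry_fn is a hypothetical expiry handler):

static struct k_timer my_timer;

void setup(void)
{
	k_timer_init(&my_timer, expiry_fn, NULL);

	/* Before the fix this fired on the next tick; with the fix the
	 * call is a no-op and the timer never expires.
	 */
	k_timer_start(&my_timer, K_FOREVER, K_NO_WAIT);
}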
Andrew Boie
6af9793f0c kernel: don't allow mutex ops in ISRs
Mutex operations check ownership against _current. But in an
ISR, _current is just whatever thread was interrupted when the
ISR fired. Explicitly do not allow this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-28 01:38:05 +02:00
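A minimal sketch of the guard described above (not the verbatim kernel
code):

int mutex_lock(struct k_mutex *mutex, k_timeout_t timeout)
{
	/* In an ISR, _current is merely whichever thread was
	 * interrupted, so an ownership check against it is meaningless.
	 */
	__ASSERT(!k_is_in_isr(), "mutexes cannot be used inside ISRs");

	/* ... normal lock path ... */
	return 0;
}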
Peter Bigot
ae935dccdf device_pm: correct nop documented behavior
The device_pm_control_nop function is documented to always return zero
regardless of operation.  However, when device_get_power_state() is
invoked with this control function, it returns success while leaving
the output state parameter unmodified, which may not be a valid device
state.

Document and implement that the nop control function returns -ENOTSUP
always.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-05-21 20:32:12 +02:00
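The gist of the fix, with the argument list abbreviated (the real
function takes the full device PM control parameter set):

int device_pm_control_nop(struct device *dev, uint32_t command, ...)
{
	ARG_UNUSED(dev);
	ARG_UNUSED(command);

	/* No PM support here: report that, instead of returning success
	 * while leaving the caller's output state untouched.
	 */
	return -ENOTSUP;
}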