Commit Graph

2232 Commits

Author SHA1 Message Date
Daniel Leung
49206a86ff debug/coredump: add a primitive coredump mechanism
This adds a very primitive coredump mechanism under subsys/debug
where, during a fatal error, register and memory content can be
dumped to a coredump backend. One such backend utilizing the log
module for output is included. Once the coredump log is converted
to a binary file, it can be used with the ELF output file as
inputs to an overly simplified implementation of a GDB server.
This GDB server can be attached via the target remote command of
GDB and will be serving register and memory content. This allows
using GDB to examine stack and memory where the fatal error
occurred.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-08-24 20:28:24 -04:00
Anas Nashif
390537bf68 tracing: trace mutex/semaphore using dedicated calls
Instead of using generic trace calls, use dedicated functions for
tracing semaphores and mutexes.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Anas Nashif
d1049dc258 tracing: swap: cleanup trace points and their location
Move tracing switched_in and switched_out to the architecture code and
remove duplications. This changes swap tracing for x86, xtensa.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Anas Nashif
5c31d00a6a tracing: trace k_sleep
Trace when k_sleep is called.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Anas Nashif
3b75a08e3f Kconfig: move power management Kconfig into subsys/power
Consolidate all PM related Kconfigs in one place.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 10:24:30 +02:00
Andrew Boie
72792a5e63 userspace: cleanup mem domain assertions
The memory domain APIs document that overlapping regions are
not allowed; check for this unconditionally.

Cleanup assertion error messages. Use __ASSERT_NO_MSG for
blindingly obvious NULL checks.

We now have a `check_add_partition()` function to perform
all the necessary sanity checks on adding a partition to a
domain. This returns true or false, which will help later
when we implement #24609.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-20 13:58:54 -04:00
Andrew Boie
bc35b0b780 userspace: add optional member to memory domains
Some systems may need to associate arch-specific data to
a memory domain. Add a Kconfig and `arch` field for this,
and a new arch API to initialize it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-20 13:58:54 -04:00
Andrew Boie
282cd932c8 kernel: don't reschedule thread abort in ISR
Zephyr already has a policy of invoking the scheduler on the way
out of IRQs for all arches, so this is unnecessary.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-18 08:36:35 +02:00
David Leach
1ab90044f2 kernel/timeout: Fix coverity warning CID 211045
The dticks field was changed from a signed to an unsigned
value so now the assert test to ensure it isn't negative
is no longer needed.

fixes #26355

Signed-off-by: David Leach <david.leach@nxp.com>
2020-08-16 09:29:41 -04:00
Anas Nashif
379b93f0d3 kernel: do not call swap if next ready thread is current thread
Check if next ready thread is same as current thread before calling
z_swap.
This avoids calling swap to just go back to the original thread.

Original code: thread 0x20000118 switches out and then in again...

>> 0x20000118 gives semaphore(signal): 0x20000104 (count: 0)
>> thread ready: 0x20000118
>> 0x20000118 switched out
>> 0x20000118 switched in
>> end call to k_sem_give
>> 0x20000118 takes semaphore(wait): 0x200000f4 (count: 0)
>> thread pend: 0x20000118
>> 0x20000118 switched out
>> 0x200001d0 switched in

with this patch:

>> 0x200001d0 gives semaphore(signal): 0x200000f4 (count: 0)
>> thread ready: 0x200001d0
>> end call to k_sem_give
>> 0x200001d0 takes semaphore(wait): 0x20000104 (count: 0)
>> thread pend: 0x200001d0
>> 0x200001d0 switched out
>> 0x20000118 switched in
>> end call to k_sem_take

The above is output from tracing with a custom format used for
debugging.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-12 15:32:29 -04:00
Tomasz Bursztyka
98d9b01322 device: Apply driver_api/data attributes rename everywhere
Via coccinelle:

@r_device_driver_api_and_data_1@
struct device *D;
@@
(
D->
-	driver_api
+	api
|
D->
-	driver_data
+	data
)

@r_device_driver_api_and_data_2@
expression E;
@@
(
net_if_get_device(E)->
-	driver_api
+	api
|
net_if_get_device(E)->
-	driver_data
+	data
)

And grep/sed rules for macros:

git grep -rlz 'dev)->driver_data' |
	xargs -0 sed -i 's/dev)->driver_data/dev)->data/g'

git grep -rlz 'dev->driver_data' |
	xargs -0 sed -i 's/dev->driver_data/dev->data/g'

git grep -rlz 'device->driver_data' |
	xargs -0 sed -i 's/device->driver_data/device->data/g'

Fixes #27397

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-08-11 19:30:53 +02:00
Ioannis Glaropoulos
60bd51aa37 kernel: init: allow custom switch to main for no-multithreading case
Certain architectures, such as ARM Cortex-M, require a custom
switch to main() in case we build Zephyr without
support for multithreading (CONFIG_MULTITHREADING=n). So as
to keep the change to kernel/init.c as unintrusive as
possible, we install an ARCH-specific hook in z_cstart(),
which gets called for architectures that require this
customized switch to main() function.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-08-07 13:06:04 +02:00
Torsten Rasmussen
bb3946666f cmake: fix include directories to work with out-of-tree arch
Include directories for ${ARCH} are not specified correctly.
In several places in Zephyr, the include directories are specified as:
${ZEPHYR_BASE}/arch/${ARCH}/include
while the correct line is:
${ARCH_DIR}/${ARCH}/include
to correctly support out-of-tree archs.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2020-08-05 08:06:07 -04:00
Aastha Grover
2d9b459f15 syscalls: Add system call for cache flush & invalidate
include/cache.h: System call declarations and implementation
kernel/cache_handlers.c: Definition of verification functions

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2020-08-04 17:26:45 -04:00
Joakim Andersson
ea9590448d kernel: Add k_delayed_work_pending to check if work has been submitted
Add k_delayed_work_pending similar to k_work_pending to check if the
delayed work item has been submitted but not yet completed.
This would complement the API, since using k_work_pending or
k_delayed_work_remaining_get is not enough to check this condition.
This is because the timeout could have run out, but the timeout handler
may not yet have been processed and put the work into the workqueue.

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2020-08-04 17:32:56 +02:00
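
A minimal usage sketch of the new call (editor's illustration, not part of the commit; the work item and handler names are hypothetical):

    #include <kernel.h>

    static struct k_delayed_work poll_work;  /* hypothetical work item */

    static void poll_handler(struct k_work *work)
    {
        /* periodic work goes here */
    }

    void poll_submit(void)
    {
        k_delayed_work_init(&poll_work, poll_handler);

        /* Submit only if the item is not already queued or still
         * counting down its timeout.
         */
        if (!k_delayed_work_pending(&poll_work)) {
            k_delayed_work_submit(&poll_work, K_MSEC(100));
        }
    }
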
Carles Cufi
244f826e3c cmake: remove _if_kconfig() functions
This set of functions seems to be there just for historical
reasons, stemming from Kbuild. They are non-obvious and prone to errors,
so remove them in favor of the `_ifdef()` ones with an explicit
`CONFIG_` condition.

Script used:

git grep -l _if_kconfig | xargs sed -E -i
"s/_if_kconfig\(\s*(\w*)/_ifdef(CONFIG_\U\1\E \1/g"

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2020-08-01 12:35:20 +02:00
Andrew Boie
b4ce7a77c6 kernel: sys_workq thread stack is kernel-only
The system workqueue is a kernel thread that doesn't run
in user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
0c297461c5 kernel: idle thread stacks are kernel-only
The idle thread(s) always run in supervisor mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
8b4b0d6264 kernel: z_interrupt_stacks are now kernel stacks
This will save memory on many platforms that enable
user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
8ce260d8df kernel: introduce supervisor-only stacks
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.

Two new arch defines are introduced:

- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN

New public declaration macros:

- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF

If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.

Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
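
A short sketch of the new declarations in use (editor's illustration; thread names, priority, and stack size are arbitrary):

    #include <kernel.h>

    /* Supervisor-only stack: no userspace-related reservations */
    K_KERNEL_STACK_DEFINE(worker_stack, 1024);
    static struct k_thread worker_thread;

    static void worker(void *p1, void *p2, void *p3)
    {
        /* thread body */
    }

    void start_worker(void)
    {
        k_thread_create(&worker_thread, worker_stack,
                        K_KERNEL_STACK_SIZEOF(worker_stack),
                        worker, NULL, NULL, NULL,
                        K_PRIO_PREEMPT(5), 0, K_NO_WAIT);
    }
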
Andrew Boie
e4cc84a537 kernel: update arch_switch_to_main_thread()
This now takes a stack pointer as an argument with TLS
and random offsets accounted for properly.

Based on #24467 authored by Flavio Ceolin.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
b0c155f3ca kernel: overhaul stack specification
The core kernel computes the initial stack pointer
for a thread, properly aligning it and subtracting out
any random offsets or thread-local storage areas.
arch_new_thread() no longer needs to make any calculations,
an initial stack frame may be placed at the bounds of
the new 'stack_ptr' parameter passed in. This parameter
replaces 'stack_size'.

thread->stack_info is now set before arch_new_thread()
is invoked, z_new_thread_init() has been removed.
The values populated may need to be adjusted on arches
which carve-out MPU guard space from the actual stack
buffer.

thread->stack_info now has a new member 'delta' which
indicates any offset applied for TLS or random offset.
It's used so the calculations don't need to be repeated
if the thread later drops to user mode.

CONFIG_INIT_STACKS logic is now performed inside
z_setup_new_thread(), before arch_new_thread() is called.

thread->stack_info is now defined as the canonical
user-accessible area within the stack object, including
random offsets and TLS. It will never include any
carved-out memory for MPU guards and must be updated at
runtime if guards are removed.

Available stack space is now optimized. Some arches may
need to significantly round up the buffer size to account
for page-level granularity or MPU power-of-two requirements.
This space is now accounted for and used by virtue of
the Z_THREAD_STACK_SIZE_ADJUST() call in z_setup_new_thread.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
aa464052ff arch_interface: remove CamelCase
Naming cleanup of this interface declaration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
62eb7d99dc arch_interface: remove unnecessary params
arch_new_thread() passes along the thread priority and option
flags, but these are already initialized in thread->base and
can be accessed there if needed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Daniel Leung
ac53880cef kernel: smp: avoid identifier collisions
MISRA-C Rule 5.3 states that identifiers in inner scope should
not hide identifiers in outer scope.

In the function smp_init_top(), the variable "start_flags"
collide with global variable of the same name. So rename the one
inside the function.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-07-25 21:26:15 -04:00
Daniel Leung
b907cf7694 kernel: timeout: avoid identifier collisions
MISRA-C Rule 5.3 states that identifiers in inner scope should
not hide identifiers in outer scope.

In the function z_set_timeout_expiry(), the parameter "idle"
and an inner variable "next" collide with function named idle()
and next(). So rename those variables.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-07-25 21:26:15 -04:00
Daniel Leung
2acafafae1 kernel: pipes: rename inner spinlock keys
MISRA-C Rule 5.3 states that identifiers in inner scope should
not hide identifiers in outer scope. There are a few spinlock
keys simply named "key". So rename those in inner scope to avoid
shadowing the outer ones.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-07-25 21:26:15 -04:00
David Leach
db36b146e1 kernel: Remove supported size comments from HEAP_MEM_POOL_SIZE
The mempool implementation doesn't require specific sizes and can
support arbitrary sizes up to the limit of available memory. The
Kconfig documentation on this configuration was confusing users.

Fixes #20418

Signed-off-by: David Leach <david.leach@nxp.com>
2020-07-24 10:19:47 +02:00
Andrew Boie
476fc405e7 kernel: mem_domain: centralize assertions
Later this year I hope to overhaul the memory domain APIs,
but at least for now let's at least consolidate these checks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-20 15:32:16 +02:00
Andrew Boie
06cf6d27f7 kernel: add k_mem_map() and related defines
This will be the interface for mapping memory in the kernel's
part of the address space, which is guaranteed to be persistent
regardless of what thread is scheduled.

Further code for specifically managing virtual memory will end up in
kernel/mmu.c.

Further defintions for memory management in general will end up
in sys/mem_manage.h.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
Andrew Boie
9ff148ac83 kernel: define arch_mem_map()
This is the low-level arch function to map a region into page
tables.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
Anas Nashif
568211738d doc: replace lifo/fifo with LIFO/FIFO
Replace all occurrences of lifo/fifo with LIFO/FIFO to be consistent.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-07-15 14:01:33 -04:00
Flavio Ceolin
c4f7faea10 random: Include header where it is used
Unit tests were failing to build because the random header was included
by kernel_includes.h. The problem is that rand32.h includes a generated
file that is either not generated or not included when building unit
tests. Also, it is better to limit the scope of this file to where it is
used.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-07-08 21:05:36 -04:00
Enjia Mai
7ac40aabc0 tests: adding test cases for arch-dependent SMP function
Add another test case for testing both the arch_curr_cpu() and
arch_sched_ipi() architecture layer interfaces.

Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
2020-07-02 08:42:53 -04:00
Anas Nashif
8aa3dc340d kernel: cleanup header inclusion
Remove unused header inclusion in kernel code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-06-25 16:12:36 -05:00
Anas Nashif
2c5d40437b kernel: logging: convert K_DEBUG to LOG_DBG
Move K_DEBUG to use LOG_DBG instead of plain printk.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-06-25 16:12:36 -05:00
Peter Bigot
d8b86cba3c device: add API to check whether a device is ready to use
Currently this is useful only for some internal applications that
iterate over the device table, since applications can't get access to
a device that isn't ready, and devices can't be made unready.  So it's
introduced as an internal API that may be exposed as device_ready() when
those conditions change.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-06-23 13:27:14 +02:00
Peter Bigot
219a3ca96d device: provide internal access to static device array
Device objects in Zephyr are currently placed into an array by linker
scripts, making it easy to iterate over all devices if the array
address and size can be obtained.  This has applications in device
power management, but the existing API for this was available only
when that feature was enabled.  It also uses a signed type to hold the
device count.

Provide a new API that is generally available, but marked as internal
since normally applications should not iterate over all devices.  Mark
the PM API approach deprecated.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-06-23 13:27:14 +02:00
Andrew Boie
4855eaa735 kernel: document arch_printk_char_out()
Used by very early console drivers.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-17 09:20:55 +02:00
Ioannis Glaropoulos
8ada29e06c kernel: userspace: fix variable initialization
We need to initialize tidx before using it, to
suppress a compiler warning.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-06-16 10:50:27 -05:00
Kumar Gala
a1b77fd589 zephyr: replace zephyr integer types with C99 types
git grep -l 'u\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/u\(8\|16\|32\|64\)_t/uint\1_t/g"
	git grep -l 's\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/s\(8\|16\|32\|64\)_t/int\1_t/g"

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-06-08 08:23:57 -05:00
Andrew Boie
be919d3bf7 userspace: improve dynamic object allocation
We now have a low-level function z_dynamic_object_create()
which is not a system call and is used for installing
kernel objects that are not supported by k_object_alloc().

Checking for valid object type enumeration values moved
completely to the implementation function.

A few debug messages and comments were improved.

Futexes and sys_mutexes are now properly excluded from
dynamic generation.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-03 22:33:32 +02:00
Andrew Boie
378024c510 userspace: add z_is_in_user_syscall()
Certain types of system call validation may need to be pushed
deeper in the implementation and not performed in the verification
function. If such checks are only pertinent when the caller was
from user mode, we need an API to detect this situation.

This is implemented by having thread->syscall_frame be non-NULL
only while a user system call is in progress. The template for the
system call marshalling functions is changed to clear this value
on exit.

A test is added to prove that this works.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-03 22:33:32 +02:00
Andrew Boie
64c8189ab0 userspace: fix bad ssf pointer on bad/no syscall
This was passing along _current->ssf, but these types of bad
syscalls do not go through the z_mrsh mechanism and so it was
passing stale data.

We have the syscall stack frame already as an argument,
propagate that so it works properly.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-03 22:33:32 +02:00
Andy Ross
99c2d2d047 kernel/queue: Remove interior use of k_poll()
The k_queue data structure, when CONFIG_POLL was enabled, would
inexplicably use k_poll() as its blocking mechanism instead of the
original wait_q/pend() code.  This was actually racy, see commit
b173e4353f.  The code was structured as a condition variable: using
a spinlock around the queue data before deciding to block.  But unlike
pend_current_thread(), k_poll() cannot atomically release a lock.

A workaround had been in place for this, and then accidentally
reverted (both by me!) because the code looked "wrong".

This is just fragile, there's no reason to have two implementations of
k_queue_get().  Remove.

Note that this also removes a test case in the work_queue test where
(when CONFIG_POLL was enabled, but not otherwise) it was checking for
the ability to immediately cancel a delayed work item that was
submitted with a timeout of K_NO_WAIT (i.e. "queue it immediately").
This DOES NOT work with the original/non-poll queue backend, and has
never been a documented behavior of k_delayed_work_submit_to_queue()
under any circumstances.  I don't know why we were testing this.

Fixes #25904

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-06-03 01:47:41 +02:00
Andy Ross
a343cf9480 kernel/timer: Handle K_FOREVER in k_timer_start()
The possibility of passing K_FOREVER as the initial duration argument
to k_timer_start() wasn't being handled, with the result that the
computed value became a zero timeout (effectively treating it as
K_NO_WAIT and firing at the next tick).

This is obviously pathological, but it should still do what the code
says it should and wait forever.

Make k_timer_start(..., K_FOREVER, ...) a noop.

Fixes #25820

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-05-29 19:59:14 +02:00
Andrew Boie
6af9793f0c kernel: don't allow mutex ops in ISRs
Mutex operations check ownership against _current. But in an
ISR, _current is just whatever thread was interrupted when the
ISR fired. Explicitly do not allow this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-28 01:38:05 +02:00
Peter Bigot
ae935dccdf device_pm: correct nop documented behavior
The device_pm_control_nop function is documented to always return zero
regardless of operation.  However, when device_get_power_state() is
invoked with this control function it returns success leaving the
output parameter state unmodified, which may not be a valid device
state.

Document and implement that the nop control function returns -ENOTSUP
always.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-05-21 20:32:12 +02:00
Andrew Boie
743ff98f51 kernel: wipe TLS before dropping to user mode
Ensures that TLS from when the thread was in supervisor mode
is erased, rather than rely on the arch code to do it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-13 22:02:48 +02:00
Andrew Boie
468efadd47 kernel: simplify dummy thread implementation
- simplify dummy thread initialization to a kswap.h
  inline function

- use the same inline function for both early boot and
  SMP setup

- add a note on necessity of the dummy thread even if
  a custom swap to main is implemented

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-13 21:23:52 +02:00
Martí Bolívar
6e8775ff84 devicetree: remove DT_HAS_NODE_STATUS_OKAY
Several reviewers agreed that DT_HAS_NODE_STATUS_OKAY(...) was an
undesirable API for the following reasons:

- it's inconsistent with the rest of the DT_NODE_HAS_FOO names
- DT_NODE_HAS_FOO_BAR_BAZ(node) was agreed upon as a shorthand
  for macros which are equivalent to
  DT_NODE_HAS_FOO(node) && DT_NODE_HAS_BAR(node) &&
  DT_NODE_HAS_BAZ(node), and DT_HAS_NODE_STATUS_OKAY is an odd duck
- DT_NODE_HAS_STATUS(..., okay) was viewed as more readable anyway
- it is seen as a somewhat aesthetically challenged name

Replace all users with DT_NODE_HAS_STATUS(..., okay), which is
semantically equivalent.

This is mostly done with sed, but a few remaining cases were done by
hand, along with whitespace, docs, and comment changes. These special
cases include the Nordic SOC static assert files.

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2020-05-13 18:24:42 +02:00
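
For illustration, the replacement pattern (editor's sketch; uart0 is a hypothetical node label):

    /* Before: DT_HAS_NODE_STATUS_OKAY(DT_NODELABEL(uart0)) */
    #if DT_NODE_HAS_STATUS(DT_NODELABEL(uart0), okay)
    /* node is enabled; configure it */
    #endif
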
Peter Bigot
c308f8069f kernel: init: move C++ initialization before application init loop
C++ is documented to be supported in applications, so it should be
supported in SYS_INIT() functions run at the application init level.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-05-09 11:42:20 -04:00
Tomasz Bursztyka
8d7bb8ffd8 device: Refactor device structures
When the device driver model was introduced, there was no concept of
SYS_INIT(), which can be seen as a software service. It was introduced
afterwards, reusing the device infrastructure for simplicity.
However, that meant allocating a bit too much for something that only
requires an initialization function to be called at the right time.

Thus, refactor the device structures accordingly:
- introducing struct init_entry which is a generic init end-point
- struct deviceconfig is removed and struct device owns everything now.
- SYS_INIT() generates only a struct init_entry via calling
  INIT_ENTRY_DEFINE()
- DEVICE_AND_API_INIT() generates a struct device and calls
  INIT_ENTRY_DEFINE()
- init objects sections are in ROM
- device objects sections are in RAM (but will end up in ROM once they
  will be 'constified')

It also generates a tiny memory gain on both ROM and RAM, which is nice.

Perhaps kernel/device.c could be renamed to something more relevant.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-05-08 23:07:44 +02:00
Andrew Boie
c24673eefc kernel: properly name idle threads
These are now indexed by CPU.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-08 17:44:28 +02:00
Andrew Boie
a203d21962 kernel: remove legacy fields in _kernel
UP should just use _kernel.cpus[0].

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-08 17:42:49 +02:00
Stephanos Ioannidis
aaf93205bb kconfig: Rename CONFIG_FP_SHARING to CONFIG_FPU_SHARING
This commit renames the Kconfig `FP_SHARING` symbol to `FPU_SHARING`,
since this symbol specifically refers to the hardware FPU sharing
support by means of FPU context preservation, and the "FP" prefix is
not fully descriptive of that, leaving room for ambiguity.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-05-08 10:58:33 +02:00
Christopher Friedt
3315f8fecf kernel: pipe: read_avail / write_avail syscalls
These helper functions may clarify how much data is available to read
from or write to the circular buffer of a k_pipe.

Fixes #25036

Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
2020-05-07 19:39:53 +02:00
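
A sketch of how the new helpers might be used (editor's illustration; the pipe geometry is arbitrary):

    #include <kernel.h>

    K_PIPE_DEFINE(my_pipe, 64, 4);

    void drain_if_ready(void)
    {
        char buf[16];
        size_t bytes_read;

        /* Read only when the circular buffer holds a full chunk */
        if (k_pipe_read_avail(&my_pipe) >= sizeof(buf)) {
            k_pipe_get(&my_pipe, buf, sizeof(buf), &bytes_read,
                       sizeof(buf), K_NO_WAIT);
        }
    }
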
Kumar Gala
fdd85d5ad7 dts: Rename DT_HAS_NODE macro to DT_HAS_NODE_STATUS_OKAY
Rename DT_HAS_NODE to DT_HAS_NODE_STATUS_OKAY so the semantics are
clear.  Going forward, DT_HAS_NODE will report whether a node exists
regardless of its status.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-05-06 05:25:41 -05:00
Andy Ross
4d417034e9 kernel/Kconfig: Add prompt for CONFIG_TIMEOUT_64BIT
This was missing a prompt string, causing recent kconfig logic to
throw an error if an app tried to set it directly.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-05-06 06:05:03 -04:00
Andrew Boie
f1b5d9db8e kernel: fix issue with k_thread_join() timeouts
If k_thread_join() was passed with an actual timeout value,
and not K_FOREVER, the blocking thread was not being properly
woken up when the target thread exits. The timeout itself
was never aborted, causing the joining thread to remain
un-scheduled until the timeout expires.

Amend the k_thread_join() test cases to check that the join
completed before the provided timeout period expired.

Fixes: #24744

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-05 11:43:08 -07:00
Christopher Friedt
d650f4b494 kernel: pipe: fix !K_NO_WAIT and >= min_xfer bytes transferred
If timeout != K_NO_WAIT, then return immediately when not all
bytes_to_read or bytes_to_write have been transfered, but >=
min_xfer have been transferred.

Fixes #24485

Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
2020-04-28 16:14:55 +02:00
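
A sketch of the fixed semantics (editor's illustration; assumes a pipe defined elsewhere):

    #include <kernel.h>

    int read_chunk(struct k_pipe *pipe)
    {
        char buf[32];
        size_t bytes_read;

        /* With a finite timeout, this now returns as soon as at least
         * min_xfer (8) bytes have been transferred, instead of waiting
         * for all 32 bytes or the full timeout.
         */
        return k_pipe_get(pipe, buf, sizeof(buf), &bytes_read,
                          8, K_MSEC(100));
    }
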
Joakim Andersson
aac2220d29 kernel: Revert: "Add static threads to k_thread_foreach functions"
Revert the commit mistakenly iterating over static threads in the
k_thread_foreach functions. The static threads were already included
in the for-loop, and are now duplicated.

This reverts commit bd3b4b0caf.

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2020-04-27 19:04:02 +02:00
Stephanos Ioannidis
0e6ede8929 kconfig: Rename CONFIG_FLOAT to CONFIG_FPU
This commit renames the Kconfig `FLOAT` symbol to `FPU`, since this
symbol only indicates that the hardware Floating Point Unit (FPU) is
used and does not imply and/or indicate the general availability of
toolchain-level floating point support (i.e. this symbol is not
selected when building for an FPU-less platform that supports floating
point operations through the toolchain-provided software floating point
library).

Moreover, given that the symbol that indicates the availability of FPU
is named `CPU_HAS_FPU`, it only makes sense to use "FPU" in the name of
the symbol that enables the FPU.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-27 19:03:44 +02:00
Kumar Gala
9b4298c20d arm: Convert DT_CCM_* to new devicetree.h macros
Convert various DT_CCM_* macros to use DT_CHOSEN(zephyr_ccm) and
associated macros from devicetree.h.

We remove CCM references from cortex_a and cortex_r linker scripts as
it's only a feature on Cortex-M STM32 SoCs.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-04-26 06:04:46 -05:00
Kumar Gala
a49817d17e arm: Convert DT_DTCM_* to new devicetree.h macros
Convert various DT_DTCM_* macros to use DT_CHOSEN(zephyr_dtcm) and
associated macros from devicetree.h.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-04-26 06:04:46 -05:00
Andy Ross
3e729b2b1c kernel/timer: Correctly clamp period argument
The period argument of a k_timer needs an offset of one tick from the
value computed in user code (because periods get reset from within the
ISR, see the comment above this code for an explanation).  When the
computed tick value was 1, it would become 0.  This is actually
perfectly correct as a k_timeout_t to be passed to z_add_timeout().

BUT: to k_timer's API, K_NO_WAIT means "never" (i.e. the same as
K_FOREVER) and not "as soon as possible", so the period timer would
not be reset.  This is sort of a wart, but it's the way the API has
been specified forever.

The upshot is that for the case of calling k_timer_start() with a
minimal period argument (i.e. one that produces "one tick"), the
period would be ignored and the timer would act like a one shot.  Fix
the clamp so it can't produce K_NO_WAIT.

This also adds a filter for absolute timeouts, which (while that's
sort of a pathological usage) were getting that one tick offset when
it wasn't appropriate.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-04-24 16:38:33 +02:00
Andy Ross
12bd187003 kernel/timeout: Check for K_FOREVER in z_add_timeout()
The "forever" token has always been interpreted above z_add_timeout()
(because it's always taken ticks, but K_FOREVER used to be in ms).
But it was discovered that k_delayed_work_submit_to_queue() was never
testing for this and passing a raw K_FOREVER down, where it got
interpreted as a negative timeout and caused it to fire at the next
tick.

Now that we actually see the original k_timeout_t here, we might as
well check it locally and do the correct thing (that is, nothing) if
asked to schedule a timeout that will never fire.

Fixes #24409

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-04-22 11:10:17 -07:00
Andrew Boie
618426d6e7 kernel: add Z_STACK_PTR_ALIGN ARCH_STACK_PTR_ALIGN
This operation is formally defined as rounding down a potential
stack pointer value to meet CPU and ABI requirements.

This was previously defined ad-hoc as STACK_ROUND_DOWN().

A new architecture constant ARCH_STACK_PTR_ALIGN is added.
Z_STACK_PTR_ALIGN() is defined in terms of it. This used to
be inconsistently specified as STACK_ALIGN or STACK_PTR_ALIGN;
in the latter case, STACK_ALIGN meant something else, typically
a required alignment for the base of a stack buffer.

STACK_ROUND_UP() is only used in practice by RISC-V; delete it
elsewhere.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
Andrew Boie
1f6f977f05 kernel: centralize new thread priority check
This was being done inconsistently in arch_new_thread(), just
move to the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
Andrew Boie
c0df99cc77 kernel: reduce scope of z_new_thread_init()
The core kernel z_setup_new_thread() calls into arch_new_thread(),
which calls back into the core kernel via z_new_thread_init().

Move everything that doesn't have to be in z_new_thread_init() to
z_setup_new_thread() and convert to an inline function.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
Peter Bigot
aea9d35c4e kernel: fix runtime initialization of k_pipe object
Runtime initialization failed to reset the lock field, causing
problems when the pipe object is located on a stack and passed by
reference to other code.  Lacking an API for initializing a spinlock
by itself, use the idiom from _K_PIPE_INITIALIZER().

To simplify maintainability the initialization order is changed
slightly to match the structure field declaration order.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-04-21 11:19:29 +02:00
Andy Ross
38031dd599 kernel: Make the k_heap backend default for k_mem_pool
Legacy code can switch back to the original implementation where it
needs it, but we don't want new code to be unintentionally dependent
on the behavior of the older allocator.  The new one is a better
general purpose choice.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-04-14 10:05:55 -07:00
Andy Ross
8f0959c7b1 kernel: Add k_mem_pool compatibility layer on top of k_heap
Add a shim layer implementing the legacy k_mem_pool APIs backed by a
k_heap instead of the original implementation.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-04-14 10:05:55 -07:00
Andy Ross
0dd83b8c2e kernel: Add k_heap synchronized memory allocator
This adds a k_heap data structure, a synchronized wrapper around a
sys_heap memory allocator.  As of this patch, it is an alternative
implementation to k_mem_pool() with somewhat better efficiency and
performance and more conventional (and convenient) behavior.

Note that this commit involves some header motion to break dependencies.
The declaration for struct k_spinlock moves to kernel_structs.h, and a
bunch of includes were trimmed.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-04-14 10:05:55 -07:00
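
A minimal sketch of the new allocator in use (editor's illustration; the heap name and sizes are arbitrary):

    #include <kernel.h>

    K_HEAP_DEFINE(my_heap, 2048);

    void heap_demo(void)
    {
        /* Unlike a bare sys_heap, k_heap_alloc() can block: wait up
         * to 10 ms for memory to become available.
         */
        void *mem = k_heap_alloc(&my_heap, 128, K_MSEC(10));

        if (mem != NULL) {
            /* use the buffer */
            k_heap_free(&my_heap, mem);
        }
    }
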
Andy Ross
e96ac9061f kernel: Refactor k_mem_pool APIs into a base and derived level
Almost all of the k_mem_pool API is implemented in terms of three
lower level primitives: K_MEM_POOL_DEFINE(), k_mem_pool_alloc() and
k_mem_pool_free_id().  These are themselves implemented on top of the
lower level sys_mem_pool abstraction.

Make this layering explicit by splitting the low level out into its
own files: mempool_sys.c/h.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-04-14 10:05:55 -07:00
Kumar Gala
43a7d26603 drivers: entropy: replace CONFIG_ENTROPY_NAME with DT macro
Replace CONFIG_ENTROPY_NAME with DT_CHOSEN_ZEPHYR_ENTROPY_LABEL.  We now
set zephyr,entropy in the chosen node of the device tree to the entropy
device.

This allows us to remove CONFIG_ENTROPY_NAME from dts_fixup.h.  Also
remove any other stale ENTROPY related defines in dts_fixup.h files.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-04-13 09:14:21 -05:00
Anas Nashif
b90fafd6a0 kernel: remove unused offload workqueue option
Those are used only in tests, so remove them from kernel Kconfig and set
them in the tests that use them directly.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-04-12 18:42:27 -04:00
Andy Ross
914205ca85 kernel/timeout: Add k_uptime_ticks() API
Add a call to get the system tick count as an official API (and
redefine the existing millisecond API in terms of it).  Sophisticated
applications need to be able to count ticks directly, and the newer
timeout API supports that.  Uptime should too, for symmetry.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
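
Roughly the relationship described above (editor's sketch; k_ticks_to_ms_floor64() is assumed from the time-unit conversion helpers):

    #include <kernel.h>

    int64_t uptime_ms(void)
    {
        /* The millisecond API is now a conversion of the tick count */
        return k_ticks_to_ms_floor64(k_uptime_ticks());
    }
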
Andy Ross
5a5d3daf6f kernel/timeout: Add timeout remaining/expires APIs
Add tick-based (i.e. precision resistant) inspection APIs for kernel
timeouts visible via k_timer, k_delayed_work, and thread timeouts
(i.e. pended/sleeping threads).  These are each available in
"remaining" and "expires" variants returning time values relative to
current time and system start.  All have system calls where applicable
(i.e. everywhere but k_delayed_work, which is not a userspace API).

The pre-existing millisecond "remaining_get()" predicates for timer
and delayed work remain, but are expressed in terms of the newer
calls.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
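
A sketch of the tick-based inspection calls (editor's illustration; the timer is assumed started elsewhere):

    #include <kernel.h>

    void inspect(struct k_timer *timer)
    {
        /* Ticks until the timer fires, relative to now */
        k_ticks_t left = k_timer_remaining_ticks(timer);

        /* Absolute expiry time, relative to system start */
        k_ticks_t when = k_timer_expires_ticks(timer);

        ARG_UNUSED(left);
        ARG_UNUSED(when);
    }
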
Andy Ross
4c7b77a716 kernel/timeout: Add absolute timeout APIs
Add support for "absolute" timeouts, which are expressed relative to
system uptime instead of deltas from current time.  These allow for
more race-resistant code to be written by allowing application code to
do a single timeout computation, once, and then reuse the timeout
value even if the thread wakes up and needs to suspend again later.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
Andy Ross
cfeb07eded kernel/timeout: Enable 64 bit timeout precision
Add a CONFIG_TIMEOUT_64BIT kconfig that, when selected, makes the
k_ticks_t used in timeout computations pervasively 64 bit.  This will
allow much longer timeouts and much faster (i.e. more precise) tick
rates.  It also enables the use of absolute (not delta) timeouts in an
upcoming commit.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
Andy Ross
7832738ae9 kernel/timeout: Make timeout arguments an opaque type
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument.  Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created.  This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.

The existing K_MSEC() et. al. macros now return initializers for a
k_timeout_t.

The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.

Timer drivers, which receive an integer tick count in their
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.

For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided.  When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.

Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions.  These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig.  These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.

k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.

Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate.  Also in queue.c (when POLL was
enabled), a similar loop was needlessly used to retry the
k_poll() call after a spurious failure.  But k_poll() does not fail
spuriously, so the loop was removed.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
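
A sketch of code adapting to the opaque type (editor's illustration):

    #include <kernel.h>

    int take_with_policy(struct k_sem *sem, k_timeout_t timeout)
    {
        /* K_NO_WAIT and K_FOREVER are k_timeout_t values now, so
         * comparisons use K_TIMEOUT_EQ() instead of integer tests.
         */
        if (K_TIMEOUT_EQ(timeout, K_NO_WAIT)) {
            return k_sem_take(sem, K_NO_WAIT);
        }
        return k_sem_take(sem, timeout);
    }

    void sleep_demo(void)
    {
        k_sleep(K_MSEC(100)); /* K_MSEC() builds a k_timeout_t */
        k_msleep(100);        /* equivalent of the original API */
    }
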
Oleg Zhurakivskyy
b1e1f64d14 global: Replace BUILD_ASSERT_MSG() with BUILD_ASSERT()
Replace all occurrences of BUILD_ASSERT_MSG() with BUILD_ASSERT()
as a result of merging BUILD_ASSERT() and BUILD_ASSERT_MSG().

Signed-off-by: Oleg Zhurakivskyy <oleg.zhurakivskyy@intel.com>
2020-03-31 07:18:06 +02:00
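
After the merge, the message argument is simply optional (editor's sketch; header placement assumed):

    #include <toolchain.h>

    BUILD_ASSERT(sizeof(void *) >= 4, "pointer width too small");
    BUILD_ASSERT(sizeof(int) >= 4); /* message may be omitted */
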
Daniel Leung
4e1637b54e kernel: add sys init level for SMP
This adds a sys init level which allows device and sys_init
to be done after SMP initialization, z_smp_init(), when all
cores are up and running.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-03-25 19:07:28 -04:00
Andrew Boie
a4c9190649 kernel: fix oops policy for k_thread_abort()
Don't generate a Z_OOPS() if k_thread_abort() is called on a
thread that isn't running. Just return to the caller instead,
much like how k_thread_join() functions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-25 10:23:12 -07:00
Carles Cufi
4b37a8f3a4 Revert "global: Replace BUILD_ASSERT_MSG() with BUILD_ASSERT()"
This reverts commit 8739517107.

Pull Request #23437 was merged by mistake with an invalid manifest.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2020-03-19 18:45:13 +01:00
Oleg Zhurakivskyy
8739517107 global: Replace BUILD_ASSERT_MSG() with BUILD_ASSERT()
Replace all occurrences of BUILD_ASSERT_MSG() with BUILD_ASSERT()
as a result of merging BUILD_ASSERT() and BUILD_ASSERT_MSG().

Signed-off-by: Oleg Zhurakivskyy <oleg.zhurakivskyy@intel.com>
2020-03-19 15:47:53 +01:00
Andrew Boie
28be793cb6 kernel: delete separate logic for priv stacks
This never needed to be put in a separate gperf table.
Privilege mode stacks can be generated by the main
gen_kobject_list.py logic, which we do here.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Andrew Boie
2dc2ecfb60 kernel: rename struct _k_object
Private type, internal to the kernel, not directly associated
with any k_object_* APIs. It is the return value of z_object_find().
Rename to struct z_object.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Andrew Boie
2f3a89fa8d kernel: rename _k_object_assignment
Private structure, rename to z_object_assignment

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Andrew Boie
4bad34e749 kernel: rename _k_thread_stack_element
Private data type, prefix with z_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Andrew Boie
f2734ab022 kernel: use a union for kobject data values
Rather than stuffing various values in a uintptr_t based on
type using casts, use a union for this instead.

No functional difference, but the semantics of the data member
are now much clearer to the casual observer since it is now
formally defined by this union.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Andrew Boie
fb1c29475f kernel: zero app shmem bss via SYS_INIT
Doesn't need to be directly in init.c.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-16 21:40:52 -04:00
Andrew Boie
80a0d9d16b kernel: interrupt/idle stacks/threads as array
The set of interrupt stacks is now expressed as an array. We
also define the idle threads and their associated stacks this
way. This allows for iteration in cases where we have multiple
CPUs.

There is now a centralized declaration in kernel_internal.h.

On uniprocessor systems, z_interrupt_stacks has one element
and can be used in the same way as _interrupt_stack.

The IRQ stack for CPU 0 is now set in init.c instead of in
arch code.

The extern definition of the main thread stack is now removed,
this doesn't need to be in a header.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-16 23:17:36 +02:00
Andrew Boie
322816eada kernel: add k_thread_join()
Callers will go to sleep until the thread exits, either normally
or crashing out.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-13 08:42:43 -04:00
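
A sketch of the new call (editor's illustration; the thread object is hypothetical):

    #include <kernel.h>

    extern struct k_thread worker_thread;

    void wait_for_worker(void)
    {
        /* Sleep until the thread exits, normally or by crashing out,
         * but give up after one second.
         */
        int rc = k_thread_join(&worker_thread, K_SECONDS(1));

        if (rc != 0) {
            /* thread is still running (timed out) or error */
        }
    }
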
Andrew Boie
5a2619e17a kernel: use z_swap_unlocked in k_thread_abort
z_swap_unlocked() uses the same construction of a
dummy spinlock; just use that and make the code simpler.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-12 10:57:02 -04:00
Andrew Boie
60be4eb653 kernel: remove comment in k_thread_abort()
z_reschedule_unlocked() is a no-op if the caller is
cooperative, because the logic that maintains the ready queue
ensures that the co-op thread is always at the front unless
some special handling is done like in k_yield(), which does
not happen here.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-12 10:57:02 -04:00
Joakim Andersson
bd3b4b0caf kernel: Add static threads to k_thread_foreach functions
Add iterating over the static threads for k_thread_foreach and
k_thread_foreach_unlocked iterator functions

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2020-03-12 10:48:29 +02:00
Ioannis Glaropoulos
1c56f87321 kernel: fatal: fix indentation in z_fatal_error
Fix indentation error in z_fatal_error().

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-11 10:26:36 +02:00
Ioannis Glaropoulos
9d184286d3 kernel: fatal: allow tests to continue upon spurious ISR trigger
We want to be regression-testing the spurious ISR functionality.
Therefore, in z_fatal_error() we need to allow a test to continue
if an error has occurred due to a spurious IRQ being triggered.
Only in test mode do we allow the function to return without an
error. In normal mode the current thread will be aborted.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-11 10:26:36 +02:00
Ioannis Glaropoulos
49fb5d0812 kernel: fatal: check for esf validity when inspecting nested IRQ
For architectures that support detection of nested interrupts,
we need to check the validity of the exception stack frame,
before we can supply it as a pointer to the function that
evaluates whether we are in a nested interrupt context. This
commit adds the required esf pointer checks in z_fatal_error().

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-11 10:26:36 +02:00
Andrew Boie
c4fbc08347 kernel: fixup thread monitor locking
The lock in kernel/thread.c was pulling double-duty, protecting
both the thread monitor linked list and also serializing access
to k_thread_suspend/resume functions.

The monitor list now has its own dedicated lock.

The object tracing test has been updated to use k_thread_foreach().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-10 16:09:24 -04:00
Flavio Ceolin
8ae822c9fa kernel: random: ifdef z_early_boot_rand_get
This function had a call to sys_rand_get() even without a random
source. Zephyr is built with linkage garbage collection, and this
function is called only if either ENTROPY_HAS_DRIVER or
TEST_RANDOM_GENERATOR is enabled, and these options automatically
enable a random source.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-03-10 21:12:28 +02:00
Andrew Boie
9f0acd44a4 kernel: add APIs for atomic ops on pointers
The existing APIs are insufficient on 64-bit systems as atomic_t
is 32-bits wide.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-10 10:18:16 -04:00
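
A sketch of the pointer-sized atomics (editor's illustration; struct config is hypothetical):

    #include <kernel.h>

    struct config;

    static atomic_ptr_t current_cfg;

    void publish(struct config *newc)
    {
        struct config *old = atomic_ptr_get(&current_cfg);

        /* Install the new pointer only if nobody beat us to it;
         * safe even where pointers are wider than atomic_t.
         */
        if (atomic_ptr_cas(&current_cfg, old, newc)) {
            /* old configuration can be retired */
        }
    }
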
Andrew Boie
60e0019751 kernel: fix return type for atomic_cas()
In some cases this was a bool, in some cases an integer value
of 1 or 0. Standardize on bool.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-10 10:18:16 -04:00
Andrew Boie
6cf496f324 kernel: use sched lock for k_thread_suspend/resume
This logic should be using the sched_lock and not its own
separate lock for these two functions.

Some simplifications were made; z_thread_single_resume and
z_thread_single_suspend were only used in one place, and there was
some redundant logic for whether to reschedule in the suspend case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-10 09:57:58 -04:00
Flavio Ceolin
fbeaa0a510 kernel: Stack pointer random depends on MULTITHREADING
Don't pretend we have stack randomization without multithreading.
When multithreading is disabled the "main" thread never starts. Zephyr
will run on the stack used for the z_cstart(), which on most
architectures is the interrupt stack.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-03-02 22:49:37 +02:00
Andrew Boie
896e32b414 kernel: remove problematic pend() assertion
This assertion, if built in, allows user threads to crash
the kernel in a critical section by passing a negative timeout
value, creating a DoS attack vector.

Remove this assertion, immediately below it there's a check
which just resets it to 0 anyway.

Fixes: #22999

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-21 08:57:07 -08:00
Peter Bigot
d8146d6c6d kernel: work_q: fix return value in non-error case
A recent patch allowed an error code to be returned even though the
execution path treated it as a non-error condition.  Clear the code
before returning.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-02-20 17:50:05 +02:00
Andy Ross
a2f6826f9c kernel/thread: Don't clobber arch initialization of switch_handle
The recent synchronization work required that the kernel guarantee
switch_handle is non-null, but it did it in a way that works for ARC
and x86_64 but would clobber the work xtensa had already done to
populate that field.

There's no point: just make this an assert, as it's always been the
arch layer's job.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-19 08:29:35 -05:00
Luiz Augusto von Dentz
038d727c18 kernel: work: Return error if timeout cannot be aborted
This is aligned with the documentation which states that an error shall
be returned if the work has been completed:

  '-EINVAL Work item is being processed or has completed its work.'

Though in order to be able to resubmit from the handler itself, it
needs to be able to distinguish when the work is already completed, so
instead of -EINVAL it returns -EALREADY when the work is considered to
be completed.

Fixes #22803

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2020-02-17 22:37:26 +02:00
Ioannis Glaropoulos
3a3364eef8 kernel: fatal: unlock IRQs in early return points in z_fatal_error
We need to unlock IRQs in early return points of
z_fatal_error() functions; not only at the normal
return point.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-14 20:47:37 +02:00
Andy Ross
7353c7f95d kernel/userspace: Move syscall_frame field to thread struct
The syscall exception frame was stored on the CPU struct during
syscall execution, but that's not right.  System calls might "feel
like" exceptions, but they're actually perfectly normal kernel mode
code and can be preempted and migrated between CPUs at any time.

Put the field on the thread struct.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-08 08:51:04 -05:00
Andy Ross
8153144de0 kernel/fatal: Fatal errors must not be preempted
The code underneath z_fatal_error() (which is usually run in an
exception context, but is not required to be) was running with
interrupts enabled, which is a little surprising.

The only bug present currently is that the CPU ID extracted for
logging is subject to a race (i.e. it's possible but very unlikely
that such a handler might migrate to another CPU after the error is
flagged and log the wrong CPU ID), but in general users with custom
error handlers are likely to be surprised when their dying threads
gets preempted by other code before they can abort.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-08 08:51:04 -05:00
Andy Ross
eefd3daa81 kernel/smp: arch/x86_64: Address race with CPU migration
Use of the _current_cpu pointer cannot be done safely in a preemptible
context.  If a thread is preempted and migrates to another CPU, the
old CPU record will be wrong.

Add a validation assert to the expression that catches incorrect
usages, and fix up the spots where it was wrong (most important being
a few uses of _current outside of locks, and the arch_is_in_isr()
implementation).

Note that the resulting _current expression now requires locking and
is going to be somewhat slower.  Longer term it's going to be better
to augment the arch API to allow SMP architectures to implement a
faster "get current thread pointer" action than this default.

Note also that this change means that "_current" is no longer
expressible as an lvalue (long ago, it was just a static variable), so
the places where it gets assigned now assign to _current_cpu->current
instead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-08 08:51:04 -05:00
Andrew Boie
efc5fe07a2 kernel: overhaul unused stack measurement
The existing stack_analyze APIs had some problems:

1. Not properly namespaced
2. Accepted the stack object as a parameter, yet the stack object
   does not contain the necessary information to get the associated
   buffer region; the thread object is needed for this
3. Caused a crash on certain platforms that do not allow inspection
   of unused stack space for the currently running thread
4. No user mode access
5. Separately passed in thread name

We deprecate these functions and add a new API
k_thread_stack_space_get() which addresses all of these issues.

A helper API log_stack_usage() was also added, which resembles
STACK_ANALYZE() in functionality.

Fixes: #17852

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-08 10:02:35 +02:00
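
A sketch of the replacement API (editor's illustration):

    #include <kernel.h>
    #include <sys/printk.h>

    void report_headroom(struct k_thread *t)
    {
        size_t unused;

        /* Usable from user mode, unlike the deprecated
         * stack_analyze APIs.
         */
        if (k_thread_stack_space_get(t, &unused) == 0) {
            printk("unused stack: %zu bytes\n", unused);
        }
    }
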
Anas Nashif
73008b427c tracing: move headers under include/tracing
Move tracing.h to include/tracing/ to align with subsystem reorg.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-02-07 15:58:05 -05:00
Andrew Boie
d1f50122f9 kernel: move timing externs to public header
These arch_timing_ defines get used in certain timer
drivers and need to be in the public include space,
and not the private kernel headers.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-06 23:07:37 -05:00
Andy Ross
5737b5c843 kernel/sched: Re-add IPI calls on k_wakeup() and k_thread_priority_set()
These got dropped by an earlier patch, but are required on SMP systems
so synchronously notify other CPUs of changed scheduler state.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-04 17:50:11 -05:00
Andy Ross
c44d566aee kernel/sched: Re-fix SMP wait-for-switch on interrupt exit
This got clobbered by commit adac4cbafa in what I think was a rebase
mistake.  Without it, on SMP systems it's possible to select a new
_current thread and try to return into it before another CPU has
actually finished switching away from it.

Interestingly: the frequency with which this bug got caught once it
was reintroduced was much, much higher than it was when it was fixed
the first time, due to the instruction pointer poisoning introduced in
the interim.  Incompletely saved threads now have deliberately broken
state when assertions are enabled and will panic synchronously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-04 17:50:11 -05:00
Andy Ross
96ccc46e03 kernel/sched: Put k_thread_start() under a single lock
Similar to the suspend refactoring earlier, this really needs to be
done in an atomic block.  There were two confirmable races here,
though it's not completely clear either was being hit in practice:

1. The bit operations in z_mark_thread_as_started() aren't atomic so
   it needs to be protected.

2. The intermediate state in z_ready_thread() could result in a dead
   or suspended thread being added to the ready queue if another
   context tried a simultaneous abort or suspend.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-03 09:31:56 -05:00
Andy Ross
ed6b4fb21c kernel/sched: Properly synchronize pend()
Kernel wait_q's and the thread pended_on backpointer are scheduler
state and need to be modified under the scheduler lock.  There was one
spot in pend() where they were not.

Also unpack z_remove_thread_from_ready_q() into an unsynchronized
utility so that it can be called by this process in a single lock
block.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-03 09:31:56 -05:00
Andy Ross
b8ff63e3c7 kernel/sem: Fix SMP race
This had the same race that queue did: you have to be 100% done with
state management before calling z_ready_thread(), because another CPU
can pick up the thread before the return value was set.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-03 09:31:56 -05:00
Daniel Leung
adac4cbafa sched: smp: fix thread marked dead but still running
Under SMP, when a thread is marked aborting, this thread may still
be running on another CPU. However, if there is only one thread
available to run, this thread may be selected to run again due to
next_up() not checking for the aborting state. Moreover, when
there is no IPI to signal to other cores that k_thread_abort() has
been called, the k_thread_abort() target thread is marked dead after
a new thread is selected to run. This causes the original thread
calling k_thread_abort() to mistakenly assume that the target thread
is no longer running, and it returns.

Note that, with working IPI, z_sched_ipi() is called as an ISR
to mark the target thread dead. A new thread is then selected to
run, so that the target thread would not be selected due to it
being dead.

This moves the code to mark thread dead into next_up(), where
the next best thread is selected, and the current thread being
swapped out. z_sched_ipi() now becomes an empty function, and
calls to it are removed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-01-31 11:46:35 -05:00
Anas Nashif
471ffbe77d coverage: do not dump coverage data by default
Only dump data when we are interested in analysing coverage. By
default just collect the data.

CONFIG_COVERAGE_DUMP is used to control this behaviour.

This will help speed up sanitycheck and will avoid lots of noise in the
log when some tests with coverage enabled fail. Dumping data to the
console is also suspected to be one of the reasons why qemu hangs in CI.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-30 16:04:03 -05:00
Anas Nashif
7605ac2c4c kernel: thread: fix string for _THREAD_PRESTART
_THREAD_PRESTART means the thread has not been started yet and is
being set up; this is the case, for example, when starting a thread
with a timeout.  We do not have a 'restart' thread state.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-29 13:17:19 -08:00
Sebastian Bøe
fdac7b3319 cmake: Add target for generating header files
Before C sources can be compiled, any generated header that they
include must be generated.  Currently, the target 'offsets_h' happens
to depend directly or indirectly on all generated headers.

This means that to compile safely, one can simply depend on
'offsets_h'. But this is coincidental and might not be true in the
future.

To be able to safely depend on a target that represents all generated
headers being ready, we introduce the target
'zephyr_generated_headers'.

Any third-party build scripts can now safely depend on
'zephyr_generated_headers' and be protected from any internal changes
to the build system, like the removal of offsets_h.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2020-01-29 11:44:57 -06:00
Andrew Boie
6f654bbafd mempool: use k_malloc heap for ISR allocations
Fixes an issue where calling z_thread_malloc() would
borrow the resource pool of whatever thread happened
to be interrupted at the time.
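
A sketch of the change (simplified; _HEAP_MEM_POOL stands in here for
the pool backing k_malloc(), an assumption about the exact name):

    void *z_thread_malloc(size_t size)
    {
            struct k_mem_pool *pool;

            if (k_is_in_isr()) {
                    /* ISRs have no resource pool of their own: fall
                     * back to the heap backing k_malloc() instead of
                     * borrowing from the interrupted thread. */
                    pool = _HEAP_MEM_POOL;
            } else {
                    pool = _current->resource_pool;
            }

            return (pool != NULL) ? k_mem_pool_malloc(pool, size) : NULL;
    }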

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-24 09:27:59 -08:00
Krzysztof Chruscinski
a8b5a2e65e kernel: device: Add const qualifier to device_config
The device config structure is placed in the ROM section, but the
const qualifier was missing.  The lack of the qualifier suggested that
the structure lives in RAM (ram_report is also fooled by this).  Add
the const qualifier to state explicitly that it belongs in ROM.
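
The effect on placement, sketched with a toy structure (illustrative,
not the actual device_config definition):

    #include <stdint.h>

    struct cfg { uint32_t base_addr; };

    static struct cfg cfg_in_ram = { 0x40000000 };       /* .data, RAM   */
    static const struct cfg cfg_in_rom = { 0x40000000 }; /* .rodata, ROM */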

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2020-01-22 06:32:36 -06:00
Andy Ross
4c2fc2aed7 kernel/queue: Fix SMP race
Calling z_ready_thread() means the thread is now ready and can wake up
at any moment on another CPU.  But we weren't finished setting the
return value!  So the other side could wake up with a spurious "error"
condition if it ran too soon.  Note that on systems with a working
IPI, that wakeup can happen much faster than you might think.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
Andy Ross
3235451880 kernel/swap: Add SMP "wait for switch" synchronization
On SMP, there is an inherent race when swapping: the old thread adds
itself back to the run queue before calling into the arch layer to do
the context switch.  The former is properly synchronized under the
scheduler lock, and the later operates with interrupts locally
disabled.  But until somewhere in the middle of arch_switch(), the old
thread (that is in the run queue!) does not have complete saved state
that can be restored.

So it's possible for another CPU to grab a thread before it is saved
and try to restore its unsaved register contents (which are garbage --
typically whatever state it had at the last interrupt).

Fix this by leveraging the "swapped_from" pointer already passed to
arch_switch() as a synchronization primitive.  When the switch
implementation writes the new handle value, we know the switch is
complete.  Then we can wait for that in z_swap() and at interrupt
exit.
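
The mechanism reduces to a publish/wait pair, sketched here
(simplified; switch_handle is the real thread field):

    /* Arch layer: publishing the handle is the LAST step of the
     * context save, so the store doubles as a completion signal. */
    *switched_from = old_thread;

    /* z_swap() / interrupt exit: never run a thread whose previous
     * CPU has not finished saving its state. */
    while (new_thread->switch_handle == NULL) {
            /* spin: the switch is still in flight on another CPU */
    }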

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
Andy Ross
e06ba702d5 kernel/sched: Address thread abort termination delay issue on SMP
It's possible for a thread to abort itself simultaneously with an
external abort from another thread.  In fact this is common in our
test suite, as ztest will abort its own spawned threads at the end of a
test, since they tend to be exiting on their own anyway.

When that happens, the thread marks itself DEAD and does all its
scheduler bookkeeping, but it is STILL RUNNING on its own stack until
it makes its way to its final swap.  The external context would see
that "dead" metadata and return from k_thread_abort(), allowing the
next test to reuse and spawn the same thread struct while the old
context was still running.  Obviously that's bad.

Unfortunately, this is impossible to address completely without
modifying every SMP architecture to add an API-visible hook to every
swap that signals completion.  In practice the best we can do is add a
delay.  But note the optimization: almost always, the scheduler IPI
catches the running thread and kills it from interrupt context
(i.e. on a different stack).  When that happens, we know that the
interrupted thread will never be resumed (because it's dead) and can
elide the delay.  We only pay the cost when we actually detect a race.
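
In sketch form (the helper names are assumptions):

    /* Wait for the aborted thread to actually vacate its stack, but
     * skip the wait in the common case where the scheduler IPI has
     * already killed it from interrupt context. */
    while (thread_is_aborting(thread) && !aborted_from_isr(thread)) {
            k_busy_wait(100);   /* pay the cost only on a real race */
    }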

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
Andy Ross
60247ca149 kernel/sched: Correct IPI usage
These two spots were calling z_sched_ipi() (the IPI handler run under
the ISR, which is a noop here because obviously the current thread
isn't DEAD) and not arch_sched_ipi() (which triggers an IPI on other
CPUs to inform them of scheduling state changes), presumably because
of a typo.

Apparently we don't have tests for k_wakeup() and
k_thread_priority_set() that are sensitive to latency in SMP
contexts...
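
The two functions are easy to confuse; roughly:

    arch_sched_ipi();  /* sender side: interrupt the other CPUs so
                        * they notice the scheduling state change   */
    z_sched_ipi();     /* receiver side: the handler each CPU runs
                        * under the ISR in response                 */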

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
Andy Ross
86430d8d46 kernel: arch: Clarify output switch handle requirements in arch_switch
The original intent was that the output handle be written through the
pointer in the second argument, though not all architectures used that
scheme.  As it turns out, that write is becoming a synchronization
signal, so it's no longer optional.

Clarify the documentation in arch_switch() about this requirement, and
add an instruction to the x86_64 context switch to implement it as
originally envisioned.
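
The clarified contract, paraphrased as a comment on the arch_switch()
signature of this era:

    /* Switch context to `switch_to`.  The outgoing thread's new
     * handle MUST be written through `switched_from`, and only once
     * its state is fully saved: that store is now the signal other
     * CPUs wait on before resuming the thread, so it is no longer
     * optional. */
    static inline void arch_switch(void *switch_to, void **switched_from);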

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
Ulf Magnusson
40b49e22ea kernel: kconfig: Make SCHED_IPI_SUPPORTED invisible
Toggling this symbol probably doesn't make sense, because the
architecture is already known when Kconfig runs.

SCHED_IPI_SUPPORTED is enabled through being selected by the ARC_CONNECT
(maybe that one shouldn't be configurable either) and X86_64 symbols.

Note that it's not possible to disable the symbol when it's being
selected, so trying to turn it off on e.g. X86_64 won't work either.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2020-01-20 18:38:10 -05:00
Anas Nashif
756d8b03e2 kernel: queue: runtime error handling
Runtime error handling for k_queue_append_list and k_queue_merge_slist.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-20 17:19:54 -05:00
Anas Nashif
1ed67d1d51 kernel: stack: error handling
Add runtime error checking for both k_stack_push and k_stack_cleanup.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-20 17:19:54 -05:00
Anas Nashif
11b9365542 kernel: msgq: error handling
Add runtime error handling for k_msgq_cleanup. We return 0 on success
now and -EAGAIN when cleanup is not possible.
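
Callers can now check the result, e.g.:

    int err = k_msgq_cleanup(&my_msgq);

    if (err == -EAGAIN) {
            /* cleanup was not possible; try again later */
    }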

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-20 17:19:54 -05:00
Anas Nashif
dfc2bbcd3c kernel: mem_slab: error handling
Add runtime error checking for k_mem_slab_init and replace asserts with
returning error codes.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-20 17:19:54 -05:00
Anas Nashif
154af912e8 kernel: work_q: error handling
When trying to cancel a NULL work queue, return -EAGAIN.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-20 17:19:54 -05:00
Anas Nashif
361a84d07f kernel: pipe: runtime error checking
Add runtime error checking to k_pipe_cleanup and k_pipe_get and remove
asserts.
Adapted the test which was expecting a fault to handle errors instead.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-20 17:19:54 -05:00
Anas Nashif
5076a83ef5 kernel: semaphore: optimize code
Remove static helper functions used only once and integrate them into
calling functions.
In k_sem_take, return at the end.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-20 17:19:54 -05:00
Anas Nashif
928af3ce09 kernel: semaphore: k_sem_init error checking
Check for errors at runtime and stop depending on ASSERTs.
This changes the API for
- k_sem_init

k_sem_init now returns -EINVAL on invalid data.
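
Usage after this change, e.g.:

    struct k_sem sem;
    int err = k_sem_init(&sem, 0, 1);

    if (err == -EINVAL) {
            /* invalid data, e.g. an initial count above the limit */
    }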

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-20 17:19:54 -05:00
Anas Nashif
86bb2d06d7 kernel: mutex: add error checking
k_mutex_unlock will now perform error checking and return on failures.

If the current thread does not own the mutex, we will now return -EPERM.
In the unlikely situation where we own the lock but the lock count is
zero, we assert.  This is considered undefined behaviour and should not
happen.
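
For example:

    int err = k_mutex_unlock(&my_mutex);

    if (err == -EPERM) {
            /* the current thread does not own the mutex */
    }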

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-20 17:19:54 -05:00
Andrew Boie
a594ca7c8f kernel: cleanup and formally define CPU start fn
The "key" parameter is legacy, remove it.

Add a typedef for the expected function pointer type.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Anas Nashif
0ad67650f2 tracing: better positioning of tracing points
Improve the positioning of tracing calls.  Avoid multiple calls and
missed events caused by complex logic.  Trace events where they
actually happen.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-09 11:21:19 -05:00
Anas Nashif
1530819e12 tracing: remove duplicate tracing of thread creation
This is already being called in z_setup_new_thread.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-09 11:21:19 -05:00
Andy Ross
8bdabcc46b kernel/sched: Move thread suspend and abort under the scheduler lock
Historically, these routines were placed in thread.c and would use the
scheduler via exported, synchronized functions (e.g. "remove from
ready queue").  But those steps were very fine grained, and there were
races where the thread could be seen by other contexts (in particular
under SMP) in an intermediate state.  It's not completely clear to me
that any of these were fatal bugs, but it's very hard to prove they
weren't.

At best, this is fragile.  Move the z_thread_single_suspend/abort()
functions into the scheduler and do the scheduler logic in a single
critical section.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-08 14:21:10 +01:00
Steven Wang
9b909f2d69 kernel/mutex: code improvement
Line "new_prio = mutex->owner_orig_prio" is unnecessary. So
remove it.

Signed-off-by: Steven Wang <steven.l.wang@linux.intel.com>
2020-01-07 17:13:37 +01:00
Peter Bigot
74ef395332 kernel: move test of kernel startup state to more visible location
The original implementation left this function hidden in init.h, which
prevented it from showing up in documentation.  Move it to kernel.h,
and document it consistent with the other functions that allow caller
customization based on context.
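
Assuming the function in question is k_is_pre_kernel() (an inference
from the message, not stated explicitly), callers can use it like so:

    if (k_is_pre_kernel()) {
            /* early init: no threads or scheduling available yet */
    } else {
            /* full kernel services are up */
    }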

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-01-06 13:55:31 -05:00