Commit Graph

2015 Commits

Author SHA1 Message Date
Benoit Leforestier
9915b4ec4e C++: Fix compilation error "invalid conversion"
When some headers are included into a C++ source file, this kind of
compilation error is generated:
error: invalid conversion from 'void*'
	to 'u32_t*' {aka 'unsigned int*'} [-fpermissive]

Signed-off-by: Benoit Leforestier <benoit.leforestier@gmail.com>
2019-05-03 14:27:07 -04:00
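
For illustration, the kind of conversion the message above describes: C permits an implicit conversion from 'void*' that C++ rejects. The function below is hypothetical; the explicit cast is the usual fix, not necessarily the exact change made in this commit.

#include <stdlib.h>

typedef unsigned int u32_t;

void example(void)
{
    /* Valid C: void* converts implicitly to any object pointer.
     * In C++ this line fails: invalid conversion from 'void*' to 'u32_t*'.
     */
    u32_t *p = malloc(sizeof(*p));

    /* Accepted by both languages: make the conversion explicit. */
    u32_t *q = (u32_t *)malloc(sizeof(*q));

    free(p);
    free(q);
}
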
Ioannis Glaropoulos
873dd10ea4 kernel: mem_domain: update name/doc of API function for partition add
Update the name of the mem-domain API function that adds a partition,
so that it complies with the 'z_' prefix convention, and correct
the function documentation.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-05-02 11:37:38 -04:00
Andrew Boie
afda764ee6 kernel: increase workq sizes if COVERAGE=y
The defaults are too small if coverage is enabled.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-05-02 07:23:35 -04:00
Flavio Ceolin
2e0095a99c security: kernel: Fix STACK_POINTER_RANDOM dependency
STACK_POINTER_RANDOM depends on a random generator; this can be either a
non-random generator (used for testing purposes) or a real random
generator. Make this dependency explicit in Kconfig to avoid linking
problems.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-05-01 08:34:13 -07:00
Andrew Boie
09dc929d41 userspace: fix copy from user locking
We don't actually need spinlocks here.

For user_copy(), we are checking that the pointer/size passed in
from user mode represents an area that the thread can read or
write to. Then we do a memcpy into the kernel-side buffer,
which is used from then on. It's OK if another thread scribbles
on the buffer contents during the copy, as we have not yet
begun any examination of its contents.

For the z_user_string*_copy() functions, it's also possible
that another thread could scribble on the string contents,
but we do no analysis of the string other than to establish
a length. We just need to ensure that when these functions
exit, the copied string is NULL terminated.

For SMP, the spinlocks are removed as they will not prevent a
thread running on another CPU from changing the buffer/string
contents; we just need to safely deal with that possibility.

For UP, the locks do prevent another thread from stepping
in, but it's better to just safely deal with it rather than
affect the interrupt latency of the system.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-18 17:13:08 -04:00
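
A minimal sketch of the invariant described above; the function name and structure are illustrative, not the actual kernel implementation:

#include <errno.h>
#include <string.h>

/* Another thread may scribble on src during the copy; that is fine as
 * long as the kernel-side copy is guaranteed to be NULL terminated by
 * the time this returns. (strnlen() is POSIX, not C99.)
 */
static int user_string_copy_sketch(char *dst, const char *src, size_t maxlen)
{
    size_t len = strnlen(src, maxlen);  /* only establishes a length */

    if (len == maxlen) {
        return -EINVAL;                 /* unterminated source */
    }
    memcpy(dst, src, len);
    dst[len] = '\0';                    /* enforce termination on exit */
    return 0;
}
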
Flavio Ceolin
4f99a38b06 arch: all: Remove not used struct _caller_saved
The struct _caller_saved is not used. Most architectures push
the registers onto the stack automatically; in the other architectures
the exception code does it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-04-18 12:24:56 -07:00
Flavio Ceolin
d61c679d43 arch: all: Remove legacy code
The struct _kernel_ach exists only because ARC's port needed it; in
all other ports it was defined as an empty struct. It turns out that
this struct is not required even for ARC anymore; it is legacy
code from nanokernel times.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-04-18 12:24:56 -07:00
Wentong Wu
8646a8e4f5 tests/kernel/mem_protect/stackprot: stack size adjust
Revert commit 3e255e968, which adjusted the stack size on the
qemu_x86 platform for coverage testing but broke other
platforms' CI tests.

Fixes: #15379.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2019-04-12 10:06:43 -04:00
Wentong Wu
3e255e968a tests: adjust stack size for qemu_x86's coverage test
With SDK 0.10.0, more stack is consumed when coverage is
enabled, so adjust the stack size to fix the stack overflow issue.

Fixes: #15206.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2019-04-11 17:59:39 -04:00
Anas Nashif
3ae52624ff license: cleanup: add SPDX Apache-2.0 license identifier
Update the files which contain no license information with the
'Apache-2.0' SPDX license identifier.  Many source files in the tree are
missing licensing information, which makes it harder for compliance
tools to determine the correct license.

By default all files without license information are under the default
license of Zephyr, which is Apache version 2.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-04-07 08:45:22 -04:00
Andrew Boie
9f04c7411d kernel: enforce usage of CONFIG_TEST_USERSPACE
If a test tries to create a user thread, and the platform
supports user mode, and CONFIG_TEST_USERSPACE has not been
enabled, fail an assertion.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-06 14:30:42 -04:00
Andrew Boie
4e5c093e66 kernel: demote K_THREAD_STACK_BUFFER() to private
This macro is slated for complete removal, as it's not possible
on arches with an MPU stack guard to know the true buffer bounds
without also knowing the runtime state of its associated thread.

As removing this completely would be invasive to where we are
in the 1.14 release, demote to a private kernel Z_ API instead.
The current way that the macro is being used internally will
not cause any undue harm; we just don't want any external code
depending on it.

The final work to remove this (and overhaul stack specification in
general) will take place in 1.15 in the context of #14269.

Fixes: #14766

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-05 16:10:02 -04:00
Wentong Wu
b991962a2e tests: adjust stack size for qemu_x86 and mps2_an385's coverage test
With SDK 0.10.0, more stack is consumed when coverage is enabled
on the qemu_x86 and mps2_an385 platforms. Adjust the stack size for most
of the test cases, otherwise there will be stack overflows.

Fixes: #14500.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2019-04-04 08:23:13 -04:00
Patrik Flykt
7c0a245d32 arch: Rename reserved function names
Rename reserved function names in arch/ subdirectory. The Python
script gen_priv_stacks.py was updated to follow the 'z_' prefix
naming.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-04-03 17:31:00 -04:00
Andrew Boie
f0835674a3 lib: os: add sys_mutex data type
For systems without userspace enabled, these work the same
as a k_mutex.

For systems with userspace, the sys_mutex may exist in user
memory. It is still tracked as a kernel object, but has an
underlying k_mutex that is looked up in the kernel object
table.

Future enhancements will optimize sys_mutex to not require
syscalls for uncontended sys_mutexes, using atomic ops
instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-03 13:47:45 -04:00
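
A usage sketch of the new type described above; the calls follow the sys_mutex API as introduced here, though the header path and exact signatures should be verified against the tree:

#include <misc/mutex.h>   /* assumed header location in this era */

SYS_MUTEX_DEFINE(my_lock);   /* may live in user memory */

void worker(void)
{
    /* Under userspace this is backed by a kernel-object k_mutex looked
     * up in the object table; without userspace it behaves as k_mutex.
     */
    if (sys_mutex_lock(&my_lock, K_FOREVER) == 0) {
        /* ... critical section ... */
        sys_mutex_unlock(&my_lock);
    }
}
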
Andrew Boie
1dc6612d50 userspace: do not track net_context as a kobject
The socket APIs no longer deal with direct net context
pointers.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-03 13:47:45 -04:00
Andrew Boie
526807c33b userspace: add const qualifiers to user copy fns
The source data is never modified.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-29 22:21:16 -04:00
Andrew Boie
ae0d1b2b79 kernel: sched: move stack sentinel check earlier
Checking the stack sentinel may abort the current thread, so
make this check before we determine what the next thread
to run is.

Fixes: #15037

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-29 22:13:40 -04:00
Patrik Flykt
21358baa72 all: Update unsigned 'U' suffix due to multiplication
As the multiplication rule was updated, new unsigned suffixes
were added in the code.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
Patrik Flykt
24d71431e9 all: Add 'U' suffix when using unsigned variables
Add a 'U' suffix to values when computing and comparing against
unsigned variables.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
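
For illustration, the pattern these two commits apply throughout the tree; the function is hypothetical:

/* Give literals used with unsigned variables an unsigned 'U' suffix. */
static inline unsigned int scale_ticks(unsigned int ticks)
{
    unsigned int ms = ticks * 10U;   /* was: ticks * 10 */

    if (ms == 0U) {                  /* was: ms == 0 */
        ms = 1U;
    }
    return ms;
}
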
Daniel Leung
416d94cd30 kernel/mutex: remove object monitoring empty loop macros
There is some remaining code from object monitoring which simply
expands to empty loop macros. Remove it, as it is not
functional anyway.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2019-03-28 08:55:12 -05:00
David Brown
22029275d3 kernel: Clarify warning about no multithreading
Clarify the warning in the help for CONFIG_MULTITHREADING to make it
clear that many things will break if this is set to 'n'.

Signed-off-by: David Brown <david.brown@linaro.org>
2019-03-28 09:49:59 -04:00
Flavio Ceolin
2df02cc8db kernel: Make if/iteration evaluate boolean operands
The controlling expression of if and iteration statements must have
an essentially boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 22:06:45 -04:00
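
The kind of change MISRA-C rule 14.4 drives, shown on hypothetical code:

#include <stddef.h>

void drain(int count, int *p)
{
    /* Non-compliant: controlling expressions of essentially int type:
     *     if (count) { ... }    while (p) { ... }
     */

    /* Compliant: controlling expressions are essentially boolean. */
    if (count != 0) {
        while (p != NULL) {
            p = NULL;   /* placeholder body */
        }
    }
}
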
Flavio Ceolin
625ac2e79f spinlock: Change function signature to return bool
Functions z_spin_lock_valid and z_spin_unlock_valid are essentially
boolean functions; just change their signature to return a bool instead
of an integer.

MISRA-C rule 10.1

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 22:06:45 -04:00
Anas Nashif
42f4538e40 kernel: do not use k_busy_wait when on single thread
k_busy_wait() does not work when multithreading is disabled, so do not
try to wait during boot.

Fixes #14454

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-03-26 20:09:07 -04:00
Flavio Ceolin
abf27d57a3 kernel: Make statements evaluate boolean expressions
MISRA-C requires that the if statement has essentially Boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 14:31:29 -04:00
Flavio Ceolin
2ecc7cfa55 kernel: Make _is_thread_prevented_from_running return a bool
This function was returning an essentially boolean value. Just changing
the signature to return a bool.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 14:31:29 -04:00
Flavio Ceolin
a996203739 kernel: Use macro BIT for shift operations
The BIT macro uses an unsigned int, avoiding implementation-defined
behavior when shifting signed types.

MISRA-C rule 10.1

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 14:31:29 -04:00
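
A sketch of the hazard BIT() avoids; the macro below only mirrors the idea:

/* With a 32-bit int, shifting a signed 1 into the sign bit is
 * undefined; an unsigned operand is well defined.
 */
#define MY_BIT(n) (1UL << (n))                 /* mirrors BIT()'s approach */

static const unsigned long msb = MY_BIT(31);   /* OK */
/* static const int bad = 1 << 31;     undefined on 32-bit int */
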
Piotr Mienkowski
a3082e49a1 power: modify HAS_STATE_SLEEP_ Kconfig options
Add SYS_POWER_ prefix to HAS_STATE_SLEEP_, HAS_STATE_DEEP_SLEEP_
options to align them with names of power states they control.
Following is a detailed list of string replacements used:
s/HAS_STATE_SLEEP_(\d)/HAS_SYS_POWER_STATE_SLEEP_$1/
s/HAS_STATE_DEEP_SLEEP_(\d)/HAS_SYS_POWER_STATE_DEEP_SLEEP_$1/

Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
2019-03-26 13:27:55 -04:00
Piotr Mienkowski
17b08ceca5 power: clean up system power management function names
This commit cleans up the names of system power management functions by
ensuring that:
- all functions start with 'sys_pm_' prefix
- API functions which should not be exposed to the user start with '_'
- name of the function hints at its purpose

Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
2019-03-26 13:27:55 -04:00
Piotr Mienkowski
204311d004 power: rename Low Power States to Sleep States
There exist SoCs, e.g. STM32L4, where one of the low power modes
reduces CPU frequency and supply voltage but does not stop the CPU. Such
power modes are currently not supported by Zephyr.

To facilitate adding support for such a class of power modes in the future,
and to ensure the naming convention makes it clear that the currently
supported power modes stop the CPU, this commit renames Low Power States
to Sleep States and updates the documentation.

Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
2019-03-26 13:27:55 -04:00
Andy Ross
4521e0c111 kernel/sched: Mark sleeping threads suspended
On SMP, there was a bug where the logic that re-adds _current to the
run queue at swap time would accidentally reschedule threads that had
just gone to sleep, because the is_thread_prevented_from_running()
predicate only tests for threads that are "suspended" or "pending" and
not sleeping.

Overload _THREAD_SUSPENDED to indicate "sleeping" also.  Simple fix
for an immediate bug, though long term we really want to unify all the
blocked conditions to prevent this kind of state bug.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-23 19:28:15 -04:00
Andy Ross
e59d19628d kernel/sched: Rework prio validity assertion
This is throwing errors in static analysis, complaining that comparing
that a prio is higher and lower is impossible.  That is wrong to my
eyes (I swear I think it might be cueing off the names of the
functions, which invert "higher" and "lower" to match our reversed
priority numbers).

But frankly this was never a very readable macro to begin with.
Refactor to put the bounds into the term, so the static analyzer can
prove it locally, and add a build assertion to catch any errors (there
are none currently) where the low<->high priority range is invalid.

Long term, we should probably remove this macro, it doesn't provide
much value.  But removing it in response to a static analysis failure
is... not very responsible as a development practice.

Fixes #14816
Fixes #14820

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-23 09:53:55 -05:00
Andrew Boie
f4631d5b43 kernel: amend comment in k_thread_create handler
This behavior is expected and not of any concern.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-20 13:59:26 -07:00
Andrew Boie
d0035f9779 kernel: fix stack size check in k_thread_create
The pointer arithmetic used didn't account for ARC
supervisor mode stacks, which are allocated at the
end of the stack object. Use the new macro to know
exactly how much space is reserved.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-20 13:59:26 -07:00
Andy Ross
3dea408405 kernel/sched: Flag DEAD on correct thread in cross-CPU abort
Daniel Leung caught a good one: In the (SMP) case where we were
aborting a thread that was not currently scheduled, we were flagging
the DEAD state on _current and not the thread we were aborting!  This
wasn't as fatal as it seems, as the thread that called z_sched_abort()
would effectively go on living (as a zombie?) in a state where it
would always be preempted, but would otherwise remain schedulable.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-19 13:39:24 -05:00
Andrew Boie
7ea211256e userspace: properly namespace handler functions
Now prefixed with z_hdlr_ instead of just hdlr_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-18 09:23:11 -07:00
Andrew Boie
50be938be5 userspace: renamespace some internal macros
These private macros are now all prefixed with Z_.

Fixes: #14447

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-18 09:23:11 -07:00
Andrew Boie
b3eb510f5c kernel: fix atomic ops in user mode on some arches
Most CPUs have instructions like LOCK, LDREX/STREX, etc. which
allow for atomic operations without locking interrupts and which
can be invoked from user mode without complication. They typically
use compiler builtin atomic operations, or custom assembly,
to implement them.

However, some CPUs may lack these kinds of instructions, such
as Cortex-M0 or some ARC. They use these C-based atomic
operation implementations instead. Unfortunately these require
grabbing a spinlock to ensure proper concurrency with other
threads and ISRs. Hence, they will trigger an exception when
called from user mode.

For these platforms, which support user mode but not atomic
operation instructions, the atomic API has been exposed as
system calls.

Some of the implementations in atomic_c.c which can be instead
expressed in terms of other atomic operations have been removed.

The kernel test of atomic operations now runs in user mode to
prove that this works.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-18 09:18:00 -04:00
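
One example of the "expressed in terms of other atomic operations" point; the wrapper name is illustrative, and the header path is an assumption from this era:

#include <atomic.h>   /* assumed header location; verify */

/* A C-based atomic_clear() doesn't need its own locked implementation;
 * it is just atomic_set() with 0.
 */
static inline atomic_val_t my_atomic_clear(atomic_t *target)
{
    return atomic_set(target, 0);
}
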
Andy Ross
722aeead91 kernel/sched: Nonatomic swap workaround update for qemu behavior
The workaround for nonatomic swap had yet another edge case: it would
save off the _current pointer when pending a thread so that the next
time slice interrupt could test it to see if the swap had actually
happened before assuming that _current could be rescheduled (if it
just pended itself, that's impossible).  Then it would clear the
pending_current pointer so future interrupts wouldn't be confused.

BUT: it turns out that qemu, when faced with really rapid timer rates
that exceed its (host-based) timing accuracy, is perfectly willing to
"stack up" timer interrupts such that one goes pending before the
previous one is finished executing.  In that case, we can enter the
SECOND timer interrupt, to try timeslicing a SECOND time, STILL before
the PendSV exception has run to actually effect the context switch.
Except this time pending_current has been cleared and we try to
reschedule the pended _current thread incorrectly.  In theory real
hardware could do this too, though it would involve absolutely crazy
interrupt latency problems.

Work around this by moving the clear to the thread itself: immediately
after it wakes up from the pend call, it retakes a lock and clears
pending_current if it still matches _current.  That is not a perfect
fix: there remains a 2-3 instruction race between returning from pend
and locking interrupts again, during which a timer
interrupt will see an incorrect pointer.  But I hammered at this and
couldn't make qemu do that (i.e. return from a timer interrupt but
flag a new one in just a cycle or two).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-15 05:50:43 +01:00
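
A sketch of the workaround's shape; all names here are hypothetical stand-ins for the kernel internals described above, not the actual code:

extern struct k_spinlock sched_lock_sketch;   /* hypothetical */
extern struct k_thread *pending_current;      /* per the description */

static void clear_pending_current_sketch(struct k_thread *me)
{
    k_spinlock_key_t key = k_spin_lock(&sched_lock_sketch);

    if (pending_current == me) {
        pending_current = NULL;   /* cleared by the thread itself, post-pend */
    }
    k_spin_unlock(&sched_lock_sketch, key);
}
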
Ramakrishna Pallala
e1639b5345 device: Extend device_set_power_state API to support async requests
The existing device_set_power_state() API works only in synchronous
mode, and this is not desirable for devices (e.g. a gyro) which take
a long time (a few hundred ms) to suspend/resume.

To support async mode, a new callback argument is added to the API.
The device drivers can asynchronously suspend/resume and call the
callback function upon completion of the async request.

This commit adds the missing callback parameter to all the drivers
to make them compliant with the new API.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2019-03-14 14:26:15 +01:00
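
A hedged sketch of the extended call; the callback signature and state constant follow the API as described here, but should be verified against include/device.h:

static void gyro_pm_done(struct device *dev, int status,
                         void *context, void *arg)
{
    /* resume application logic once the slow transition finishes */
}

void suspend_gyro(struct device *gyro_dev)
{
    device_set_power_state(gyro_dev, DEVICE_PM_SUSPEND_STATE,
                           gyro_pm_done, NULL);
}
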
Ioannis Glaropoulos
cac20e91d8 kernel: userspace: correct documentation for Z_SYSCALL_MEMORY_ macros
Corrections in the documentation of arguments in
Z_SYSCALL_MEMORY, Z_SYSCALL_MEMORY_READ, and
Z_SYSCALL_MEMORY_WRITE macros.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-03-13 15:36:15 -07:00
Ioannis Glaropoulos
c686dd5064 kernel: enhance documentation of z_arch_buffer_validate
This commit enhances the documentation of z_arch_buffer_validate
describing the cases where the validation is performed
successfully, as well as the cases where the result is
undefined.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-03-13 15:36:15 -07:00
Andy Ross
ea1c99b11b kernel/sched: Fix k_yield() in SMP
This was always doing a remove/add of the _current thread to the run
queue, which is wrong because in SMP _current isn't in the queue to
remove.  But it went undetected until the recent dlist changes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross
8c1bdda33c kernel/sched: Fix spinlock validation glitch in SMP
In SMP, we are setting the _current pointer while holding the
scheduler spinlock locally, which means that when we try to release it
the validation layer (not the spinlock per se) will scream at us
because the thread that took the lock doesn't match the one releasing
it.

Special case this when validation is enabled.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross
f37e0c6e4d kernel/spinlock: Fix race in spinlock validation
The k_spin_lock() validation was setting the new owner of the spinlock
BEFORE the actual lock was taken, so it could race against other
processors trying the same thing.  Split the modification step out
into a separate function that can be called after we affirmatively
have the lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
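
The shape of the fix, sketched; function and field names are illustrative (the validation layer only exists with asserts enabled):

static inline void spin_lock_sketch(struct k_spinlock *l)
{
    __ASSERT(z_spin_lock_valid(l), "recursive spinlock %p", l);

    while (!atomic_cas(&l->locked, 0, 1)) {
        /* spin until we hold the lock */
    }
    z_spin_lock_set_owner(l);   /* owner recorded only after acquisition */
}
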
Andy Ross
b18685bcf1 kernel/sched: Clean up tracing hooks
The tracing fixes in commit e87193896a ("subsys: debug: tracing: Fix
thread tracing") were... not a readability win.  The point appears to
have been to put a tracing hook immediately before and after the
assignment to the _current pointer.  So do that in an abstracted
function and clean up _get_next_switch_handle() (which is a subtle and
important function already polluted with some unavoidable preprocessor
testing!)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross
42ed12a387 kernel/sched: arch/x86_64: Support synchronous k_thread_abort() in SMP
Currently thread abort doesn't work if a thread is currently scheduled
on a different CPU, because we have no way of delivering an interrupt
to the other CPU to force the issue.  This patch adds a simple
framework for an architecture to provide such an IPI, implements it
for x86_64, and uses it to implement a spin loop in abort for the case
where a thread is currently scheduled elsewhere.

On SMP architectures (xtensa) where no such IPI is implemented, we
fall back to waiting on an arbitrary interrupt to occur.  This "works"
for typical code (and all current tests), but of course it cannot be
guaranteed on such an architecture that k_thread_abort() will return
in finite time (e.g. the other thread on the other CPU might have
taken a spinlock and entered an infinite loop, so it will never
receive an interrupt to terminate itself)!

On non-SMP architectures this patch changes no code paths at all.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross
6ed59bc442 kernel/smp: Fix bitrot with the way the SMP "start flag" works
Tickless timers mean that k_busy_wait() won't work until after the
timer driver is initialized, which is very early but not as early as
SMP.  No need for it, just spin.

Also, the original code used a stack variable for the start flag, which
racily presumed that _arch_start_cpu() would come back synchronously
with the other CPU fired up and running right now.  The cleaned up smp
bringup API on x86_64 isn't so perky, so it exposed the bug.  The flag
just needs to be static.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross
aed8288196 kernel/sched: Handle aborting _current correctly in SMP
In SMP, _current is not "queued".  (The run queue only stores
unscheduled threads because we can't rely on the head of the list
being _current).  We weren't updating the cache choice, which would
flag swap_ok, so calling k_thread_abort(_current) (for example, when a
thread exits from its entry function) would try to switch back into
the thread and then run off the end of the function.

Amusingly this was more benign than you'd think.  Stumbled on it by
accident.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross
0f8bee9c07 kernel/smp: Warning cleanup
When CONFIG_SMP is enabled but CONFIG_MP_NUM_CPUS is 1 (which is a
legal configuration, though a weird one) this static function ends up
being defined but unused, producing a compiler warning.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross
c0183fdedd kernel/work_q: Fix locking across multiple queues
There was user error detection in the code, where racing insertions of
k_delayed_work items into different queues would be detected and
flagged as an error (honestly I don't see much value there -- Zephyr
doesn't as a general rule protect against errors like this, and
work_q's are inherently kernel things that don't require
userspace-style checking).

This got broken with spinlockification, where each work_q object got
its own lock, so the single lock wouldn't protect against the other
insert function any more.  As it happens, that was needless.  The core
synchronization on a work_q is in the internal k_queue object anyway
-- the lock in this file was only ever used for (very fast,
noncontending) delayed work insertion.  So go back to a global lock to
preserve the original behavior.

Fixes #14104

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-12 18:37:41 +01:00
Pawel Dunaj
b87920bf3c kernel: Make heap smallest object size configurable
Allow the application to choose the size of the smallest object taken
from the heap.

Signed-off-by: Pawel Dunaj <pawel.dunaj@nordicsemi.no>
2019-03-12 11:56:31 +01:00
Patrik Flykt
4344e27c26 all: Update reserved function names
Update reserved function names starting with one underscore, replacing
them as follows:
   '_k_' with 'z_'
   '_K_' with 'Z_'
   '_handler_' with 'z_handl_'
   '_Cstart' with 'z_cstart'
   '_Swap' with 'z_swap'

This renaming is done on both global and those static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.

Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.

Various generator scripts have also been updated as well as perf,
linker and usb files. These are
   drivers/serial/uart_handlers.c
   include/linker/kobject-text.ld
   kernel/include/syscall_handler.h
   scripts/gen_kobject_list.py
   scripts/gen_syscall_header.py

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00
Patrik Flykt
cf2d57952e kernel/sched: Rename scheduler spinlock
Rename scheduler spinlock sched_lock to sched_spinlock as it will
collide with the cleanup of the reserved function name _sched_lock(),
which will also be called sched_lock().

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00
Andrew Boie
576ebf4991 kernel: add config for Spectre V1 mitigation
This is off by default, but may be selected by the arch
configuration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-11 09:54:04 -07:00
Ioannis Glaropoulos
d69c2f8129 kernel: documentation for _setup_new_thread()
Add a note in the documentation of the _setup_new_thread()
function stating that the caller is responsible for
providing a size argument that corresponds to the available
thread stack area.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-03-09 11:57:24 -08:00
Ioannis Glaropoulos
edc9e4d245 arch: arm: userspace: Force arch-specific user local data reservation
This commit forces an architecture-specific implementation for
initializing the area for user mode local thread data. This
has been enforced already for ARC. We now do the same for ARM.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-03-09 11:57:24 -08:00
Flavio Ceolin
d9876be30c kernel: Make statements evaluate boolean expressions
MISRA-C requires that the if statement has essentially Boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-05 14:58:58 -08:00
Andrew Boie
e686aefe09 mbedtls: provide user mode access
The mbedtls library has some globals which result in faults
when user mode tries to access them.

Instantiate a memory partition for mbedtls's globals.
The linker will place all globals found by building this
library into this partition.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-05 08:27:20 -05:00
Andrew Boie
62fad96802 userspace: zero app memory bss earlier
Some init tasks may use some bss app memory areas and
expect them to be zeroed out. Do this much earlier
in the boot process, before any of the init tasks
run.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-05 08:27:20 -05:00
Andrew Boie
7707060959 userspace: get rid of app section placeholders
We used to leave byte-long placeholder symbols to ensure
that empty application memory sections did not cause
build errors that were very difficult to understand.

Now we use some relatively portable inline assembly to
generate a symbol, but don't take up any extra space.

The malloc and libc partitions are now only instantiated
if there is some data to put in them.

Fixes: #13923

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-04 08:05:16 -08:00
Andrew Boie
475d279382 userspace: clarify memory domain assertions
Some text added to help explain what is going on.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-04 08:05:16 -08:00
Andrew Boie
6dc3fd8e50 userspace: fix x86 issue with adding partitions
On x86, if a supervisor thread belonging to a memory domain
adds a new partition to that domain, subsequent context switches
to another thread in the same domain, or the thread dropping itself to
user mode, do not have the correct setup in the page tables.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-03 23:44:13 -05:00
Charles E. Youse
0ad4022e51 kernel/timeout: fix k_timer_remaining_get() when tickless
In some circumstances (e.g., a tickless kernel), k_timer_remaining_get()
would not account for time passed that didn't involve clock interrupts.
This adds a simple fix for that, and adds a test case.  In addition, the
return value of k_timer_remaining_get() is clamped at 0 in the case of
overdue timers and the API description is adjusted to reflect this.

Fixes: #13353

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-03-01 14:53:33 -08:00
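
A sketch of the clamping behavior described above; both helpers below are hypothetical, not the kernel's actual routines:

/* Remaining time is computed against the current (possibly tickless)
 * clock, then clamped so an overdue timer reports 0 rather than a
 * negative value.
 */
u32_t remaining_ms_sketch(struct k_timer *timer)
{
    s32_t ticks = timeout_remaining_ticks(timer);   /* hypothetical */

    return (ticks < 0) ? 0U : ticks_to_ms(ticks);   /* hypothetical */
}
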
Andrew Boie
d3c89fea4f kernel: move CONFIG_RETPOLINE definition
Retpolines were never completely implemented, even on x86.
Move this particular Kconfig to only concern itself with
the assembly code, and never default it on, since we
prefer SSBD instead.

We can restore the common kernel-wide CONFIG_RETPOLINE once
we have an end-to-end implementation.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-01 12:35:04 -08:00
Andy Ross
dff6b71450 kernel/sched: More nonatomic swap fixes
Nonatomic swap strikes again.  These issues are all longstanding, but
were unmasked by the dlist work in commit d40b8ce1fb ("sys: dlist:
Add sys_dnode_is_linked") where list node pointers become nulls on
removal.

The previous fix was for a specific case where a timeslicing interrupt
would try to slice out the "wrong" current thread because the thread
has "just" pended itself.  That was incomplete, because the parallel
code in k_sleep() didn't flag itself the same way.

And beyond that, it turns out to be basically impossible (now that I'm
thinking about it correctly) to prevent interrupt code from calling
into the scheduler to suspend a "just pended but not quite" current
and/or preempt away to another thread.  In any of these cases, the
scheduler modifications to the state bits remain correct but the queue
nodes may be corrupt because the thread was already removed from the
ready queue.  So we have to test and correct this at the lowest level,
where a thread is being removed from a priq: check that (1) it's the
ready queue and not a waitq, (2) it's the current thread, and (3) it's
already marked suspended and thus not in the queue.

There are lots of existing issues filed in the last few months all
pointing to odd instability on ARM platforms.  I'm reasonably certain
this is the root cause for most or all of them.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-27 12:07:34 -08:00
Piotr Mienkowski
f04a4c9deb power: rename CPU_LPS_n power states
CPU_LPS_n name used to indicate a low power state is cryptic and
incorrect. The low power states act on the whole SoC and not exclusively
on the CPU. This patch renames CPU_LPS_n states to LOW_POWER_n. Also,
the HAS_ pattern for Kconfig options is used in favor of the non-standard
_SUPPORTED. Naming of deep sleep states was adjusted accordingly.

Following is a detailed list of string replacements used:
s/SYS_POWER_STATE_CPU_LPS_(\d)_SUPPORTED/HAS_STATE_LOW_POWER_$1/
s/SYS_POWER_STATE_CPU_LPS_(\d)/SYS_POWER_STATE_LOW_POWER_$1/
s/SYS_POWER_STATE_DEEP_SLEEP_(\d)_SUPPORTED/HAS_STATE_DEEP_SLEEP_$1/

Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
2019-02-26 02:30:13 +01:00
Piotr Mienkowski
c75187587b power: simplify SYS_POWER_*_SUPPORTED Kconfig options
This commit removes dependency on SYS_POWER_LOW_POWER_STATES_SUPPORTED,
SYS_POWER_DEEP_SLEEP_STATES_SUPPORTED Kconfig options. Power management
SYS_POWER_LOW_POWER_STATES, SYS_POWER_DEEP_SLEEP_STATES options depend
now directly on specific power states supported by the given SoC. This
simplifies maintenance of SoC Kconfig files.

Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
2019-02-26 02:30:13 +01:00
Andrew Boie
f5951cd88f kernel: syscall_handler: get rid of stdarg
We can just implement this as a macro and not needlessly
run afoul of MISRA-C rule 17.1.

Fixes: #10012

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-25 13:42:03 -08:00
Andrew Boie
4ce652e4b2 userspace: remove APP_SHARED_MEM Kconfig
This is an integral part of userspace and cannot be used
on its own. Fold into the main userspace configuration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-23 07:43:55 -05:00
Andrew Boie
01100eadb8 kernel: add stack canary to libc partition
User mode needs to be able to read this value in
compiler generated function prologues/epilogues.

Special handling in init.c for arches that use
_data_copy. This happens before _Cstart() gets
called. We need to make sure that the compiler
stack canary checks in _data_copy itself do not
fail.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-22 18:50:43 -05:00
Andrew Boie
17ce822ed9 app_shmem: create generic libc partition
We need a generic name for the partition containing
essential C library globals. We're going to need to
add the stack canary guard to this area so user mode
can read it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-22 18:50:43 -05:00
Aurelien Jarno
992f29a1bc arch: make __ramfunc support transparent
Instead of having to enable ramfunc support manually, just make it
transparently available to users, keeping the MPU region disabled if
unused so as not to waste an MPU region. This however costs 24 bytes of
code area when the MPU is disabled and 48 bytes when it is enabled, and
probably a dozen CPU cycles during boot. I believe this is
acceptable.

Note that when XIP is used, code is already in RAM, so the __ramfunc
keyword does nothing, but does not generate an error.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2019-02-22 11:36:50 -08:00
Aurelien Jarno
eb097bd095 arch: arm: mpu: get the __ramfunc region size from the linker
The linker file defines the __ramfunc_ram_size symbol to get the size
of the __ramfunc_ram section. Use that instead of computing the value at
runtime from the start and end symbols. This saves 16 bytes of code with
CONFIG_RAM_FUNCTION=y.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2019-02-22 11:36:50 -08:00
qianfan Zhao
e1cc657941 arm: Place functions marked __ramfunc into '.ramfunc'
Use __ramfunc to place a function in RAM instead of flash.
Code that, for example, reprograms flash at runtime can't execute
from flash; in that case the code must be placed in RAM.

This commit creates a new section named '.ramfunc' in the linker
scripts; all functions that have the __ramfunc keyword are saved in that
section and will be loaded from flash to SRAM after the system has booted.

Fixes: #10253

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
2019-02-22 11:36:50 -08:00
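
Usage of the keyword introduced here; the function body is illustrative:

/* Code that reprograms flash cannot run from flash itself; __ramfunc
 * places it in the '.ramfunc' section, copied to SRAM at boot.
 */
__ramfunc void flash_write_word(u32_t addr, u32_t val)
{
    /* ... poke the flash controller registers ... */
}
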
Piotr Zięcik
c45961daae power: Rework OS <-> Application interface
This commit simplifies OS <-> Application interface controlling power
management. In the previous approach application-based PM required
overriding sys_suspend() and sys_resume() functions. As these functions
actually implemented power state change, in such case application
basically had to provide own implementation of all PM-related stuff,
which was not portable and hard to maintain.

This commit changes this scheme: The sys_suspend() and sys_resume()
are now system functions while the application could either use
built-in power management policies or provide its own. All details
of power mode switching are now handled by the OS.

Also, this commit cleans up the Kconfig options related to system-level
power management grouping them under common CONFIG_SYS_PM_ prefix.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2019-02-19 13:25:36 -05:00
Carlos Stuart
75f77db432 include: misc: util.h: Rename min/max to MIN/MAX
There are issues using lowercase min and max macros when compiling a C++
application with a third-party toolchain such as GNU ARM Embedded when
using some STL headers i.e. <chrono>.

This is because there are actual C++ functions called min and max
defined in some of the STL headers and these macros interfere with them.
By changing the macros to UPPERCASE, which is consistent with almost all
other pre-processor macros, this naming conflict is avoided.

All files that use these macros have been updated.

Signed-off-by: Carlos Stuart <carlosstuart1970@gmail.com>
2019-02-14 22:16:03 -05:00
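
The renamed macro in use; the wrapper function is hypothetical and the header path is an assumption from this era:

#include <misc/util.h>   /* MIN()/MAX() after this rename; verify path */

/* Uppercase no longer collides with C++ std::min/std::max when STL
 * headers such as <chrono> are in play.
 */
static size_t clamp_len(size_t len, size_t cap)
{
    return MIN(len, cap);   /* formerly min() */
}
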
Andy Ross
86380483da kernel/work_q: Fix block-in-spinlock bug
Work queues are implemented in terms of k_queue objects which provide
their own synchronization.  In particular insertion is potentially
blocking and always acts as a reschedule point, which means that it
must not be called with spinlocks held.

Release the lock first, and do a little cleanup of the resulting
k_delayed_work_submit_to_queue() logic.

Fixes #13411

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-14 19:45:20 -05:00
Andrew Boie
2cfeba8507 x86: implement interrupt stack trampoline
Upon hard/soft irq or exception entry/exit, handle transitions
off or onto the trampoline stack, which is the only stack that
can be used on the kernel side when the shadow page table
is active. We swap page tables when on this stack.

Adjustments to page tables are now as follows:

- Any adjustments for stack memory access now are always done
  to the user page tables

- Any adjustments for memory domains are now always done to
  the user page tables

- With KPTI, resetting a page now clears the present bit

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-14 12:46:36 -05:00
Ioannis Glaropoulos
228702e6e1 kernel: minor syntax fix in Kconfig
Minor style (syntax) fix in the help text of the
config symbol EXECUTION_BENCHMARKING.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-02-12 08:29:33 -06:00
Piotr Zięcik
9cc63e07e4 power: Fix naming of Kconfig options controlling deep sleep states
This commit changes the names of SYS_POWER_DEEP_SLEEP* Kconfig
options in order to match SYS_POWER_LOW_POWER_STATE* naming
scheme.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2019-02-12 07:46:32 -05:00
Piotr Zięcik
7a49356c77 power: Fix naming of Kconfig options controlling low power states
The names SYS_POWER_LOW_POWER_STATE_SUPPORTED and SYS_POWER_LOW_POWER_STATE
suggest one low power state, but these options control multiple
low power states. This commit uses the plural in the names to indicate
that.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2019-02-12 07:46:32 -05:00
Andy Ross
1202810119 kernel/sched: _thread_priority_set needs to be sched_lock aware
This API doesn't use the normal thread priority comparison itself, so
doesn't get the magic that thread_base.prio provides.  If called when
another thread should be run, this would preempt the current thread
always, even if the scheduler lock was taken.

That was benign until the recent spinlockification exposed it: a mutex in
the philosophers test run in preempt_only mode would swap away while
holding a spinlock (which used to work with irq locks) and fail later
with a "recursive" spinlock assert.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
8a3d57b6cc kernel/userspace: Spinlockification
This port is a little different.  Most subsystem synchronization uses
simple critical sections that can be replaced with global or
per-object spinlocks.  But the userspace code was heavily exploiting
the fact that irq_lock was recursive and could be taken at any time.
So outer functions were doing locking and then calling into inner
helpers that would take their own lock (because they were called from
other contexts that did not lock).

Rather than try to rework this right now, this just creates a set of
spinlocks corresponding to the recursive states in which they are
taken, to preserve the existing semantics exactly.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
b29fb220b1 kernel/timer: Spinlockify
Simple global lock around the timer API.  Actually a lot of this usage
was using needless vestigial locking around existing scheduler and
timeout APIs that are now internally synchronized.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
f582b55dd6 kernel/pipe: Spinlockify
One spinlock per pipe object.  Also removed some vestigial locking
around _ready_thread().  That call is internally synchronized now.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
d27d4e6af2 kernel/sched: Remove remaining irq_lock use
The k_sleep() locking was actually to protect the _current state from
preemption before the context switch, so document that and replace
with a spinlock.  Should probably unify this with the rather cleaner
logic in pend_curr(), but right now "sleeping" and "pended" are
needlessly distinct states.

And we can remove the locking entirely from k_wakeup().  There's no
reason for any of that to need to be synchronized.  Even if we're
racing with other thread modifications, the state on exit will be a
runnable thread without a timeout, or whatever timeout/pend state the
other side was requesting (i.e. it's a bug, but not one solved by
synchronization).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
be03dbd4c7 kernel/msg_q: Spinlockify
One lock per msgq.  Straightforward synchronization.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
f0933d0ded kernel/stack: Spinlockify
One lock per stack.  Straightforward synchronization.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
9eeb6b8779 kernel/mbox: Spinlockify
Straightforward per-struct-k_mbox lock.  Nothing changes in locking
strategy.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
7df0216d1e kernel/mutex: Spinlockify
Use a subsystem lock, not a per-object lock.  Really we want to lock
at mutex granularity where possible, but (1) that has non-trivial
memory overhead vs. e.g. directly spinning on the mutex state and (2)
the locking in a few places was originally designed to protect access
to the mutex *owner* priority, which is not 1:1 with a single mutex.

Basically the priority-inheriting mutex code will need some rework
before it works as a fine-grained locking abstraction in SMP.

Note that this fixes an invisible bug: with the older code,
k_mutex_unlock() would actually call irq_unlock() twice along the path
where there was a new owner, which is benign on existing architectures
(so long as the key argument is unchanged) but was never guaranteed to
work.  With a spinlock, unlocking an unlocked/unowned lock is a
detectable assertion condition.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
603ea42764 kernel/queue: Spinlockify
Straightforward port.  Each struct k_queue object gets a spinlock to
control obvious data ownership.

Note that this port actually discovered a preexisting bug: the -ENOMEM
case in queue_insert() was failing to release the lock.  But because
the tests that hit that path didn't rely on other threads being
scheduled, they ran to successful completion even with interrupts
disabled.  The spinlock API detects that as a recursive lock when
asserts are enabled.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
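
The preexisting bug found here, sketched as a fragment; the variable and field names are illustrative:

/* Inside queue_insert(): before this port, the error path returned
 * with the lock still held. Release it on every exit path.
 */
if (anode == NULL) {
    k_spin_unlock(&queue->lock, key);   /* was missing */
    return -ENOMEM;
}
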
Andy Ross
f6521a360d kernel/thread_abort: Remove needless locking
The two APIs protected by this lock are themselves internally
synchronized.  Replace the irq_lock with a spinlock anyway, because
what I think it's doing is trying to prevent a race where something
else like an ISR or something it wakes up mucks with the thread before
this completes.  Seems fragile on SMP as it stands, but this preserves
behavior on uniprocessor architectures.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
c0bdcbaaf8 kernel/mem_slab: Spinlockify
Use a subsystem lock instead of a per-slab lock for now.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
e456d0f7dd kernel/thread: Spinlockify
Straightforward spinlock around the global thread state.  Two changes
to the locking strategy were needed:

1. There was a needless recursive lock taken in schedule_new_thread().
This is only ever invoked in circumstances where the lock was already
held, or where there is no need for internal synchronization.

2. The recursive irq_lock() around the loop that spawns the initial
static threads (which happens at the start of main thread execution)
was removed.  Most of the job (i.e. making sure the threads don't run
before the loop is finished) was already duplicated by the sched_lock
it was already taking, and the attempt to promise that all the
timeouts happen on the same tick is already true by construction at
system startup on uniprocessor systems, and not possible to guarantee
at all under SMP (where other CPUs can take that timer interrupt).  We
don't document or test for this feature, so don't try to be fancy.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
84b47a9290 kernel/mempool: Spinlockify
Really the locking in this file is vestigial.  It only exists because
the scheduler's _unpend_all() call to wake up everyone waiting on a
wait_q is unsynchronized, because it was written to assume
irq_lock-style-locking.  It would be cleaner to put that locking into
the wait_q itself and/or use the scheduler's subsystem lock.  But it's
not clear there's any performance benefit, so let's stick with the
more easily verifiable change first.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
f2b1a4bb64 kernel/poll: Spinlockify
Poll gets a single subsystem lock for now.  The existing locking in
Ben's code is subtle, being used both for latency control and for
critical section protection.  So getting each k_poll_event to use a
separate lock will require care and a little logic change.  Do the
simple version for now, which still works to decouple it from the
global lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
1bf9bd04b1 kernel: Add _unlocked() variant to context switch primitives
These functions, for good design reasons, take a locking key to
atomically release along with the context switch.  But there's still a
common pattern in code to do a switch unconditionally by passing
irq_lock() directly.  On SMP that's a little hurtful as it spams the
global lock.  Provide an _unlocked() variant for
_Swap/_reschedule/_pend_curr for simplicity and efficiency.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
da37a53a54 kernel/k_sem: Spinlockify
Switch semaphores to use a subsystem spinlock instead of the system
irqlock.

Note that this is only "half way there".  Semaphores will no longer
contend with other irqlock users on SMP systems, but all semaphores
are still sharing the same lock.  Really we want semaphores to be
independently synchronized, but adding 4 bytes to every one (there are
a LOT of these things) for a separate spinlock is too much to pay.

Rather, a proper SMP-aware implementation would spin on the count
variable directly.  But let's not rock that boat quite yet.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
ec554f44d9 kernel: Split reschedule & pend into irq/spin lock versions
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch.  The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional locking.

Just refactoring.  No logic changes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
04382b9a2a kernel/mem_domain: Spinlockify
Simple locking requirements here mean we can just use a single
subsystem lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
32a29d2805 kernel/atomic_c: Spinlockify
Mostly useless patch.  All architectures have their own code for
atomic operations and don't use this fallback.  Still, it's a trivial
locking setup and we might as well.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
a37a981b21 kernel/work_q: Spinlockify
Each work_q object gets a separate spinlock to synchronize access
instead of the global lock.  Note that there was a recursive lock
condition in k_delayed_work_cancel(), so that's been split out into an
internal unlocked version and the API entry point that wraps it with a
lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
5aa7460e5c kernel/spinlock: Move validation out of header inlines
The validation checking recently added to spinlocks is useful, but
requires kernel-internals like _current and _current_cpu in a header
context that tends to be needed before those are declared (or where we
don't want them declared), and is causing big header dependency
headaches.

Move it to C code, it's just a validation tool, not a performance
thing.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
aa6e21c24c kernel: Split _Swap() API into irqlock and spinlock variants
We want a _Swap() variant that can atomically release/restore a
spinlock state in addition to the legacy irqlock.  The function as it
was is now named "_Swap_irqlock()", while _Swap() now refers to a
spinlock and takes two arguments.  The former will be going away once
existing users (not that many!  Swap() is an internal API, and the
long port away from legacy irqlocking is going to be happening mostly
in drivers) are ported to spinlocks.

Obviously on uniprocessor setups, these produce identical code.  But
SMP requires that the correct API be used to maintain the global lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
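
The resulting pair of signatures, roughly; these are kernel-internal APIs, so the exact types are an assumption from the description above:

/* Legacy path: release an irq_lock() key across the context switch. */
int _Swap_irqlock(unsigned int key);

/* New path: atomically release a spinlock across the switch. */
int _Swap(struct k_spinlock *lock, k_spinlock_key_t key);
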
Andy Ross
53cae5f471 kernel: Use _reschedule() instead of _Swap() where possible
These two spots were duplicating logic that is already done inside
_reschedule(), which is the cleaner, less dangerous API.  Use it where
possible when outside the scheduler internals.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
dc0713a706 kernel: Cleanup. Remove redundant test when calling _Swap()
_Swap() must already handle the case where _get_next_ready_thread() is
the same as _current.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Kumar Gala
bfaaa6bbe9 dts: Convert CONFIG_CCM to DT_CCM
Since we now do DTS before Kconfig, we should try to remove dts from
creating Kconfig-namespaced symbols and leave that to Kconfig.  So
rename CONFIG_CCM_<FOO> to DT_CCM_<FOO>.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-02-08 10:29:57 -06:00
Piotr Zięcik
d02e3ebd4c power: Eliminate SYS_PM_* power states.
The power management framework used two different abstractions
to describe power states. The SYS_PM_* abstraction gave coarse information
about what kind of power state (low power or deep sleep) was used,
while the SYS_POWER_STATE_* abstraction provided information
about the particular power mode.

This commit removes the SYS_PM_* abstraction as the same
information is already carried in SYS_POWER_STATE_*.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2019-02-08 09:07:00 -05:00
Andrew Boie
41f6011c36 userspace: remove APPLICATION_MEMORY feature
This was never a long-term solution, more of a gross hack
to get test cases working until we could figure out a good
end-to-end solution for memory domains that generated
appropriate linker sections. Now that we have this with
the app shared memory feature, and have converted all tests
to remove it, delete this feature.

To date all userspace APIs have been tagged as 'experimental'
which sidesteps deprecation policies.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Andrew Boie
4b4f773484 libc: set up memory partitions
* Newlib now defines a special z_newlib_partition containing
  all globals relevant to newlib. Most of these are in libc.a
  with a heap tracking variable in newlib's hooks.

* Both C libraries now expose a k_mem_partition containing the
  bounds of the malloc heap arena. Threads that want to use
  libc malloc() will need to add this to their memory domain.

* z_newlib_get_heap_bounds has been removed, in favor of the
  memory partition for the heap arena

* ztest now includes the C library partitions in its memory
  domain.

* The mem_alloc test now runs in user mode to prove that this
  all works for both C libraries.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
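
How a user thread would pick up the malloc arena, sketched; the partition symbol name should be verified against the C library's headers:

extern struct k_mem_partition z_malloc_partition;   /* exposed by the libc */

static struct k_mem_domain app_domain;
static struct k_mem_partition *app_parts[] = { &z_malloc_partition };

void setup_domain(void)
{
    /* Threads that want libc malloc() add the heap partition to
     * their memory domain before dropping to user mode.
     */
    k_mem_domain_init(&app_domain, ARRAY_SIZE(app_parts), app_parts);
    k_mem_domain_add_thread(&app_domain, k_current_get());
}
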
Andrew Boie
7ecc359f2c userspace: do not auto-cleanup static objects
Dynamic kernel objects enforce that the permission state
of an object is also a reference count; using a kernel
object without permission regardless of caller privilege
level is a programming bug.

However, this is not the case for static objects. In
particular, supervisor threads are allowed to use any
object they like without worrying about permissions, and
the logic here was causing cleanup functions to be called
over and over again on kernel objects that were actually
in use.

The automatic cleanup mechanism was intended for
dynamic objects anyway, so just skip it entirely for
static objects.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Andy Gross
2e8cdc1e7f kernel: Enforce k_mem_slab block size alignment
This patch puts checks in place to ensure that callers to the k_mem_slab
APIs provide word aligned block sizes.  If this is not done, this can
result in unaligned accesses and subsequent crashes.

Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-02-06 07:18:45 -05:00
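
The kind of check added, sketched; the assertion text and exact placement are illustrative:

/* Block sizes must be word aligned, or the freelist pointers stored
 * inside each free block would themselves be unaligned.
 */
__ASSERT((slab->block_size & (sizeof(void *) - 1)) == 0U,
         "k_mem_slab block size must be word aligned");
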
Ioannis Glaropoulos
6c54cac73d kernel: mem_domain: extend sane_partition for non-overlapping regions
This commit extends the implementation of sane_partition(..) in
kernel/mem_domain.c so that it generates an ASSERT if partitions
inside a mem_domain overlap. This extension is only implemented
for the case when the MPU requires non-overlapping regions.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-02-05 09:28:59 -08:00
Anas Nashif
427cc77115 kernel: fix smp build on esp32
set_kernel_idle_time_in_ticks is not used in non-SMP code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-02-04 18:16:58 -05:00
Daniel Leung
4bb10eeada kernel/sched: fix CPU mask kconfig typo
The kconfig used in BUILD_ASSERT_MSG() is missing an "S",
so add it back.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2019-02-04 15:53:09 -05:00
Andy Ross
ab46b1b3c5 kernel/sched: CPU mask affinity/pinning API
This adds a simple implementation of SMP CPU affinity to Zephyr.  The
API is simple and doesn't try to invent abstractions like "cpu sets".
Each thread has an enable/disable flag associated with each CPU in the
system, and the bits can be turned on and off (for threads that are
not currently runnable, of course) using an easy three-function API.

Because the implementation picked requires enumerating runnable
threads in priority order looking for one that matches the current CPU,
this is not a good fit for the SCALABLE or MULTIQ scheduler backends,
so it currently can be enabled only for SCHED_DUMB (which is the
default anyway).  Fancier algorithms do exist, but even the best of
them scale as O(N_CPUS), so aren't quite constant time and often
require significant memory overhead to keep separate lists for
different cpus/sets.

The intended use here is for apps that want to "pin" threads to
specific CPUs for latency control, or conversely to prevent certain
threads from taking time on specific CPUs to leave them free for fast
response.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 21:37:24 -05:00
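
The three-function usage pattern, in fragment form; tid and other_tid are hypothetical thread IDs, and the names should be verified against kernel.h:

/* Pin a (not currently runnable) thread to CPU 0. */
k_thread_cpu_mask_clear(tid);             /* disable all CPUs for tid */
k_thread_cpu_mask_enable(tid, 0);         /* then allow only CPU 0 */

/* Keep CPU 1 free of another thread for fast response. */
k_thread_cpu_mask_disable(other_tid, 1);
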
Andy Ross
6d9106f288 kernel/init: Fix dummy thread initialization on SMP systems
When under SMP, _current is a macro that indirects to a CPU-specific
address, and that trick won't work until kernel_arch_init() has
returned.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 19:10:08 -05:00
Andy Ross
bd049626c5 kernel/sched: Limit idle testing in preemption hot path
Idle threads must (for obvious reasons!) always be preemptible from
the perspective of the scheduler.  But when preemptive scheduling is
disabled, they are given a priority of -1, which is the lowest
COOPERATIVE priority.  So the scheduler preemption logic needed an
extra test for this case and couldn't just rely on the existing
priority comparison.  This was a measurable performance loss, as this
is a hot path on existing benchmarks.

Limit that test to circumstances (!CONFIG_PREEMPT_ENABLED) where it's
actually needed.

Longer term it would be better to just force the existence of one
"preemptible" thread priority always, but right now the number of
priorities and the state of the PREEMPT_ENABLED kconfig flag are
linked, and the existing interrupt return code (with no preemption,
you know with certainty which thread you are returning to and can skip
some work) on some platforms fails when I try this.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 15:57:21 -05:00
Andy Ross
1763a017b4 kernel/sched: Simplify init-time dummy thread & scheduling predicate
For historical reasons, some architectures had a valid _current thread
pointer at initialization time and others didn't.  So the scheduler
logic had a test that checks _current vs. NULL every time it needed to
check preemption, when this was only a workaround for initialization
state.

Fix things so that there is a dummy thread always (and clean up the
code to do a struct assignment instead of a memset of bare memory),
and we can remove that test from the scheduler hot path.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 15:57:21 -05:00
Andy Ross
b2791b0ac8 kernel/sched: Force inlining of some routines within the scheduler guts
GCC 6.2.0 is making frustratingly poor inlining decisions with some of
these routines, resulting in an awful lot of runtime calls for code
that is only ever expanded once or twice within the file.

Treat with targeted ALWAYS_INLINEs to force the issue.  The
scheduler code is a hot path.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 15:57:21 -05:00
Andy Ross
eda4c027da misc/dlist: Swap insertion API for a faster one
The sys_dlist_insert_*() functions had a behavior where a NULL
argument for the insertion position to sys_dlist_insert_after/before()
was interpreted as "the end of the list".  We never used that
convention (except in one spot internal to dlist.h which was not
itself used anywhere), and of course already have an API for appending
and prepending to a list.

In practice this was a performance disaster.  The NULL check is
virtually never provable statically by the compiler, so that test and
branch is present always.  And worse, the check and call to another
function was pushing this beyond the complexity limit for gcc to
inline a function (at -Os optimization anyway), forcing us to use
function calls for what should be a ~8 instruction sequence.  The
upshot is that dlist insertions were 2-3x slower than they needed to
be.

Deprecate these older APIs and introduce a new sys_dlist_insert() call
which can be much better optimized.
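
A sketch of a sorted insert with the new call, assuming it keeps the
insert-before-successor semantics; is_before() and the other names are
illustrative:

    static void insert_sorted(sys_dlist_t *list, sys_dnode_t *node)
    {
        sys_dnode_t *pos;

        SYS_DLIST_FOR_EACH_NODE(list, pos) {
            if (is_before(node, pos)) {
                sys_dlist_insert(pos, node); /* insert before pos */
                return;
            }
        }
        sys_dlist_append(list, node); /* nothing later: append at tail */
    }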

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 15:57:21 -05:00
Andy Ross
8b583acf23 kernel/timeout: Fix another recursive spinlock()
The fix in commit e664c78b82 ("kernel/timeout: Fix recursive
spinlock in z_set_timeout_expiry()") missed a spot that had also been
introduced with recent locking work.  The new
_get_next_timeout_expiry() implementation takes its own lock, which is
recursive when called from z_clock_announce().  Fix by calling the
wrapped implementation instead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-30 13:29:42 -08:00
Anas Nashif
c0ea505b2c kernel: fix typo in kconfig name
CONFIG_MULTITHREDING -> CONFIG_MULTITHREADING

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-01-30 13:30:17 -05:00
Peter A. Bigot
b4ece0ad44 kernel: timeout: detect inactive timeouts using dnode linked state
Whether a timeout is linked into the timeout queue can be determined
from the corresponding sys_dnode_t linked state.  This removes the need
to use a special flag value in dticks to determine that the timeout is
inactive.

Update _abort_timeout to return an error code, rather than the flag
value, when the timeout to be aborted was not active.

Remove the _INACTIVE flag value, and replace its external uses with an
internal API function that checks whether a timeout is inactive.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-23 20:46:49 +01:00
Peter A. Bigot
4863aa809c kernel: poll: fix double-remove of node
k_poll events are registered in a linked list when their signal
condition has been met.  The code to clear event registration did not
account for events that were not registered, resulting in double-removes
that produced core dumps on native-posix sanitycheck.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-23 20:46:49 +01:00
Peter A. Bigot
25fbe7b60d kernel: timeout: remove local fix for double-remove
Use the new generic capability to detect unlinked sys_dnode_t instances.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-23 20:46:49 +01:00
Peter A. Bigot
692e1033e7 kernel: sched: fix empty list detection
CONTAINER_OF() on a NULL pointer returns some offset around NULL and not
another NULL pointer.  We have to check for that ourselves.

This only worked because the dnode happened to be at the start of the
struct.
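
An illustrative sketch of the guard (names hypothetical):

    sys_dnode_t *n = sys_dlist_peek_head(&q);
    /* CONTAINER_OF(NULL, ...) would be a non-NULL garbage pointer,
     * so test the node before converting it. */
    struct item *it = (n == NULL) ? NULL
                                  : CONTAINER_OF(n, struct item, node);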

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-23 20:46:49 +01:00
Sebastian Bøe
5a58da57fd Kconfig: STACK_CANARIES: Correct the help text
The help text has been stating that CONFIG_STACK_CANARIES will
silently be ignored when the compiler does not support them. But this
is not the desired behaviour of CONFIG_STACK_CANARIES[1].

This patch corrects the help text to state that an error will occur if
this feature is enabled, but not supported.

[1] "I would much rather see the build break if someone tries to
enable the stack canaries, and the compiler doesn't support
it. Because what happens now is that if someone enables this option,
and there is no support, the build will succeed but there are no
actual stack canaries in place, and unless the user is paying close
attention to the cmake test output they will have no idea."
--
https://github.com/zephyrproject-rtos/zephyr/issues/5019

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2019-01-23 09:44:09 +01:00
Sebastian Bøe
1b86fb9da3 cmake: Use variables for target names
There is an effort underway to make most of the Zephyr build scripts
reentrant, meaning the build scripts can be executed multiple times
during the same CMake invocation.

Reentrancy enables several use-cases; the motivating one is the
ability to build several Zephyr executables, or images, for instance a
bootloader and an application.

For build scripts to be reentrant they cannot be directly referencing
global variables, like target names, but must instead reference
variables, which can vary from entry to entry.

Therefore, in this patch, we replace global targets with variables.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2019-01-19 07:21:55 -05:00
Andy Ross
e664c78b82 kernel/timeout: Fix recursive spinlock in z_set_timeout_expiry()
The z_set_timeout_expiry() function was added in part to simplify the
locking strategy, but it missed a case where a function it was calling
was re-locking the same spinlock.  It "works"[1] in uniprocessor
environments, but can be a deadlock in SMP.

Fix this by moving the meat of the function to an unlocked utility,
use that locally, and turn the entry point into one that does locking.
Actually this only gets called from idle now, which is a use case that
will go away when TICKLESS_IDLE is removed as a separate feature (once
you know all timeouts are set tickless, you don't need to set it from
the idle entry at all).

Discovered via lucky inspection.

[1] It doesn't work.  It releases the lock prematurely at the end of
the inner block.  But in practice this wasn't discovered.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-18 06:48:51 -05:00
Peter A. Bigot
bfad9721d2 kernel: remove k_alert API
This API was used in only one place in non-test code, so remove it.

Closes #12232

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-16 21:34:07 -05:00
Adithya Baglody
76ee02b6b3 Gcov: Added Kconfig changes needed by Gcov.
This patch adds the required changes in the Kconfig files.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2019-01-16 06:12:33 -05:00
Adithya Baglody
71e90f98fd Gcov: Enable Code coverage reporting over UART.
This patch provides support for generating Code coverage reports.
The prj.conf needs to enable CONFIG_COVERAGE. Once enabled, the
code coverage data dump now comes via UART.
This data dump on the UART is triggered once the main
thread exits.

The next step is to save this data dump to a file, then run
scripts/gen_gcov_files.py with the serial console log as argument.

The last step is to run gcovr. Use the following cmd:
 gcovr -r . --html -o gcov_report/coverage.html --html-details

Currently supported architectures are ARM and x86.
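
For reference, the prj.conf fragment described above is just:

    CONFIG_COVERAGE=y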

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2019-01-16 06:12:33 -05:00
Andy Ross
7fb8eb57e8 kernel/sched: SWAP_NONATOMIC workaround for timeslicing
Timeslicing works by removing the _current thread from the run queue
and re-adding it at the end of its priority.  On systems with a
_Swap() that can be preempted by a timer interrupt, that means it's
possible for the timeslice to try to slice out a thread that had
already pended itself!

This behavior used to be benign (or at least undetectable) as the
duplicated list operations were idempotent.  But now the dlist code is
stricter about correctness and has exposed the bug -- it will blow up
if you try to remove an already-removed list node.

Fix (on affected platforms) by stashing the _current pointer in
_pend_current_thread() that is checked and cleared in the timer
interrupt.  If we discover we're trying to interrupt a thread that's
already interrupted itself, we can safely exit z_time_slice() as a
noop.  The timeslicing bookkeeping was already done for us underneath
the pend code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-15 13:06:35 +01:00
Andy Ross
23c5a63aa8 kernel/sched: Predicate SWAP_NONATOMIC workaround properly
This is a refactoring of the fix in commit 6c95dafd82 to limit its
application to affected platforms now that the root cause is
understood.

Note that the bug that fix was addressing was rare and seen only
after multi-hour sessions on Michael Scott's test rig.  So if
something regresses, this is where to look!

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-15 13:06:35 +01:00
Andy Ross
7f42dbaf48 kernel: Add CONFIG_SWAP_NONATOMIC flag
On ARM, _Swap() isn't atomic and a hardware interrupt can land after
the (irq_locked) caller has entered _Swap() but before the context
switch actually happens.  This will require some platform-specific
workarounds in a few places in the scheduler.

This commit is just the Kconfig and selection on ARM.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-15 13:06:35 +01:00
Andy Ross
762ff2f428 kernel/swap: Simplify/robustify return value handling
The call to _arch_switch is a giant screaming sign inviting optimizer
bugs.  The code that appears before is what happened long ago when we
were switched out, but the version that EXECUTED just now is actually
in a different thread.  So the assignment to _current before the
switch actually assigned OUR thread (the "new_thread" of the old
context!) to _current.

But obviously the optimizer looks at that code and assumes that the
_current which got assigned to the thread we were switching to long
ago is still correct, and used it when retrieving the swap return
value.

Obviously the real bug here is that the _arch_switch() in question
lacked a memory clobber (and it's getting one).

But we can remove two lines, remove code from inside the interrupt
lock and make the implementation more robust by moving the read to
after the irq_unlock() (which generally also has a memory clobber).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-11 15:18:52 -05:00
Andy Ross
4f911e192f kernel: Add missing include
These files were using z_thread_malloc() without including
kernel_internal.h.  On existing architectures that works due to
transitive includes, but x86_64 has a thinner include layer and
doesn't do it for us.  Include the files required for the APIs we use.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-11 15:18:52 -05:00
Aurelien Jarno
513cceb5d1 kernel: Fix asynchronous event polling interface
Commit 76b3518ce6 ("kernel: Make statements evaluate boolean
expressions") changed the type of is_polling in the struct _poller
from int to bool. In the conversion a "0" has been changed into "true"
instead of "false". Fix that.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2019-01-09 17:06:08 -05:00
Flavio Ceolin
6a4a86e413 kernel: Change k_is_in_isr to return bool
Change this function to return a boolean type.

MISRA-C rule 14.4
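
A minimal usage sketch with the boolean result:

    if (k_is_in_isr()) {
        /* interrupt context: defer the work to a thread */
    }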

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Flavio Ceolin
09e362e0d0 kernel: Change _is_thread_essential to return bool
Change this function to return a boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Flavio Ceolin
4f2e9a792a kernel: Change is_condition_met signature
Change this function to return a boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Flavio Ceolin
76b3518ce6 kernel: Make statements evaluate boolean expressions
MISRA-C requires that the if statement has essentially Boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Flavio Ceolin
8a1481735b kernel: userspace: Change _thread_idx_alloc to return bool
Make this function return an essential boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Adithya Baglody
392219eab8 kernel: Change the prototype of k_thread_access_grant.
This API was using a variable number of arguments, which is not
allowed according to MISRA C guidelines (rule 17.1). Hence we turn
this API into a macro and use the util macro FOR_EACH_FIXED_ARG
to get the same functionality.

There is one deviation from the old function. The last argument
shouldn't be NULL.
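
A minimal sketch of the macro form (object and thread names are
illustrative; note there is no NULL terminator):

    k_thread_access_grant(&user_thread, &my_sem, &my_mutex, &my_queue);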

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2019-01-03 12:35:14 -08:00
Andy Ross
9eda9350d8 kernel/timeout: Don't reset imminent timeouts
The logic in z_set_timeout_expiry() missed the case where the ticks
argument could be zero (or lower), which can happen naturally due to
timing/interrupt slop.  In those circumstances, it would still try to
reset a timer that was "about to expire at the next tick", which would
run afoul of the drivers' internal decisions about how soon a timer
interrupt could be set, and then get pushed out to the next tick.

Explicitly detect this as an "imminent" predicate to make the logic
clearer.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-03 12:29:02 -05:00
Andy Ross
bb86f2019c kernel/sched: Remove stale comment
The recent change that added a locked z_set_timeout_expiry() API
obsoleted the subtle note about synchronization above
reset_time_slice().  None of that matters any more, the API is
synchronized internally in a conventional way.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-03 12:29:02 -05:00
Andy Ross
71f5e56545 kernel/timeout: Fix "not in list" predication in timeout handling
The use of dticks == INACTIVE to tell whether or not a timeout was
already in the list was insufficient.  There is a time period between
the moment a timeout is removed from the list and the end of its
handler where it is not in the list, yet its list node pointers still
point into it.  Doing things like aborting a thread while that is true
(which can be asynchronous too!)  would corrupt the list even though
all the operations on it were "atomic".

Set the timeout node pointers to nulls atomically when removed, and
check for double-remove conditions (which, again, might be perfectly
OK).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-03 12:29:02 -05:00
Andy Ross
43ab8da953 kernel/timeout: Refactor z_clock_announce() loop
This loop was structured badly, as a while(true) with multiple "exit
if" cases in the body.  It was bad enough that I genuinely fooled
myself into rewriting it, having convinced myself there was a bug in
it when there wasn't.

So keep the rewritten loop which expresses the iteration in a more
invariant way (i.e. "while we have an element to expire" and not "test
if we have to exit the loop").  Shorter and easier.  Also makes the
locking clearer as we can simply release the lock around the callback
in a natural/obvious way.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-03 12:29:02 -05:00
Sebastian Bøe
204f05b23a kconfig: Minor comments and 'help' text fixes
Minor comments and 'help' text fixes.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2018-12-30 16:24:50 -05:00
Sebastian Bøe
f42ed32dc5 Kconfig: Hide SMP and USE_SWITCH from unsupported platforms
Don't present USE_SWITCH and SMP to user applications that are
configuring for platforms that do not support SMP or USE_SWITCH.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2018-12-30 16:24:50 -05:00
Sebastian Bøe
21d69579f5 kconfig: Have the 'SMP' option depend on 'USE_SWITCH'
SMP requires the new-style '_arch_switch' to be enabled. To prevent
users from creating invalid configurations where SMP is enabled while
_arch_switch is not, we add a dependency from SMP to USE_SWITCH.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2018-12-30 16:24:50 -05:00
Sebastian Bøe
4019bda695 kconfig: Disable 'RETPOLINE' on unsupported platforms
RETPOLINE has been enabled by default on most platforms, but it is
only supported on X86.

Features should only be enabled if they are supported and active on
the given platform. To rectify this we have RETPOLINE depend on X86,
the only platform on which it is implemented.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2018-12-30 16:24:50 -05:00
Anas Nashif
74a74bb6b8 power: rename api sys_soc -> sys_
The sys_soc prefix is redundant; just call the APIs with sys_*.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-12-28 16:16:28 -05:00
Anas Nashif
9151fbebf2 power: rename APIs and removing leading _
Remove leading underscore from PM APIs. _ was used for internal APIs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-12-28 16:16:28 -05:00
Andrew Boie
74f114caef userspace: easy checking for specific driver
In general driver system calls are implemented at a subsystem
layer. However, some drivers may have capabilities specific to
the hardware not covered by the subsystem API. Such drivers may
want to define their own system calls.

This macro makes it simple to validate in the driver-specific
system call handlers that not only does the untrusted device
pointer correspond to the expected subsystem, initialization
state, and caller permissions, but also that the device object
is an instance of a specific driver (and not just any driver in
that subsystem).

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-12-27 20:31:58 -05:00
Flavio Ceolin
b82a339813 kernel: init: Add nop instruction in main
The main function is just a weak function that can be overridden by
applications if they need to. Add a nop instruction to say explicitly
that this function does nothing.

MISRA-C rule 2.2

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-14 13:17:36 +01:00
Flavio Ceolin
4f6020111c kernel: Use NULL instead of 0
MISRA-C rule 11.9

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-11 14:37:10 -08:00
Anas Nashif
69c758436c doc: add kernel version API to doxygen
Put kernel version API into doxygen and make it available as a
documented API.

Fixes #6319

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-12-08 17:24:53 -05:00
Andrew Boie
a68120de6d kernel: check retval of driver init
If initialization fails, zero the API struct so that
device_get_binding() can't fetch it, and do not mark
the driver object as initialized to user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-12-07 19:33:23 -05:00
Adithya Baglody
91c5b84cd5 kernel: init.c: Added required hooks for the relocation
This patch splits the text section into 2 parts. The first section
will have some info regarding vector tables and debug info. The
second section will have the complete text section.
This is needed to force the required functions and data variables
into the correct locations. This is due to the behavior of the
linker: it will only link once, and hence this text section had to
be split to make room for the generated linker script.

Added a new Kconfig CODE_DATA_RELOCATION which when enabled will
invoke the script, which does the required relocation.

Added hooks inside init.c for bss zeroing and data copy operations.
Needed when we have to copy data from ROM to required memory type.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-12-07 10:32:41 -05:00
Flavio Ceolin
118715c62d misra: Fixes for MISRA-C rule 8.3
MISRA-C says all declarations of an object or function must use the
same name and qualifiers.

MISRA-C rule 8.3

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-07 09:06:34 -05:00
Flavio Ceolin
4b35dd2628 misra: Fixes for MISRA-C rule 8.2
C90 introduced function prototypes, which allow argument types to be
checked against parameter types, though it is not necessary to
specify names for the parameters. MISRA-C requires names for function
prototype parameters; it claims that names can provide useful
information regarding the function interface.

MISRA-C rule 8.2

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-07 09:06:34 -05:00
Flavio Ceolin
26be3355ac kernel: sched: Fix undefined behavior
The order of evaluation of function calls in the arguments of a
function is undefined (32) / unspecified (15-18) in C99.

MISRA-C rule 13.2 does not allow the value of an expression and its
side effects to happen in a non-deterministic order, to avoid these
undefined behaviors.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-07 09:06:34 -05:00
Flavio Ceolin
d7271ec9db kernel: poll: Fix switch usage
According to MISRA-C, an unconditional break statement must
terminate every switch-clause.

MISRA-C rule 16.1 and 16.3

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-07 09:06:34 -05:00
Flavio Ceolin
a42de6466a kernel: queue: Fix MISRA-C violation
MISRA-C requires that the right-hand operand of the && or || operator
not contain persistent side effects.

MISRA-C rule 13.5

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-07 09:06:34 -05:00
Ioannis Glaropoulos
ccf813c22a kernel: mem_domain: remove redundant clearing of mem_partition fields
When a memory partition is removed, it is not required
to clear the start and attr fields, since a free partition
is only indicated by a zero size field. This commit removes
the unnecessary clearing of the start and attr fields.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-12-05 15:15:07 -05:00
Nicolás Bértolo
258fd2dbeb kernel: mutex: delay setting lock_count = 0.
It is necessary to delay setting lock_count = 0 because an unlocking thread
may be swapped out when it calls adjust_owner_prio(). If the thread that starts
running sees lock_count = 0 it will successfully acquire the mutex even though
it is not fully unlocked yet.

Fixes #11798.

Signed-off-by: Nicolás Bértolo <nicolasbertolo@gmail.com>
2018-12-05 11:00:10 +01:00
Patrik Flykt
d0d9eb0e38 kernel: Add 'U' to unsigned variable assignments
Add 'U' to a value when assigning it to an unsigned variable.
MISRA-C rule 7.2

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2018-12-04 22:51:56 -05:00
Peter A. Bigot
1fa00f3b36 kernel: remove outdated comment in _Cstart
The comment explaining why _IntLibInit was being invoked was left in
place after the invocation itself was removed.  Remove it too.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2018-12-03 09:18:06 -08:00
Flavio Ceolin
b7287ceb4e kernel: syscall: Object validation checks boolean statement
The function that checks if an object is valid is essentially a boolean
function. Just changing its return type to reflect it.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-30 08:05:11 -08:00
Flavio Ceolin
80418602ed kernel: sched: Make boolean functions return bool
MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-30 08:05:11 -08:00
Andrew Boie
2b1d54e897 kernel: add user mode work_q capability
This allows for workqueues to be started in user mode.
No additional kernel objects or system calls are defined
other than starting the workqueue in user mode; for
permission purposes the embedded queue and thread objects
are sufficient.
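
A sketch, assuming the k_work_q_user_start() entry point added by this
change; the stack size and priority are illustrative:

    K_THREAD_STACK_DEFINE(wq_stack, 1024);
    static struct k_work_q user_wq;

    k_work_q_user_start(&user_wq, wq_stack,
                        K_THREAD_STACK_SIZEOF(wq_stack), 5);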

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-29 09:21:18 -08:00
Andrew Boie
8acf899a0d workqueues: remove object init calls
k_work and k_work_q are not kernel objects, nor will they
be. k_work_q contains some kernel objects which are tracked
independently.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-29 09:21:18 -08:00
Flavio Ceolin
46715faa5c kernel: Remove _IntLibInit function
There were many platforms where this function was doing nothing. Just
merge its functionality into the _PrepC function.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-28 14:59:10 -08:00
Pawel Dunaj
baea22407d kernel: Always set clock expiry with sync with timeout module
The system must not set the clock expiry via a backdoor, as it may
result in unbounded time drift of all scheduled timeouts.

Fixes: #11502

Signed-off-by: Pawel Dunaj <pawel.dunaj@nordicsemi.no>
2018-11-26 12:24:59 +01:00
Piotr Mienkowski
970aef2905 kernel: ensure System Power Management enables Tickless Idle.
System Power Management is only supported in Tickless Idle mode.
This patch modifies Kconfig dependencies to ensure that the System
Power Management option selects the Tickless Idle one.

Fixes: #11046

Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
2018-11-21 23:16:35 -05:00
Andy Ross
02165d76a0 kernel/timeout: Fix race with clock timeout setting
The call to z_clock_set_timeout() was being made outside the timeout
lock, which can race against other contexts setting sooner-expiring
timeouts.

Also add a long comment to one spot (timeslicing) where this call is
made outside the timeout spinlock (inside the scheduler lock) and why
this is OK.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-11-21 12:52:49 +01:00
Sathish Kuttan
a8aa235d9b kernel: msg_q: k_msgq_peek() implementation
Add an implementation for k_msgq_peek(), which is similar to
k_msgq_get() except that the message is not deleted from the queue.
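
A minimal usage sketch (queue and message types are illustrative):

    struct my_msg msg;

    if (k_msgq_peek(&my_msgq, &msg) == 0) {
        /* msg holds a copy of the head; it is still in the queue */
    }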

Signed-off-by: Sathish Kuttan <sathish.k.kuttan@intel.com>
2018-11-19 17:53:22 -05:00
Andrew Boie
42cfd4ff26 kernel: expose k_busy_wait() to user mode
If we just had the kernel's implementation, we could
just move this to lib/, but possible arch-specific
implementations dictate that we just make this a
syscall.
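
A minimal sketch, now valid from a user thread as well:

    k_busy_wait(100); /* spin for 100 microseconds */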

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-15 16:20:36 -05:00
Andrew Boie
f253d0779d k_mem_slab: track as a kernel object
We aren't going to allow any user mode access to the
k_mem_slab APIs, but in some cases (specifically in the
case of the I2S subsystem) we need to allow user mode
to assign a memory slab to a particular driver.

This will let us verify (in supervisor mode) that a provided
k_mem_slab pointer is really a k_mem_slab, and know its
initialization state, and have permissions assigned to it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-15 16:20:36 -05:00
Ioannis Glaropoulos
d8b51ea9cd kernel: mem_domain: optimize sane partition checking
This commit optimizes the process of checking that the
added partitions in a mem_domain are sane. It places the
sane_partition checking inside the loop of adding the
partitions in the mem_domain, so that the checkings are
not performed twice, and no partition is checked against
itself.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-11-15 08:18:59 -05:00
Ioannis Glaropoulos
3e390ed42d kernel: mem_domain: fix partition end address calculations
This commit fixes the calculations of the partition ending
addresses in two places in the code, according to:
<last> = <start> + <size> - 1. We also rename 'end' to 'last'
to stress that we calculate the last address in the partition.
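
As a one-line restatement of the corrected formula (names
illustrative):

    u32_t last = part_start + part_size - 1; /* last byte, not one past */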

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-11-15 08:18:59 -05:00
Andrew Boie
9d14874db1 kernel: expose device_get_binding() to user mode
User mode may need to use this API to get a handle on
devices by name, expose as a system call. We impose
a maximum name length as the system call handler needs
to make a copy of the string passed in from user mode.
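
A minimal sketch of the call from user mode (the device name is
illustrative):

    struct device *dev = device_get_binding("UART_0");

    if (dev == NULL) {
        /* no such device, or the name exceeded the length limit */
    }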

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-13 19:06:05 -05:00
Andy Ross
1c3051459b kernel/sched: Fix race in k_sched_time_slice_set()
If this function is itself interrupted by a timeslice event, the
slicing state can be corrupted.  Just re-use the scheduler lock
instead of using a new spinlock; this is a low-latency function that
won't deadlock.  Found by inspection.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-11-13 17:10:07 -05:00
Andy Ross
c0a184c067 drivers/timer: Select tickless via driver kconfig flag
Add a TICKLESS_CAPABLE kconfig variable which is used by the kernel to
select tickless mode's default automatically on drivers that support
it (rather than having to set the default per-board).  Select it from
the ARM SysTick and Intel HPET drivers.

Also remove the old qemu_cortex_m3 default settings which this
replaces.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-11-13 17:10:07 -05:00
Flavio Ceolin
22236c9d6d kernel: Make tag identifiers unique
Some places are using the same tag identifier with different types.
This is a MISRA-C violation and makes the code less readable.

MISRA-C rule 5.7

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-06 16:20:15 -05:00
Flavio Ceolin
aecd4ecb8d kernel: Change k_poll_signal api
k_poll_signal was being used by both a struct and a function. Besides
being extremely error prone, this is also a MISRA-C violation.
Change the function name to contain a verb, since it performs an
action, while the struct stays a noun. This pattern must be
formalized and followed across the project.

MISRA-C rules 5.7 and 5.9
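
A sketch under the new naming, assuming the function became
k_poll_signal_raise() while the struct stays k_poll_signal:

    struct k_poll_signal sig;

    k_poll_signal_init(&sig);
    k_poll_signal_raise(&sig, 0x42); /* verb for the action */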

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-04 11:37:24 -05:00
Flavio Ceolin
dfbe03249d kernel: stack: Making if's body a compound statement
MISRA-C rule 15.6

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-04 11:37:24 -05:00
Flavio Ceolin
a406b88fca kernel: Remove duplicated identifier
There were a struct and a variable called _kernel. This is error
prone and a MISRA-C violation. Change the struct to have a unique
identifier.

MISRA-C rule 5.8

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-04 11:37:24 -05:00
Flavio Ceolin
ac14685211 kernel: Delimiting the scope of some variables
According to MISRA-C, an object should be defined at block scope if
it is used in a single function.

MISRA-C rule 8.9

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-04 11:37:24 -05:00
Flavio Ceolin
4369363f6c kernel: mutex: Change variable declaration
This does not violate any MISRA-C rule, though it seems to trigger a
false positive (rule 9.1) in some static analysis tools. Nevertheless,
it is more readable to declare all variables in the same scope
together.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-04 11:37:24 -05:00
Flavio Ceolin
a3dddedab6 kernel: Use distinct macro names
There is a struct and a macro called _ready_q; this is error
prone. Just remove the macro.

MISRA-C rule 5.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-10-31 19:43:47 -04:00
Adithya Baglody
87e592ebda kernel: mutex.c: MISRA C compliance.
This patch fixes a few MISRA issues present in mutex.c.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-31 08:44:47 -04:00
Adithya Baglody
fac6885e55 kernel: alert: Declare tracing variables only when needed.
The tracing variable in alert.c was declared by default. This
should have been declared only when CONFIG_OBJECT_TRACING is set.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-31 08:44:47 -04:00
Adithya Baglody
2a78b8d86f kernel: queue: MISRA C compliance.
This patch fixes a few issues in queue.c. It also changes
the return type of k_queue_alloc_append and k_queue_alloc_prepend
from int to s32_t.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-31 08:44:47 -04:00
Adithya Baglody
8feda92abc kernel: device: MISRA C compliance.
This patch fixes a few issues in device.c.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-31 08:44:47 -04:00
Piotr Zięcik
7700eb2a15 kernel: sched: Make k_sleep() similar to POSIX equivalent
This commit introduces a k_sleep() return value, which provides
information about the actual sleep time. If the returned value is
non-zero, the thread slept for less time than requested, which is
only possible if the thread has been woken up by a k_wakeup() call.
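
A minimal sketch of the new return value (durations in ms here;
names illustrative):

    s32_t rem = k_sleep(1000); /* request a 1000 ms sleep */

    if (rem != 0) {
        /* woken early by k_wakeup(); rem ms were left unslept */
    }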

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-10-30 18:27:31 +01:00
Marek Pieta
e87193896a subsys: debug: tracing: Fix thread tracing
This change fixes an issue with thread execution tracing.

Signed-off-by: Marek Pieta <Marek.Pieta@nordicsemi.no>
2018-10-29 22:09:12 -04:00
Ioannis Glaropoulos
3e02f38a38 kernel: mem_domain: minor typo fixes
Fix a few minor typos in kernel/mem_domain.c
and the respective documentation section.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-10-29 12:34:12 -04:00
Kumar Gala
0776392e4c kernel: msq_q: Fix compile warning
Fix a compile warning if we build using int types defined to match the
compiler.  We get the following warnings:

kernel/msg_q.c: In function ‘_impl_k_msgq_alloc_init’:
kernel/msg_q.c:75:9: warning: passing argument 3 of ‘__builtin_umul_overflow’ from incompatible pointer type [-Wincompatible-pointer-types]
         (u32_t *)&total_size)) {
         ^
kernel/msg_q.c:75:9: note: expected ‘unsigned int *’ but argument is of type ‘u32_t * {aka long unsigned int *}’

__builtin_umul_overflow expects to be passed unsigned int for all its
arguments, so cast to that instead of u32_t.
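
A sketch of one way to satisfy the builtin (context names such as
msg_size/max_msgs are illustrative):

    unsigned int total_size; /* plain unsigned int, not u32_t */

    if (__builtin_umul_overflow(msg_size, max_msgs, &total_size)) {
        return -EINVAL; /* the multiplication overflowed */
    }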

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-10-29 10:52:00 -04:00
Spoorthi K
b6cd192fa5 kernel: sched: Fix compiler warning
Ignore return value of _Swap() as it is not
used anywhere.

Signed-off-by: Spoorthi K <spoorthi.k@intel.com>
2018-10-24 09:48:17 +01:00
Paul Sokolovsky
d91c11f5bf kernel: system_work_q: Set dedicated "sysworkq" name.
Previously, a generic "workqueue" name was used, but there are a few
workqueues in a typical Zephyr setup, and it wasn't possible to
distinguish them.

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2018-10-19 07:58:45 -04:00
Adithya Baglody
4b066212b6 kernel: sem: Fix few MISRA C violations.
This patch fixes a few of the violations inside sem.c.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 12:17:58 -04:00
Adithya Baglody
6176692f4b kernel: ksched.h: Incorrect argument type in _pend_current_thread
In _pend_current_thread the argument key is always an unsigned
integer type, and this function forces it to become a signed
integer. This is dangerous behavior and can't be trusted to
work as expected.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 12:17:58 -04:00
Andy Ross
386894c2fb kernel/timeout: Fix build breakage due to stdio name collision
Duh: "remove()" is a POSIX symbol, and on at least some platforms
stdio.h can be included here out of platform headers causing a name
collision.

Fixes #10669's direct issue, though the broader issue of how to choose
names for statics remains controversial.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-17 12:15:44 -04:00
Paul Sokolovsky
b779ea2d19 kernel: syscall_handler.h: Typo fix in docstring
Should be "fails" instead of "files".

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2018-10-17 10:32:10 -04:00
Adithya Baglody
28080d3896 kernel: MISRA C: Fixes a few MISRA C issues.
MISRA C guideline compliance for various rules.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 07:59:51 -04:00
Adithya Baglody
1424561252 kernel: sched: Fixed incorrect argument type of _reschedule()
This API shouldn't take an int type; instead it should take
u32_t. This argument has to be similar to irq_lock() and
irq_unlock().

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 07:59:51 -04:00
Andy Ross
7a035c0dc7 kernel/sched: Fix timeslice accounting for already-elapsed ticks
In tickless mode, not all elapsed ticks may have been announced yet,
so future z_time_slice() calls will include "extra" ticks that we have
to account for when setting up the slice count.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
7617371ecc kernel/timeout: Clamp ticks argument to lower bound
Our funny convention holds that passing ticks==1 to _add_timeout()
means "at the next tick".  But that means that 1, 0, and all negative
numbers are expected to behave the same.  In ticked mode, that's fine
because it will, after all, expire at the next tick.

But in tickless, the next announcement may be for several ticks, and
that zero will appear to expire "before" the next tick in the
consumption loop.

Make sure all "next tick" expirations look the same.
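
An illustrative sketch of the clamp (not the exact kernel code):

    ticks = MAX(ticks, 1); /* 1, 0 and negatives all mean "next tick" */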

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
9ce9677888 kernel/timeout: Fix elapsed logic
When fetching the next timeout to expire, the value is relative to the
last announced tick, so you subtract the timer-provided elapsed time
to get the true delta from "now".  When adding a new timeout, you
*have* a value relative to now, so you compute the delta vs. the last
announced tick by adding the elapsed() time.  Duh.
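
An illustrative restatement (elapsed() is the driver-reported ticks
since the last announcement; names hypothetical):

    next_from_now   = next_delta_in_queue - elapsed(); /* reading   */
    delta_from_last = requested_ticks     + elapsed(); /* inserting */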

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
1cfff07480 kernel/timeout: Fix announcement tick logic
This was wrong in subtle ways.  In tickless mode it's possible to get
an announcement for multiple ticks at a time and have multiple
callbacks to execute that were technically scheduled at different
times.  We want to fix the current tick at the value represented by
the currently-executing callback's EXPIRATION (even if we missed it!),
so that any new timeouts it sets (c.f. a k_timer period) happen at the
right point, in phase with the expected series.  In single-tick mode
the code ends up the same always, so the bug wasn't visible.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
d8421adadd kernel/timeout: Fix synchronization in z_tick_get_32()
The previous comment correctly and carefully explained why the 64 bit
value in curr_tick doesn't require locking when reading only the low
32 bits.

It completely missed the fact that the calculation of elapsed time and
the read of curr_tick ABSOLUTELY DO require locking, because the
former is expressed in terms of the latter.  This was always a bug, even
in the old code, but never witnessed because we ran so little software
in tickless mode.
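
A sketch of the locked read (lock and variable names as assumed from
this series; illustrative):

    k_spinlock_key_t key = k_spin_lock(&timeout_lock);
    u32_t now = (u32_t)curr_tick + (u32_t)elapsed();

    k_spin_unlock(&timeout_lock, key);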

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
1129ea9394 kernel/sched: Fix timeslicing predicate
It's possible to interrupt a thread that has already scheduled a
timeout.  Really this is a race against the usage of
_add_thread_timeout() and needs some design work to provide proper
locking (which is a distinct requirement from the scheduler lock and
timeout lock!), as the users of that API are spread around the kernel.
But existing usage always schedules the timeouts first, so this is
safe.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
2dd9e2cad4 kernel/sched: Remove spurious locking
The timeout APIs are properly synchronized now.  This irq_lock() (and
the comment explaining it) isn't needed anymore.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
08397277fc kernel/kconfig: Move TICKLESS options out of power management tree
These options are rapidly becoming a default configuration, which is
complicated by having them be hidden inside of a SYS_POWER_MANAGEMENT
variable that has to be enabled first.  Put them at the top level of
the kernel config.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
cfe62038d2 kernel: Checkpatch fixups
I was pretty careful, but these snuck in.  Most of them are due to
overbroad string replacements in comments.  The pull request is very
large, and I'm too lazy to find exactly where to back-merge all of
these.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
987c0e5fc1 kernel: New timeout implementation
Now that the API has been fixed up, replace the existing timeout queue
with a much smaller version.  The basic algorithm is unchanged:
timeouts are stored in a sorted dlist with each node holding a delta
time from the previous node in the list; the announce call just walks
this list pulling off the heads as needed.  Advantages:

* Properly spinlocked and SMP-aware.  The earlier timer implementation
  relied on only CPU 0 doing timeout work, and on an irq_lock() being
  taken before entry (something that was violated in a few spots).
  Now any CPU can wake up for an event (or all of them) and everything
  works correctly.

* The *_thread_timeout() API is now expressible as a clean wrapping
  (just one liners) around the lower-level interface based on function
  pointer callbacks.  As a result the timeout objects no longer need
  to store backpointers to the thread and wait_q and have shrunk by
  33%.

* MUCH smaller, to the tune of hundreds of lines of code removed.

* Future proof, in that all operations on the queue are now fronted by
  just two entry points (_add_timeout() and z_clock_announce()) which
  can easily be augmented with fancier data structures.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
52e444bc05 kernel: Move timeout_remaining API
_timeout_remaining_get() was a function on a struct _timeout, doing
iteration on the timeout list, but it was defined in timer.c (the
higher level abstraction).

Move it to where it belongs.  Also have it return ticks instead of ms
to conform to scheme in the rest of the timeout API.  And rename it to
a more standard zephyr name.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
96013b0375 system_timer.h: Change "now" uptime API to be simpler for drivers
The current z_clock_uptime() call (recently renamed from
_get_elapsed_program_time) requires the driver to track a full 64 bit
uptime value in ticks, which is entirely separate from the one the
kernel is already keeping.

Don't do that.  Just ask the drivers to track uptime since the last
call to z_clock_announce(), since that is going to map better to
built-in hardware capability.

Obviously existing drivers already have this feature, so they're
actually getting slightly larger in order to implement the new API in
terms of the old one.  But future drivers will thank us.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
fe82f1c2af kernel/timeout: Refactor API
Add the callback parameter to add_timeout(), and remove the thread
argument.  Now the "low level" timeout API can be expressed without
reference to threads.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
5d203523b6 kernel/timeout: Eliminate wait_q parameters from API
Now that this is known to be an unused value, remove it from the API.
Note that this caught a few spots where we were passing values (a
non-NULL wait_q with a NULL thread handle) that were always being
ignored before.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
d61b1f8ef8 kernel/timeout: Remove timeout wait_q field
Per previous patch, this is known to be identical with
thread->pended_on.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
15d520819d kernel/timeout: Prepare unification of timeout/thread wait_q fields
The existing timeout API wants to store a wait_q on which the thread
is waiting, but it only uses that value in one spot (and there only as
a boolean flag indicating "this thread is waiting on a wait_q").

As it happens threads can already store their own backpointers to a
wait_q (needed for the SCALABLE scheduler backend), so we should use
that instead.

This patch doesn't actually perform that unification yet.  It
reorganizes things such that the pended_on field is always set at the
point of timeout interaction, and adds a bunch of asserts to make 100%
sure the logic is correct.  The next patch will modify the API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
2ae8f50936 kernel/include: Move stubs for timeout functions to their declarations
The timeout_q.h scheme, where it declared real functions but the
stubs for when there was no clock were in wait_q.h, was senselessly
weird.  Put them in the same file.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
9098a45c84 kernel: New timeslicing implementation
Instead of checking every time we hit the low-level context switch
path to see if the new thread has a "partner" with which it needs to
share time, just run the slice timer always and reset it from the
scheduler at the points where it has already decided a switch needs to
happen.  In TICKLESS_KERNEL situations, we pay the cost of extra timer
interrupts at ~10Hz or whatever, which is low (note also that this
kind of regular wakeup architecture is required on SMP anyway so the
scheduler can "notice" threads scheduled by other CPUs).  Advantages:

1. Much simpler logic.  Significantly smaller code.  No variance or
   dependence on tickless modes or timer driver (beyond setting a
   simple timeout).

2. No arch-specific assembly integration with _Swap() needed

3. Better performance on many workloads, as the accounting now happens
   at most once per timer interrupt (~5 Hz) and at true rescheduling
   points, not on every unrelated context switch and interrupt return.

4. It's SMP-safe.  The previous scheme kept the slice ticks as a
   global variable, which was an unnoticed bug.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
8b54953e4b kernel/sys_clock: Fix build when !SYS_CLOCK_EXISTS
This got broken.  Add some #ifery to handle the case.  Not clean, will
clean up in a future pass once the API is final.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
25863549be kernel: Remove clock_always_on control from k_busy_wait()
This feature was a useless noop based on mistaken API understanding.

The idea seems to have been that k_busy_wait() included guards to
ensure "clock_always_on" was true duing the loop, presumably because
the original author was afraid that "turning the clock off" would
affect the operation of k_cycle_get_32().

Then later someone came around and "optimized" this for Quark SE,
where the cycle counter is the RTC and unrelated to the timer driver
used by the clock_always_on feature.  (Except even there it presumably
should have been done at the SoC level and not just in the C1000
devboard -- note that Arduino 101 never would have gotten this).

But it was all a mistake: "clock_always_on" has nothing to do with
en/disabling the system cycle timer (which never happens when the
system is active, that's a feature of idle), it's a control over the
delivery of timer interrupts.  And needless to say we don't care about
timer interrupts when we're spinning on a cycle counter.

Yank the whole mess.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
1b3149cea1 kernel/sys_clock.c: Add asserts to watch dueling "set time" APIs
The current API has a rather unfortunate collision between two APIs:
z_clock_announce(), which is called out of the timer interrupt to
inform the kernel of time passage (and which is responsible for
invoking timer callbacks), and z_tick_set(), which is ALSO used by the
timer drivers for... confusing and inconsistent purposes.

This is sort of a mess.  The tick_set API needs to go away, but before
that I'm adding some assertions to at least make sure the existing
drivers are using them consistently.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
1c08aefe56 kernel/timeoutq: Uninline the timeout methods
There was no good reason to have these rather large functions in a
header.  Put them into sys_clock.c for now, pending rework to the
system.

Now the API is clearly visible in a small header.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
7aae75bd1c idle: Fix tickless timeout behavior
If the idle code was detecting that it needed to sleep for less than
CONFIG_SYS_TICKLESS_IDLE_THRESH, then it would never call
z_clock_set_timeout() at all, which means that the system would never
wake up unless it already had a timeout scheduled!  Apparently we
lacked a test case to detect this condition.

Honestly this seems like a crazy feature to me.  There's no benefit in
delivering needless tick announcements.  If the system has the
capacity to enter deeper sleep for long timeouts, that's already
exposed via the PM APIs, the timer subsystem needn't be involved.
But... we actually have a test (tickless_concept) that looks at this,
so support it for now and consider deprecation later.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
d7b35c9bd6 idle: Remove needless "expired" logic in sys_power_save_idle()
This code (just refactored as part of the timer API work) turns out to
be needless.  It's trying to detect the case where we're being asked
to idle for zero time, but that's not possible with a properly
functioning timer driver: the call to z_clock_announce() must happen
out of an interrupt, and this is the idle thread, which must sit below
any possible interrupt priority.  The call to z_clock_uptime() must
not ever return "too late" until after the timer interrupt has fired,
at which point we'll be inspecting the next timeout (which itself is
guaranteed to be in the future for the same reason).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
722a888ef7 timer: Clean up hairy tickless APIs
The tickless driver had a bunch of "hairy" APIs which forced the timer
drivers to do needless low-level accounting for the benefit of the
kernel, all of which then proceeded to implement them via cut and
paste.  Specifically the "program_time" calls forced the driver to
expose to the kernel exactly when the next interrupt was due and how
much time had elapsed, in a parallel API to the existing "what time is
it" and "announce a tick" interrupts that carry the same information.

Remove these from the kernel, replacing them with synthesized logic
written in terms of the simpler APIs.

In some cases there will be a performance impact due to the use of the
64 bit uptime call, but that will go away soon.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
1a1a9539ea include/system_timer.h: Timer API cleanup
Rename timer driver API functions to be consistent.  ADD DOCS TO THE
HEADER so implementations understand what the requirements are.
Remove some unused functions that don't need declarations here.

Also removes the per-platform #if's around the power control callback
in favor of a weak-linked noop function in the driver initialization
(adds a few bytes of code to default platforms -- we'll live, I
think).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
ab488277bc drivers/timer: Unify timeout setting APIs
The existing API had two almost identical functions: _set_time() and
_timer_idle_enter().  Both simply instruct the timer driver to set the
next timer interrupt expiration appropriately so that the call to
z_clock_announce() will be made at the requested number of ticks.  On
most/all hardware, these should be implementable identically.

Unfortunately because they are specified differently, existing drivers
have implemented them in parallel.

Specify a new, unified, z_clock_set_timeout().  Document it clearly
for implementors.  And provide a shim layer for legacy drivers that
will continue to use the old functions.

Note that this patch fixes an existing bug found by inspection: the
old call to _set_time() out of z_clock_announce() failed to test for
the "wait forever" case in the situation where clock_always_on is
true, meaning that a system that reached this point and then never set
another timeout would freeze its uptime clock incorrectly.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
fa99ad66d0 sys_clock: Fix up tick announce API
There were three separate "announce ticks" entry points exposed for
use by drivers.  Unify them to just a single z_clock_announce()
function, making the "final" tick announcement the business of the
driver only, not the kernel.

Note the oddness with "_sys_idle_elapsed_ticks": this was a global
variable exposed by the kernel.  But it was never actually used by the
kernel.  It was updated and inspected only within the timer drivers,
and only so that it could be passed back to the kernel as the default
(actually hidden) argument to the announce function.  Break this false
dependency by putting this variable into each timer driver
individually.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
317178b88f sys_clock: Fix unsafe tick count usage
The system tick count is a 64 bit quantity that gets updated from
interrupt context, meaning that it's dangerously non-atomic and has to
be locked.  The core kernel clock code did this right.

But the value was also exposed to the rest of the universe as a global
variable, and virtually nothing else was doing this correctly.  Even
in the timer ISRs themselves, the interrupts may be themselves
preempted (most of our architectures support nested interrupts) by
code that wants to set timeouts and inspect system uptime.

Define a z_tick_{get,set}() API, eliminate the old variable, and make
sure everyone uses the right mechanism.
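
A minimal sketch of the accessors replacing the global:

    u64_t now = z_tick_get();      /* full 64-bit tick count, locked */
    u32_t now32 = z_tick_get_32(); /* cheap low-32-bit read */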

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
b8ffd9acd6 sys_clock: Make clock_always_on true by default
This flag is an indication to the timer driver that the OS doesn't
care about rollover conditions of the tick count while idling, so the
system doesn't need to wake up once per counter flip[1].  Obviously in
that circumstance values returned from k_uptime_get_32() are going to
be wrong, so the implementation had an assert to check for misuse.

But no one understood that from the docs, so the only place these APIs
were used in practice was as "guards" around code that needed to call
k_uptime_get_32(), even though that's 100% wrong per docs!

Clarify the docs.  Remove the incorrect guards.  Change the flag to
initialize to true so that uptime isn't broken-by-default in tickless
mode.  Also move the implementations of the functions out of the
header, as there's no good reason for these to need to be inlined.

[1] Which can be significant.  A 100MHz ARM using the 24 bit SysTick
    counter rolls over at about 6 Hz, and if it had to come out of
    idle at that rate it would be a significant power issue that would
    swamp the gains from tickless.  Obviously systems with slow
    counters like nRF or 64 bit ones like RISC-V or x86's TSC aren't
    as affected.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
b2e4283555 sys_clock: Make sys_clock_hw_cycles_per_tick() a proper API
This was another "global variable" API.  Give it function syntax too.
Also add a warning, because on nRF devices (at least) the cycle clock
runs in kHz and is too slow to give a precise answer here.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
393ec71ec3 clock: Remove CONFIG_TICKLESS_KERNEL_TIME_UNIT_IN_MICRO_SECS
This was only used in a few places just to indirect the already
perfectly valid SYS_CLOCK_TICKS_PER_SEC value.  There's no reason for
these to ever have been kconfig units, and in fact the distinction
appears to have introduced a hidden/untested bug in the power
subsystem (the two variables were used interchangeably, but they were
defined in reciprocal units!).

Just use "ticks" as our time unit pervasively, and clarify the docs to
explain that.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
220d4f8347 sys_clock.h: Make "global variable" APIs into proper functions
The existing API defined sys_clock_{hw_cycles,ticks}_per_sec as simple
"variables" to be shared, except that they were only real storage in
certain modes (the HPET driver, basically) and everywhere else they
were a build constant.

Properly, these should be an API defined by the timer driver (who
controls those rates) and consumed by the clock subsystem.  So give
them function syntax as a stepping stone to get there.

Note that this also removes the deprecated variable
_sys_clock_us_per_tick rather than give it the same treatment.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Anas Nashif
c77c043071 kernel: remove deprecated k_thread_cancel
Remove deprecated function k_thread_cancel. We now use k_thread_abort.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-10-09 13:58:01 -04:00
Anas Nashif
0a0c8c831f kernel: move to new logger
Use the new logger framework for kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-10-08 17:49:12 -04:00
Flavio Ceolin
18af4c6299 kernel: Fix overflow test problem introduced in 92ea2f9
The builtin function __builtin_umul_overflow returns a boolean and
should not be checked as an integer.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-10-04 05:20:29 -07:00
Flavio Ceolin
061a2c5b63 kernel: mempool: Remove unnecessary condition check
Remove an unnecessary check in k_mem_pool_alloc. The condition is
already checked in the if.

MISRA-C rule 14.3

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 07:58:19 +05:30
Flavio Ceolin
3e97acc7f2 kernel: Sanitize if else statement according with MISRA-C
A final else statement must be provided when an if statement is
followed by one or more else if.

MISRA-C rule 15.7

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 07:58:19 +05:30
Flavio Ceolin
0b14866437 kernel: sys_clock: Remove unnecessary if
When delta_ticks_from_prev is not 0, the variable ticks will necessarily
be less than or equal to 0, so the if statement checking that is not
necessary.

MISRA-C rule 15.7

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 07:58:19 +05:30