Commit Graph

142 Commits

Author SHA1 Message Date
Danny Oerndrup
c9d78401cc spinlock: Make SPIN_VALIDATE a Kconfig option.
SPIN_VALIDATE is, as before, enabled by default when there are fewer
than 4 CPUs and either no flash or a flash size greater than 32 kB.

Small targets, which need to have asserts enabled, can choose whether to
enable the spinlock validation and thereby decide whether the added
overhead is acceptable or not.

Signed-off-by: Danny Oerndrup <daor@demant.com>
2019-12-20 19:51:16 -05:00
Andrew Boie
b5c681071a kernel: don't use u32_t for data sizes
Use a size_t instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-12-12 14:48:42 -08:00
Andrew Boie
e48ed6a980 kernel: use uintptr_t for kobject data
This has to be wide enough to store a pointer.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-12-12 14:48:42 -08:00
Andy Ross
50d0942f5e kernel/thread: Cancel timeouts on k_thread_suspend(), make schedule point
When suspending a thread, cancel any pending timeouts which might wake
it up unexpectedly.  Also, make suspending the current thread
(specifically) a schedule point, as callers are clearly going to
expect that to be synchronous.

Also fix a documentation weirdness.  The phrasing in the earlier docs
for k_thread_suspend() was confusing: it could be read either as
documenting the current (essentially buggy) behavior that threads
will "wake up" due to preexisting timeouts, OR as meaning that thread
timeouts will continue to be tracked, so that resuming a thread that
was sleeping would leave it sleeping until the timeout expires
(something that has never been implemented: k_sleep() is implemented on
top of suspend).  Rewrite to document what we actually implement.

Fixes #20033
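
For illustration (a sketch, not code from this change; the thread handle
and delay are arbitrary), the documented behavior is now:

  #include <kernel.h>

  /* "worker" is assumed to be a thread currently blocked in k_sleep(). */
  void pause_and_resume(k_tid_t worker)
  {
      k_thread_suspend(worker);  /* any pending sleep timeout is cancelled */
      k_sleep(500);              /* do something else for a while          */
      k_thread_resume(worker);   /* the old k_sleep() deadline is NOT re-armed */
  }

  void park_myself(void)
  {
      /* Suspending the current thread is a schedule point and takes
       * effect synchronously; execution continues only after another
       * thread resumes us. */
      k_thread_suspend(k_current_get());
  }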

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-11-25 19:12:05 -05:00
Flavio Ceolin
91fd6d0866 kernel: thread: Fix randomness problem with stack pointer random
On some platforms the size of size_t can be different from 4 bytes. Use
sys_rand_get() to properly fill this variable.
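
A sketch of the pattern (the header location is an assumption):

  #include <stddef.h>
  #include <random/rand32.h>   /* assumed home of sys_rand_get() */

  static size_t random_stack_offset(void)
  {
      size_t offset = 0;

      /* sys_rand_get() fills an arbitrary-length buffer, so this works
       * whether size_t is 4 or 8 bytes wide. */
      sys_rand_get(&offset, sizeof(offset));

      return offset;
  }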

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-11-15 13:43:32 -08:00
Andrew Boie
e09a0255da kernel: synchronize irq_offload() access
Entering irq_offload() on multiple CPUs can cause
difficult to debug/reproduce crashes. Demote irq_offload()
to non-inline (it never needed to be inline anyway) and
wrap the arch call in a semaphore.

Some tests which were unnecessarily killing threads
have been fixed; these threads exit by themselves anyway
and we won't leave the semaphore dangling.

The definition of z_arch_irq_offload() moved to
arch_interface.h as it only gets called by kernel C code.
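
For reference, a minimal irq_offload() usage sketch (illustrative only);
the call is now safe from multiple CPUs because it is serialized
internally:

  #include <kernel.h>
  #include <irq_offload.h>

  static void offloaded(void *param)
  {
      ARG_UNUSED(param);
      /* runs in interrupt context */
  }

  void run_in_irq_context(void)
  {
      /* If another CPU calls irq_offload() concurrently, one of them
       * simply waits on the kernel-internal semaphore. */
      irq_offload(offloaded, NULL);
  }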

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-08 15:16:43 -08:00
Andy Ross
8892406c1d kernel/sys_clock.h: Deprecate and convert uses of old conversions
Mark the old time conversion APIs deprecated, leave compatibility
macros in place, and replace all usage with the new API.
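
A sketch of the migration (the surrounding code is illustrative; the new
helpers name the units and rounding explicitly):

  #include <kernel.h>

  void show_conversions(void)
  {
      /* Deprecated style used ad-hoc helpers such as __ticks_to_ms(). */
      u32_t ticks = k_ms_to_ticks_ceil32(100);              /* ms -> ticks, round up */
      u64_t ns = k_cyc_to_ns_floor64(k_cycle_get_32());     /* HW cycles -> ns */

      ARG_UNUSED(ticks);
      ARG_UNUSED(ns);
  }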

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-11-08 11:08:58 +01:00
Andrew Boie
4f77c2ad53 kernel: rename z_arch_ to arch_
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.

This allows our documentation generation to create
online documentation for this set of interfaces,
and this set of interfaces is worth treating in a
more formal way anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 15:21:46 -08:00
Stephanos Ioannidis
37d6241ecf kernel: Un-inline z_new_thread_init.
This commit modifies the z_new_thread_init function, which was
previously declared as ALWAYS_INLINE, to be a normal function.

The z_new_thread_init function is only called by the z_arch_new_thread
function and, since this is not a performance-critical function, there
is no good justification for inlining it.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-06 16:07:32 -08:00
Stephanos Ioannidis
2d7460482d headers: Refactor kernel and arch headers.
This commit refactors kernel and arch headers to establish a boundary
between private and public interface headers.

The refactoring strategy used in this commit is detailed in the issue

This commit introduces the following major changes:

1. Establish a clear boundary between private and public headers by
  removing "kernel/include" and "arch/*/include" from the global
  include paths. Ideally, only kernel/ and arch/*/ source files should
  reference the headers in these directories. If these headers must be
  used by a component, these include paths shall be manually added to
  the CMakeLists.txt file of the component. This is intended to
  discourage applications from including private kernel and arch
  headers either knowingly or unknowingly.

  - kernel/include/ (PRIVATE)
    This directory contains the private headers that provide private
   kernel definitions which should not be visible outside the kernel
   and arch source code. All public kernel definitions must be added
   to an appropriate header located under include/.

  - arch/*/include/ (PRIVATE)
    This directory contains the private headers that provide private
   architecture-specific definitions which should not be visible
   outside the arch and kernel source code. All public architecture-
   specific definitions must be added to an appropriate header located
   under include/arch/*/.

  - include/ AND include/sys/ (PUBLIC)
    This directory contains the public headers that provide public
   kernel definitions which can be referenced by both kernel and
   application code.

  - include/arch/*/ (PUBLIC)
    This directory contains the public headers that provide public
   architecture-specific definitions which can be referenced by both
   kernel and application code.

2. Split arch_interface.h into "kernel-to-arch interface" and "public
  arch interface" divisions.

  - kernel/include/kernel_arch_interface.h
    * provides private "kernel-to-arch interface" definition.
    * includes arch/*/include/kernel_arch_func.h to ensure that the
     interface function implementations are always available.
    * includes sys/arch_interface.h so that public arch interface
     definitions are automatically included when including this file.

  - arch/*/include/kernel_arch_func.h
    * provides architecture-specific "kernel-to-arch interface"
     implementation.
    * only the functions that will be used in kernel and arch source
     files are defined here.

  - include/sys/arch_interface.h
    * provides "public arch interface" definition.
    * includes include/arch/arch_inlines.h to ensure that the
     architecture-specific public inline interface function
     implementations are always available.

  - include/arch/arch_inlines.h
    * includes architecture-specific arch_inlines.h in
     include/arch/*/arch_inline.h.

  - include/arch/*/arch_inline.h
    * provides architecture-specific "public arch interface" inline
     function implementation.
    * supersedes include/sys/arch_inline.h.

3. Refactor kernel and the existing architecture implementations.

  - Remove circular dependency of kernel and arch headers. The
   following general rules should be observed:

    * Never include any private headers from public headers
    * Never include kernel_internal.h in kernel_arch_data.h
    * Always include kernel_arch_data.h from kernel_arch_func.h
    * Never include kernel.h from kernel_struct.h either directly or
     indirectly. Only add the kernel structures that must be referenced
     from public arch headers in this file.

  - Relocate syscall_handler.h to include/ so it can be used in the
   public code. This is necessary because a lot of user-mode public code
   references the functions defined in this header.

  - Relocate kernel_arch_thread.h to include/arch/*/thread.h. This is
   necessary to provide architecture-specific thread definition for
   'struct k_thread' in kernel.h.

  - Remove any private header dependencies from public headers using
   the following methods:

    * If the dependency is not required, simply omit it
    * If the dependency is required,
      - Relocate a portion of the required dependencies from the
       private header to an appropriate public header OR
      - Relocate the required private header to make it public.

This commit supersedes #20047, addresses #19666, and fixes #3056.
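
A condensed sketch of the resulting include layering (file contents are
simplified; only the relationships described above are shown):

  /* kernel/include/kernel_arch_interface.h  (PRIVATE, kernel-to-arch) */
  #include <kernel_arch_func.h>    /* arch-specific implementations        */
  #include <sys/arch_interface.h>  /* public arch interface comes along    */

  /* include/sys/arch_interface.h  (PUBLIC) */
  #include <arch/arch_inlines.h>   /* public inline implementations        */

  /* include/arch/arch_inlines.h  (PUBLIC) */
  /* selects the matching include/arch/<arch>/arch_inline.h */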

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-06 16:07:32 -08:00
Andrew Boie
cb1dd7465b kernel: remove vestigial printk references
Logging is now used for these situations.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-01 16:15:06 -05:00
Andrew Boie
e1ec59f9c2 kernel: renamespace z_is_in_isr()
This is part of the core kernel -> architecture interface
and is appropriately renamed z_arch_is_in_isr().

References from test cases changed to k_is_in_isr().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
61901ccb4c kernel: rename z_new_thread()
This is part of the core kernel -> architecture interface
and should have a leading prefix z_arch_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andy Ross
6c283ca3d0 kernel/thread: Must always initialize is_idle field
Our thread struct gets initialized piecewise in a bunch of locations
(this is sort of a design flaw).  The is_idle field, which was
introduced to identify idle threads in SMP (where there can be more
than one), was correctly set for idle threads but was being left
uninitialized elsewhere, and in a tiny handful of cases was turning up
nonzero.

The case in pipes. was particularly vexing, as that isn't a thread at
all but one of the "dummy" threads used for timeouts (another design
flaw IMHO).

Get this right everywhere.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
Jim Shu
e124670f0b kernel/spinlock: Fix an SMP race condition in SPIN_VALIDATE
z_spin_lock_valid() reads a shared variable twice to do two checks. If
this variable is modified by another CPU between the two read accesses,
the checked value is inconsistent. This inconsistency causes an error
where CPU0 can pass the check when it doesn't hold the spinlock, because
a zeroed-out thread_cpu value is ambiguous with the CPU0 ID.

Fix the inconsistency by reading the shared variable only once and using
the local copy for both checks.

Fixes #19299.
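
A sketch of the fixed pattern (field access and CPU-id masking follow the
description above; details simplified):

  bool z_spin_lock_valid(struct k_spinlock *l)
  {
      /* Read the shared word exactly once... */
      uintptr_t thread_cpu = l->thread_cpu;

      /* ...then do both checks on the local copy, so a concurrent update
       * from another CPU cannot be observed between them. */
      if (thread_cpu != 0U) {
          if ((thread_cpu & 3U) == _current_cpu->id) {
              return false;
          }
      }
      return true;
  }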

Signed-off-by: Jim Shu <cwshu@andestech.com>
2019-09-26 16:51:38 -04:00
Nicholas Lowell
5b322d9331 debug: tracing: add sys_trace_thread_name_set
Initial thread creation and tracing information
occurs with empty thread names.  For better tracing information,
we need a way to get the actual thread names, if they are set,
in order to better track thread names and their IDs.

Signed-off-by: Nicholas Lowell <nlowell@lexmark.com>
2019-09-19 00:37:35 -04:00
Andy Ross
6564974bae userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words.  So
passing wider values requires splitting them into two registers at
call time.  This gets even more complicated for values (e.g.
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.

Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths.  So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.

Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types.  So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*().  The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function.  It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value; that code gets automatically generated.

This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs.  Future commits will port the less testable code.
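
As a rough illustration of the resulting shape (the syscall "k_foo" and
its argument types are placeholders, not taken from this commit):

  /* Hand-written, type-safe verification function; same signature as the
   * impl function: */
  static inline int z_vrfy_k_foo(struct foo *f, u64_t deadline)
  {
      Z_OOPS(Z_SYSCALL_MEMORY_WRITE(f, sizeof(*f)));
      return z_impl_k_foo(f, deadline);
  }

  /* The generated z_mrsh_k_foo() reassembles "deadline" from two machine
   * words and validates any extra-parameter/return-value pointers before
   * calling z_vrfy_k_foo(); none of that is written by hand. */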

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
Pavlo Hamov
8076c8095b subsystem: kernel_shell: extend thread info
1) Dump the time since the last scheduler call.
Could be handy for tickless kernel debugging;
will indicate that no RTC IRQ is called.

2) Dump the current timeout of each thread.
Could be used to find out when a thread will wake up.

3) Dump a human-friendly thread state.

4) Use shell_print instead of shell_fprintf.

Signed-off-by: Pavlo Hamov <pavlo_hamov@jabil.com>
2019-09-08 12:39:58 +02:00
Andrew Boie
f281b74c56 userspace: set stack object earlier
Populate thread->stack_obj earlier in the thread initialization
process such that it is set when z_new_thread() is called.

There was nothing specific about its position, or the rest of
the code in that CONFIG_USERSPACE block, so just move it all up.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
71ce8ceb18 kernel: consolidate error handling code
* z_NanoFatalErrorHandler() is now moved to common kernel code
  and renamed z_fatal_error(). Arches dump arch-specific info
  before calling.
* z_SysFatalErrorHandler() is now moved to common kernel code
  and renamed k_sys_fatal_error_handler(). It is now much simpler;
  the default policy is simply to lock interrupts and halt the system.
  If an implementation of this function returns, then the currently
  running thread is aborted.
* New arch-specific APIs introduced:
  - z_arch_system_halt() simply powers off or halts the system.
* We now have a standard set of fatal exception reason codes,
  namespaced under K_ERR_*
* CONFIG_SIMPLE_FATAL_ERROR_HANDLER deleted
* LOG_PANIC() calls moved to k_sys_fatal_error_handler()
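
A minimal sketch of an application-side override following the policy
above (the header name and the halt fallback are assumptions; the reason
code and esf type are as introduced here):

  #include <kernel.h>
  #include <fatal.h>   /* assumed home of K_ERR_* and the handler prototype */

  void k_sys_fatal_error_handler(unsigned int reason, const z_arch_esf_t *esf)
  {
      ARG_UNUSED(esf);

      if (reason == K_ERR_KERNEL_OOPS) {
          /* Returning means: abort only the offending thread. */
          return;
      }

      /* Anything else: mimic the default policy and halt the system. */
      for (;;) {
      }
  }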

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Ioannis Glaropoulos
0e67759985 kernel: fix #endif guard error for k_float_disable
The implementation of z_impl_float_disable was misplaced
inside the #ifdef SPIN_VALIDATE. Fixing it.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-07-10 13:44:02 -07:00
Andrew Boie
38129ce1a6 kernel: fix CONFIG_THREAD_NAME from user mode.
This mechanism had multiple problems:

- Missing parameter documentation strings.
- Multiple calls to k_thread_name_set() from user
  mode would leak memory, since the copied string was never
  freed
- k_thread_name_get() returns memory to user mode
  with no guarantees on whether user mode can actually
  read it; in the case where the string was in thread
  resource pool memory (which happens when k_thread_name_set()
  is called from user mode) it would never be readable.
- There was no test case coverage for these functions
  from user mode.

To properly fix this, thread objects now have a buffer region
reserved specifically for the thread name. Setting the thread
name copies the string into the buffer. Getting the thread name
with k_thread_name_get() still returns a pointer, but the
system call has been removed. A new API k_thread_name_copy()
is introduced to copy the thread name into a destination buffer,
and a system call has been provided for that instead.

We now have full test case coverage for these APIs in both user
and supervisor mode.

Some of the code has been cleaned up to place system call
handler functions in proximity with their implementations.
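
A small usage sketch of the resulting API (the buffer-size Kconfig symbol
is an assumption):

  #include <kernel.h>
  #include <sys/printk.h>

  void show_my_name(void)
  {
      char name[CONFIG_THREAD_MAX_NAME_LEN];   /* assumed Kconfig symbol */

      k_thread_name_set(k_current_get(), "worker");

      /* k_thread_name_get() still returns a pointer (no system call);
       * from user mode, copy the name out instead: */
      if (k_thread_name_copy(k_current_get(), name, sizeof(name)) == 0) {
          printk("thread name: %s\n", name);
      }
  }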

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-01 16:29:45 -07:00
Anas Nashif
9ab2a56751 cleanup: include/: move misc/printk.h to sys/printk.h
move misc/printk.h to sys/printk.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif
6ecadb03ab cleanup: include/: move misc/math_extras.h to sys/math_extras.h
move misc/math_extras.h to sys/math_extras.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif
e1e05a2eac cleanup: include/: move atomic.h to sys/atomic.h
move atomic.h to sys/atomic.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif
10291a0789 cleanup: include/: move tracing.h to debug/tracing.h
move tracing.h to debug/tracing.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif
68c389c1f8 include: move system timer headers to include/drivers/timer/
Move internal and architecture-specific headers from include/drivers to
a timer subfolder:

   include/drivers/timer

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-25 15:27:00 -04:00
Ioannis Glaropoulos
a6cb8b06db kernel: introduce k_float_disable system call
We introduce the k_float_disable() system call, to allow threads to
disable floating point context preservation. The system call is
to be used in FP Sharing Registers mode (CONFIG_FP_SHARING=y).
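
Usage is a single call, e.g. (illustrative):

  #include <kernel.h>

  void drop_fp_context(void)
  {
      /* Valid in FP Sharing Registers mode (CONFIG_FP_SHARING=y); the
       * calling thread gives up FP context preservation for itself. */
      int ret = k_float_disable(k_current_get());

      if (ret != 0) {
          /* not supported on this architecture/configuration */
      }
  }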

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-06-12 09:17:45 -07:00
Nicolas Pitre
aa9228854f linker generated list: provide an iterator to simplify list access
Given that the section name and boundary symbols can be inferred from
the struct object name, it makes sense to create an iterator that
abstracts away the access details and reduces the possibility of
mistakes.
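
For illustration (the struct type and loop body are placeholders):

  #include <kernel.h>

  struct my_obj {
      int id;
  };

  void visit_all(void)
  {
      /* The section boundary symbols are inferred from the struct type
       * name by the iterator macro, instead of being open-coded as
       * extern start/end arrays at every call site. */
      Z_STRUCT_SECTION_FOREACH(my_obj, obj) {
          (void)obj->id;
      }
  }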

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-06 14:21:32 -07:00
Nicolas Pitre
0b5d9f71f2 thread_cpu: make it 64-bit compatible
This stores a combination of a pointer and a CPU number in the low
2 bits. On 64-bit systems, the pointer part won't fit in an int.
Let's use uintptr_t for this purpose.
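
The packing being made 64-bit safe looks roughly like this (a sketch;
the helper names are illustrative):

  #include <stdint.h>

  struct k_thread;   /* only used through a pointer here */

  /* The CPU number occupies the low 2 bits, which are always zero in a
   * suitably aligned thread pointer. */
  static uintptr_t pack(struct k_thread *t, unsigned int cpu)
  {
      return (uintptr_t)t | (uintptr_t)(cpu & 3U);
  }

  static struct k_thread *unpack_thread(uintptr_t thread_cpu)
  {
      return (struct k_thread *)(thread_cpu & ~(uintptr_t)3U);
  }

  static unsigned int unpack_cpu(uintptr_t thread_cpu)
  {
      return (unsigned int)(thread_cpu & 3U);
  }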

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-05-30 09:42:23 -04:00
Marc Herbert
4afcc0f8af sanitycheck: CONFIG_TEST_USERSPACE / userspace tag cleanup
- Delete CONFIG_TEST_USERSPACE=n no-ops because it's the default
since commit 7b1ee5cf13

- Some tests have a "userspace" tag pretending they TEST_USERSPACE but
don't, and vice versa: fix missing or spurious "userspace" tags in
testcase.yaml files.

Tests have a _spurious_ "userspace" tag when they PASS this command
because none should pass:

  ./scripts/sanitycheck --tag=userspace -p qemu_x86 \
      --extra-args=CONFIG_TEST_USERSPACE=n  \
      --extra-args=CONFIG_USERSPACE=n | tee userspace.log

All tests run by this command must either fail to build or fail to run
with some userspace related error. Shortcut to look at all test
failures:

 zephyr_failure_logs() {
     awk '/see.*log/ {print $2}' "$@"
 }

Tests _missing_ the "userspace" tag FAIL either to build or to run with
some userspace-related error when running this:

  ./scripts/sanitycheck --exclude=userspace -p qemu_x86 \
      --extra-args=CONFIG_TEST_USERSPACE=n  \
      --extra-args=CONFIG_USERSPACE=n | tee excludeuserspace.log

Note the detection methods above are not 100% perfect because some
flexible tests like tests/kernel/queue/src/main.c evade them with #ifdef
CONFIG_USERSPACE smarts. Considering they never break, it is purely the
test author's decision to include or not such flexible tests in the
"userspace" subset.

Signed-off-by: Marc Herbert <marc.herbert@intel.com>
2019-05-30 08:45:39 -04:00
Jakob Olesen
c8708d9bf3 misc: Replace uses of __builtin_*_overflow() with <misc/math_extras.h>.
Use the new math_extras functions instead of calling builtins directly.

Change a few local variables to size_t after checking that all uses of
the variable actually expect a size_t.
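
For example, an overflow-checked computation with the new helpers
(a sketch; the wrapper function itself is illustrative):

  #include <stdbool.h>
  #include <stddef.h>
  #include <misc/math_extras.h>

  /* Returns false if count * item_size would overflow size_t. */
  static bool checked_total(size_t count, size_t item_size, size_t *total)
  {
      /* size_mul_overflow() wraps the compiler builtin and reports
       * overflow via its return value. */
      return !size_mul_overflow(count, item_size, total);
  }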

Signed-off-by: Jakob Olesen <jolesen@fb.com>
2019-05-14 19:53:30 -05:00
Andrew Boie
9f04c7411d kernel: enforce usage of CONFIG_TEST_USERSPACE
If a test tries to create a user thread, and the platform
supports user mode, and CONFIG_TEST_USERSPACE has not been
enabled, fail an assertion.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-06 14:30:42 -04:00
Andrew Boie
4e5c093e66 kernel: demote K_THREAD_STACK_BUFFER() to private
This macro is slated for complete removal, as it's not possible
on arches with an MPU stack guard to know the true buffer bounds
without also knowing the runtime state of its associated thread.

As removing this completely would be invasive to where we are
in the 1.14 release, demote to a private kernel Z_ API instead.
The current way that the macro is being used internally will
not cause any undue harm; we just don't want any external code
depending on it.

The final work to remove this (and overhaul stack specification in
general) will take place in 1.15 in the context of #14269

Fixes: #14766

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-05 16:10:02 -04:00
Patrik Flykt
24d71431e9 all: Add 'U' suffix when using unsigned variables
Add a 'U' suffix to values when computing and comparing against
unsigned variables.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
Flavio Ceolin
625ac2e79f spinlock: Change function signature to return bool
Functions z_spin_lock_valid and z_spin_unlock_valid are essentially
boolean functions; just change their signature to return a bool instead
of an integer.

MISRA-C rule 10.1

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 22:06:45 -04:00
Andrew Boie
f4631d5b43 kernel: amend comment in k_thread_create handler
This behavior is expected and not of any concern.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-20 13:59:26 -07:00
Andrew Boie
d0035f9779 kernel: fix stack size check in k_thread_create
The pointer arithmetic used didn't account for ARC
supervisor mode stacks, which are allocated at the
end of the stack object. Use the new macro to know
exactly how much space is reserved.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-20 13:59:26 -07:00
Andy Ross
f37e0c6e4d kernel/spinlock: Fix race in spinlock validation
The k_spin_lock() validation was setting the new owner of the spinlock
BEFORE the actual lock was taken, so it could race against other
processors trying the same thing.  Split the modification step out
into a separate function that can be called after we affirmatively
have the lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross
42ed12a387 kernel/sched: arch/x86_64: Support synchronous k_thread_abort() in SMP
Currently thread abort doesn't work if a thread is currently scheduled
on a different CPU, because we have no way of delivering an interrupt
to the other CPU to force the issue.  This patch adds a simple
framework for an architecture to provide such an IPI, implements it
for x86_64, and uses it to implement a spin loop in abort for the case
where a thread is currently scheduled elsewhere.

On SMP architectures (xtensa) where no such IPI is implemented, we
fall back to waiting on an arbitrary interrupt to occur.  This "works"
for typical code (and all current tests), but of course it cannot be
guaranteed on such an architecture that k_thread_abort() will return
in finite time (e.g. the other thread on the other CPU might have
taken a spinlock and entered an infinite loop, so it will never
receive an interrupt to terminate itself)!

On non-SMP architectures this patch changes no code paths at all.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Patrik Flykt
4344e27c26 all: Update reserved function names
Update reserved function names starting with one underscore, replacing
them as follows:
   '_k_' with 'z_'
   '_K_' with 'Z_'
   '_handler_' with 'z_handl_'
   '_Cstart' with 'z_cstart'
   '_Swap' with 'z_swap'

This renaming is done on both global and those static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.

Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.

Various generator scripts have also been updated as well as perf,
linker and usb files. These are
   drivers/serial/uart_handlers.c
   include/linker/kobject-text.ld
   kernel/include/syscall_handler.h
   scripts/gen_kobject_list.py
   scripts/gen_syscall_header.py

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00
Ioannis Glaropoulos
d69c2f8129 kernel: documentation for _setup_new_thread()
Add a note in the documentation of the _setup_new_thread()
function stating that the caller is responsible for
providing a size argument that corresponds to the available
thread stack area.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-03-09 11:57:24 -08:00
Andy Ross
e456d0f7dd kernel/thread: Spinlockify
Straightforward spinlock around the global thread state.  Two changes
to the locking strategy were needed:

1. There was a needless recursive lock taken in schedule_new_thread().
This is only ever invoked in circumstances where the lock was already
held, or where there is no need for internal synchronization.

2. The recursive irq_lock() around the loop that spawns the initial
static threads (which happens at the start of main thread execution)
was removed.  Most of the job (i.e. making sure the threads don't run
before the loop is finished) was already duplicated by the sched_lock
it was already taking, and the attempt to promise that all the
timeouts happen on the same tick is already true by construction at
system startup on uniprocessor systems, and not possible to guarantee
at all under SMP (where other CPUs can take that timer interrupt).  We
don't document or test for this feature, so don't try to be fancy.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
ec554f44d9 kernel: Split reschedule & pend into irq/spin lock versions
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch.  The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional locking.

Just refactoring.  No logic changes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
5aa7460e5c kernel/spinlock: Move validation out of header inlines
The validation checking recently added to spinlocks is useful, but
requires kernel-internals like _current and _current_cpu in a header
context that tends to be needed before those are declared (or where we
don't want them declared), and is causing big header dependency
headaches.

Move it to C code, it's just a validation tool, not a performance
thing.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
53cae5f471 kernel: Use _reschedule() instead of _Swap() where possible
These two spots were duplicating logic that is already done inside
_reschedule(), which is the cleaner, less dangerous API.  Use it where
possible when outside the scheduler internals.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
ab46b1b3c5 kernel/sched: CPU mask affinity/pinning API
This adds a simple implementation of SMP CPU affinity to Zephyr.  The
API is simple and doesn't try to invent abstractions like "cpu sets".
Each thread has an enable/disable flag associated with each CPU in the
system, and the bits can be turned on and off (for threads that are
not currently runnable, of course) using an easy three-function API.

Because the implementation picked requires enumerating runnable
threads in priority order looking for one that matches the current CPU,
this is not a good fit for the SCALABLE or MULTIQ scheduler backends,
so it currently can be enabled only for SCHED_DUMB (which is the
default anyway).  Fancier algorithms do exist, but even the best of
them scale as O(N_CPUS), so aren't quite constant time and often
require significant memory overhead to keep separate lists for
different cpus/sets.

The intended use here is for apps that want to "pin" threads to
specific CPUs for latency control, or conversely to prevent certain
threads from taking time on specific CPUs to leave them free for fast
response.
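
Pinning a thread to a single CPU then looks like this (a sketch; the
thread must not be runnable while its mask is changed):

  #include <kernel.h>

  void pin_to_cpu0(k_tid_t thread)
  {
      k_thread_cpu_mask_clear(thread);       /* disallow every CPU...    */
      k_thread_cpu_mask_enable(thread, 0);   /* ...then allow only CPU 0 */
  }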

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 21:37:24 -05:00
Flavio Ceolin
6a4a86e413 kernel: Change k_is_in_isr to return bool
Change this function to return a boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Flavio Ceolin
09e362e0d0 kernel: Change _is_thread_essential to return bool
Change this function to return a boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Flavio Ceolin
76b3518ce6 kernel: Make statements evaluate boolean expressions
MISRA-C requires that the controlling expression of an if statement has
essentially Boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00