Commit Graph

1393 Commits

Andrew Boie
2acfcd6b05 userspace: add thread-level permission tracking
Now creating a thread will assign it a unique, monotonically increasing
id which is used to reference the permission bitfield in the kernel
object metadata.

Stub functions in userspace.c now implemented.

_new_thread is now wrapped in a common function with pre- and post-
architecture thread initialization tasks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie
5cfa5dc8db kernel: add K_USER flag and _is_thread_user()
Indicates that the thread is configured to run in user mode.
Delete stub function in userspace.c

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie
f564986d2f kernel: add _k_syscall_entry stub
This is the kernel-side landing site for system calls. It's currently
just a stub.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie
1f32d09bd8 kernel: specify arch functions for userspace
Any arches that support userspace will need to implement these
functions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie
9f70c7b281 kernel: reorganize CONFIG_USERSPACE
This now depends on a capability Kconfig.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie
26d1eb38e6 stack_sentinel: remove check in _new_thread
We already check the stack sentinel for the outgoing thread when we
_Swap; just leverage that.

The thread state check in _check_stack_sentinel now only exits if the
current thread is a dummy thread.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:32:00 -07:00
Andrew Boie
9a74a081e5 _thread_entry: don't use _current
The thread may be in user mode when it returns and can't look at
_current. Use k_current_get(), which will be a system call.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:32:00 -07:00
Andrew Boie
f5adf534e8 kernel: declare interface for checking buffers
This will be used by system call handlers to ensure that any memory
regions passed in from userspace are actually accessible by the calling
thread.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 08:40:41 -07:00
Andrew Boie
1e06ffc815 zephyr: use k_thread_entry_t everywhere
In various places, a private _thread_entry_t, or the full prototype
were being used. Be consistent and use the same typedef everywhere.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-11 11:18:22 -07:00
Anas Nashif
8920cf127a cleanup: Move #include directives
Move all #include directives to the very top of the file, before any
code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-09-11 12:41:07 -04:00
Andrew Boie
f2c83acafc kernel: remove k_thread_spawn()
This API was deprecated in 1.8, we can remove for 1.10.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-11 12:30:51 -04:00
Andrew Boie
8eaff5d6d2 k_thread_abort(): assert if abort essential thread
Previously, this was only done if an essential thread self-exited,
and was a runtime check that generated a kernel panic.

Now if any thread has k_thread_abort() called on it, and that thread
is essential to the system operation, this check is made. It is now
an assertion.

_NANO_ERR_INVALID_TASK_EXIT checks and printouts removed since this
is now an assertion.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:35:16 -07:00
Andrew Boie
7d627c5971 k_thread_create(): allow K_FOREVER delay
It's now possible to instantiate a thread object, but delay its
execution indefinitely. This was already supported with K_THREAD_DEFINE.

A new API, k_thread_start(), now exists to start threads that are in
this state.

The intended use-case is to initialize a thread with K_USER, then grant
it various access permissions, and only then start it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:35:04 -07:00
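
A minimal sketch of the flow described above, assuming the
k_thread_create() signature of this era (thread object, stack object,
stack size, entry point, three parameters, priority, options, delay);
the thread, stack and entry names are hypothetical:

    #include <kernel.h>

    #define MY_STACK_SIZE 1024
    #define MY_PRIORITY 5

    K_THREAD_STACK_DEFINE(my_stack, MY_STACK_SIZE);
    static struct k_thread my_thread;

    static void my_entry(void *p1, void *p2, void *p3)
    {
        /* runs only once k_thread_start() has been called */
    }

    void spawn_later(void)
    {
        /* Instantiate the thread but do not schedule it yet. */
        k_tid_t tid = k_thread_create(&my_thread, my_stack,
                                      K_THREAD_STACK_SIZEOF(my_stack),
                                      my_entry, NULL, NULL, NULL,
                                      MY_PRIORITY, K_USER, K_FOREVER);

        /* ...grant the thread access to kernel objects here... */

        k_thread_start(tid);    /* now the thread becomes runnable */
    }
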
Andrew Boie
8e51f36bbf kernel: version: no need to store version in RAM
This is a build-time constant, just return it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:34:50 -07:00
Andrew Boie
0a85eaad05 init: initialize dummy thread stack info
Garbage values here could wreak havoc on the initial switch to main
depending on how arch-specific _Swap() manages memory permissions when
switching threads.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:34:41 -07:00
Andrew Boie
945af95f42 kernel: introduce object validation mechanism
All system calls made from userspace which involve pointers to kernel
objects (including device drivers) will need to have those pointers
validated; userspace should never be able to crash the kernel by passing
it garbage.

The actual validation with _k_object_validate() will be in the system
call receiver code, which doesn't exist yet.

- CONFIG_USERSPACE introduced. We are somewhat far away from having an
  end-to-end implementation, but at least need a Kconfig symbol to
  guard the incoming code with. Formal documentation doesn't exist yet
  either, but will appear later down the road once the implementation is
  mostly finalized.

- In the memory region for RAM, the data section has been moved last,
  past bss and noinit. This ensures that inserting generated tables
  with addresses of kernel objects does not change the addresses of
  those objects (which would make the table invalid)

- The DWARF debug information in the generated ELF binary is parsed to
  fetch the locations of all kernel objects and pass this to gperf to
  create a perfect hash table of their memory addresses.

- The generated gperf code doesn't know that we are exclusively working
  with memory addresses and uses memory inefficiently. A post-processing
  script process_gperf.py adjusts the generated code before it is
  compiled to work with pointer values directly and not strings
  containing them.

- _k_object_init() calls inserted into the init functions for the set of
  kernel object types we are going to support so far

Issue: ZEP-2187
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:33:33 -07:00
Inaky Perez-Gonzalez
1abd064ce7 boot: move boot banner and delay before SYS_INIT_LEVEL_APPLICATION
Fixes https://github.com/zephyrproject-rtos/zephyr/issues/1280, but
also many other failures where output was garbled due to this. Other
similarly affected issues are the missing first benchmark (context) in
the latency benchmark and some net tests.

Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
2017-09-07 18:29:05 -05:00
Youvedeep Singh
d787e3c554 timer: k_timer_start should accept 0 as duration parameter.
k_timer_start(timer, duration, period) is the API used to
start a timer. Currently the duration parameter accepts
only positive numbers.
But a user may need to do some periodic activity
ASAP and start the timer with a 0 value. So this patch
allows 0 as the minimum value of duration.
In this patch, when the duration value is set to 0, the
timer expiration handler is called directly instead of
submitting the timer into the timeout queue.

Jira: ZEP-2497

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2017-09-06 10:18:39 -07:00
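
A short illustration of the new behaviour, assuming the
millisecond-based k_timer_start() signature of this era; the timer and
handler names are hypothetical:

    #include <kernel.h>

    static void my_expiry(struct k_timer *timer)
    {
        /* periodic work goes here */
    }

    K_TIMER_DEFINE(my_timer, my_expiry, NULL);

    void start_now(void)
    {
        /* Duration 0: the expiry handler is invoked right away instead
         * of the timer being queued, then it repeats every 100 ms.
         */
        k_timer_start(&my_timer, 0, 100);
    }
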
Youvedeep Singh
76b577e180 tests: benchmark: timing_info: Change API/variable Name.
The API/variable names in timing_info look very specific to one
platform (like systick etc.), whereas these variables are used
across platforms (nrf/arm/quark).
So this patch:
1. Changes the API/variable names to generic ones.
2. Creates some macros whose implementation is platform
dependent.

Jira: ZEP-2314

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2017-08-31 14:25:31 -04:00
Luiz Augusto von Dentz
87aa621915 kernel: Use SYS_DLIST_FOR_EACH_CONTAINER whenever possible
SYS_DLIST_FOR_EACH_CONTAINER is preferable over using
SYS_DLIST_FOR_EACH_NODE, as that avoids casting directly, which assumes the
node field is always at the beginning.

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-25 09:08:50 -04:00
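
A small sketch of the difference, with a hypothetical container type
and list; SYS_DLIST_FOR_EACH_CONTAINER takes the container variable and
the name of the embedded node field:

    #include <misc/dlist.h>

    struct waiter {
        int id;
        sys_dnode_t node;   /* need not be the first member */
    };

    static sys_dlist_t waiters = SYS_DLIST_STATIC_INIT(&waiters);

    void walk_waiters(void)
    {
        struct waiter *w;

        SYS_DLIST_FOR_EACH_CONTAINER(&waiters, w, node) {
            /* 'w' points at the containing struct waiter, so the node
             * field does not have to sit at offset 0.
             */
        }
    }
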
Luiz Augusto von Dentz
7d01c5ecb7 poll: Enable multiple threads to use k_poll in the same object
This is necessary in order for k_queue_get to work properly, since it
is used with buffer pools which might be accessed by multiple threads
asking for buffers.

Jira: ZEP-2553

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-25 09:00:46 -04:00
Anas Nashif
83088a235c kernel: init: print boot banner before static threads
The boot banner was being printed after static threads had started;
this is visible, for example, with tests using ztest.
This patch prints the banner message before starting any threads.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-08-24 10:51:04 -04:00
Andy Ross
53c859998d kernel: POSIX thread IPC support
Partial implementation of the IEEE 1003.1 pthread API, including
mutexes and condition variables in their default behaviors, and
pthread barrier objects.  The rwlock and spinlocks abstractions are
not supported in this commit (both only make sense in the presence of
multiple SMP processors).

Note that this is the IPC mechanisms only.  The thread creation API
itself is unsupported: Zephyr threads work differently from pthreads
and don't port cleanly in all cases.  Likewise the "_INITIALIZER"
macros from pthreads don't work cleanly here, and _DECLARE macros have
been provided to statically initialize pthread primitives in a manner
more native to Zephyr

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2017-08-15 19:42:07 -04:00
Luiz Augusto von Dentz
c1fa82b3c6 work_q: Make k_delayed_work_cancel cancel work already pending
This has been a limitation caused by k_fifo which could only remove
items from the beginning, but with the change to use k_queue in
k_work_q it is now possible to remove items from any position with
use of k_queue_remove.

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-15 08:49:09 -04:00
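
A minimal usage sketch, assuming the millisecond-based k_delayed_work
API of this era; the work item and handler names are hypothetical:

    #include <kernel.h>

    static void report_handler(struct k_work *work)
    {
        /* deferred processing */
    }

    static struct k_delayed_work report_work;

    void schedule_and_cancel(void)
    {
        k_delayed_work_init(&report_work, report_handler);
        k_delayed_work_submit(&report_work, 500);

        /* With this change the cancel also works if the delay has
         * already expired and the item is pending in the work queue.
         */
        k_delayed_work_cancel(&report_work);
    }
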
Luiz Augusto von Dentz
adb581be8e work: Convert usage of k_fifo to k_queue
Make use of k_queue directly since it has a more flexible API.

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-15 08:49:09 -04:00
Luiz Augusto von Dentz
84db641de6 queue: Use k_poll if enabled
This makes use of POLL_EVENT in case k_poll is enabled which is
preferable over wait_q, as that allows objects to be removed from the
data_q at any time.

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-15 08:49:09 -04:00
Kumar Gala
bd9a1548ac ztest: reduce MAIN_STACK_SIZE stack to 512 bytes
Save some memory on small-memory systems when running ztests.  We have
our own stack in ztest, so we should be able to get away with reducing
the main stack.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-08-10 18:24:16 -04:00
Wayne Ren
f8d061faf7 arch: arc: add nested interrupt support
* add nested interrupt support for interrupts
   + use a variable exc_nest_count to track nested interrupts and
     exceptions
   + regular interrupts can be nested by regular interrupts and fast
     interrupts
   + fast interrupts have the highest priority and cannot be nested
* remove the firq stack and exception stack
   + remove the corresponding kconfig option
   + all interrupts (normal and fast) and exceptions will be handled
     on the same stack (_interrupt stack)
   + the pros are a smaller memory footprint (no firq stack), simpler
     stack management, simpler code, etc. The cons are a possible
     10-15 instruction overhead for the case where a fast irq nests
     a regular irq
* add the case of ARC in test/kernel/gen_isr_table

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-08-10 12:47:15 -04:00
Youvedeep Singh
f807d4db7e Scheduler: Same priority Preemptive threads should get equal time slice
If there are multiple preemptive threads with the same priority, and
any one thread gives up the CPU before its time slice expires (due to
yields, semaphore takes, queue operations, etc.), then the next
scheduled thread gets a shorter time slice than expected.
This patch fixes the issue by accounting for the time already elapsed
when a thread releases the CPU before its time slice expires.

Jira: ZEP-2217/ZEP-2218

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2017-08-08 08:51:24 -04:00
Anas Nashif
c6ba67fe3f kconfig: move dts Kconfigs to dts/
Those were placed under kernel/ for no good reason.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-08-03 07:19:29 -05:00
Anas Nashif
11acc391dc kconfig: remove empty and unused kernel.config
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-08-03 07:19:29 -05:00
Andrew Boie
507852a4ad kernel: introduce opaque data type for stacks
Historically, stacks were just character buffers and could be treated
as such if the user wanted to look inside the stack data, and also
declared as an array of the desired stack size.

This is no longer the case. Certain architectures will create a memory
region much larger to account for MPU/MMU guard pages. Unfortunately,
the kernel interfaces treat both the declared stack, and the valid
stack buffer within it as the same char * data type, even though these
absolutely cannot be used interchangeably.

We introduce an opaque k_thread_stack_t which gets instantiated by
K_THREAD_STACK_DECLARE(), this is no longer treated by the compiler
as a character pointer, even though it really is.

To access the real stack buffer within, the result of
K_THREAD_STACK_BUFFER() can be used, which will return a char * type.

This should catch a bunch of programming mistakes at build time:

- Declaring a character array outside of K_THREAD_STACK_DECLARE() and
  passing it to K_THREAD_CREATE
- Directly examining the stack created by K_THREAD_STACK_DECLARE()
  which is not actually the memory desired and may trigger a CPU
  exception

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-08-01 16:43:15 -07:00
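
A brief sketch of the resulting usage pattern, using the stack
definition macros introduced earlier in this series (the commit text
refers to K_THREAD_STACK_DECLARE()); names are hypothetical:

    #include <kernel.h>

    /* Declares an opaque stack object; the underlying region may be
     * larger than 512 bytes on arches that add MPU/MMU guard areas.
     */
    K_THREAD_STACK_DEFINE(worker_stack, 512);

    void inspect_stack(void)
    {
        /* Size value intended for k_thread_create()... */
        size_t size = K_THREAD_STACK_SIZEOF(worker_stack);

        /* ...and the real char buffer, if it must be examined. */
        char *buf = K_THREAD_STACK_BUFFER(worker_stack);

        (void)size;
        (void)buf;
    }
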
Andy Ross
4c63af8434 mem_pool: Don't check level_empty() before breaking a block
This test was just wrong.  If the current thread did not race with any
others during the allocation process, then the result will be false
because it was detected so earlier in the function.  If we did race,
then sure: it might be true now if someone snuck in and freed a block.
But so what?  We already have the block we want to break.  The
behavior in the code as written was to early-exit from the break loop,
returning a buffer that was larger than the one requested (though
otherwise benign -- we wouldn't leak, just waste memory).  No idea
what I was thinking.

Thanks to Du Quanwen for the diagnosis.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2017-07-31 09:14:59 -07:00
Andrew Boie
0fab8a6dc5 x86: page-aligned stacks with guard page
Subsequent patches will set this guard page as unmapped,
triggering a page fault on access. If this is due to
stack overflow, a double fault will be triggered,
which we are now capable of handling with a switch to
a known good stack.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-07-25 11:32:36 -04:00
Maureen Helm
7bf0df3aec dts: Generate Kinetis adc settings from device tree
Adds common and Kinetis-specific adc device tree properties, and updates
all Kinetis SoC and board dts files to include adc nodes.

Jira: ZEP-1396

Signed-off-by: Maureen Helm <maureen.helm@nxp.com>
2017-07-19 14:28:08 -05:00
Paul Sokolovsky
b1e7481763 kernel: boot: Fix double prompt definition for CONFIG_BOOT_DELAY
This fixes Kconfig warning:

scripts/kconfig/conf --silentoldconfig Kconfig
zephyr/kernel/Kconfig:209:warning: prompt redefined

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2017-07-19 09:26:17 +03:00
Inaky Perez-Gonzalez
c51f73f77f boot: add CONFIG_BOOT_DELAY option
Introduce a configurable boot delay option (defaulting to none) that
happens right after printing a boot delay banner, before calling
main() in kernel/init.c:_main(), before taking timestamps for _main()
and once all the infrastructure is in place. Move also the boot banner
to happen after this delay.

The rationale for this is some boards will boot really fast and print
out some test case output in the serial port before the system that is
monitoring the serial port is able to read from the serial port.

This happens in MCUs whose serial port is embedded in a USB connection
which also is used to power the MCU board. When powering it on by
powering the USB port, there is a time it takes the host system to
detect the USB connection, enumerate the serial port, configure it and
load, start and read from the serial port. By that time, the board
might already have printed its output to the serial port.

While manually it is possible to press a reset button, on automation
setups this adds a lot of overhead and cabling or modifications to the
MCU that are easier (and cheaper) to overcome with this delay. Other
options (like using a separate serial line) might not be possible or
add a lot of cabling and cost, plus it'd also add extra build
configuration.

Change-Id: I2f4d1ba356de6cefa19b4ef5c9f19f87885d4dfd
Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
2017-07-18 08:31:45 +03:00
Marti Bolivar
4995820acf dts: i2c: fix build issue by defaulting HAS_DTS_I2C to n
Commit 1bc2fdc70 ("dts: arm: STM32 boards use DT to configure I2C")
added a new Kconfig option, HAS_DTS_I2C, which should be set when the
target supports configuration of I2C peripherals via Device Tree.

Currently, STM32 targets select this. However, the fact that
HAS_DTS_I2C has no default is causing prompting when building Zephyr
on other targets with DTS. To avoid this and allow builds to complete
as usual, have HAS_DTS_I2C default to n.

Signed-off-by: Marti Bolivar <marti.bolivar@linaro.org>
2017-07-12 10:40:28 -05:00
Andrew Boie
65a9d2a94a kernel: make K_.*_INITIALIZER private to kernel
Upcoming memory protection features will be placing some additional
constraints on kernel objects:

- They need to reside in memory owned by the kernel and not the
application
- Certain kernel object validation schemes will require some run-time
initialization of all kernel objects before they can be used.

Per Ben these initializer macros were never intended to be public. It is
not forbidden to use them, but doing so requires care: the memory being
initialized must reside in kernel space, and extra runtime
initialization steps may need to be performed before they are fully
usable as kernel objects. In particular, kernel subsystems or drivers
whose objects are already in kernel memory may still need to use these
macros if they define kernel objects as members of a larger data
structure.

It is intended that application developers instead use the
K_<object>_DEFINE macros, which will automatically put the object in the
right memory and add them to a section which can be iterated over at
boot to complete initialization.

There was no K_WORK_DEFINE() macro for creating struct k_work objects,
this is now added.

k_poll_event and k_poll_signal are intended to be instantiated from
application memory and have not been changed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-07-10 11:44:56 -07:00
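
A short sketch contrasting the recommended application-level style with
plain runtime submission; the handler and object names are
hypothetical:

    #include <kernel.h>

    static void blink_handler(struct k_work *work)
    {
        /* work item body */
    }

    /* Preferred for applications: the DEFINE macros place the objects
     * in kernel memory and register them for any boot-time init they
     * need.
     */
    K_WORK_DEFINE(blink_work, blink_handler);
    K_SEM_DEFINE(blink_sem, 0, 1);

    void submit_blink(void)
    {
        k_work_submit(&blink_work);
        k_sem_give(&blink_sem);
    }
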
Yannis Damigos
1bc2fdc704 dts: arm: STM32 boards use DT to configure I2C
Configure I2C using DT for the following STM32 boards:

disco_l475_iot1
nucleo_f401re
96b_carbon
olimexino_stm32

Signed-off-by: Yannis Damigos <giannis.damigos@gmail.com>
2017-07-07 10:31:34 -05:00
Andrew Boie
bf5228ea56 kernel: add early init routines for app RAM
Applications will have their own BSS and data sections which
will need to be additionally copied.

This covers the common C implementation of these functions.
Arches which implement their own optimized versions will need
to be updated.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-29 07:46:58 -04:00
Andrew Boie
2dc207c987 kernel: add config for app/kernel split
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-29 07:46:58 -04:00
Anas Nashif
397d29db42 linker: move all linker headers to include/linker
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-06-18 09:24:04 -05:00
Adithya Baglody
be1cb961ad tests: benchmark: boot_time: Reading time stamps made arch agnostic
1. Changed _tsc_read() to k_cycle_get_32(), so that reading the
time stamp will be agnostic of the architecture used.
2. Changed the variable names from *_tsc to *_time_stamp.

JIRA: ZEP-1426

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-06-16 07:37:37 -05:00
David B. Kinder
9faa5f2033 doc: spelling fixes in Kconfig files
regular spelling check on Kconfig.* files

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2017-06-12 19:40:51 -04:00
Andrew Boie
dc5d935d12 kernel: introduce stack definition macros
The existing __stack decorator is not flexible enough for upcoming
thread stack memory protection scenarios. Wrap the entire thing in
a declaration macro abstraction instead, which can be implemented
on a per-arch or per-SOC basis.

Issue: ZEP-2185
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-09 18:53:28 -04:00
Andrew Boie
ae1a75b82e stack_sentinel: change cooperative check
One of the stack sentinel policies was to check the sentinel
any time a cooperative context switch is done (i.e, _Swap is
called).

This was done by adding a hook to _check_stack_sentinel in
every arch's __swap function.

This way is cleaner as we just have the hook in one inline
function rather than implemented in several different assembly
dialects.

The check upon interrupt is now made unconditionally rather
than checking if we are calling __swap, since the check now
is only called on cooperative _Swap(). The interrupt is always
serviced first.

Issue: ZEP-2244
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-08 13:49:36 -05:00
Andrew Boie
3989de7e3b kernel: fix short time-slice reset
The kernel tracks time slice usage with the _time_slice_elapsed global.
Every time the timer interrupt goes off and the timer driver calls
_nano_sys_clock_tick_announce() with the elapsed time, this is added to
_time_slice_elapsed. If it exceeds the total time slice, the thread is
moved to the back of the queue for that priority level and
_time_slice_elapsed is reset to zero.

In a non-tickless kernel, this is the only time _time_slice_elapsed is
reset.  If a thread uses up a partial time slice, and then cooperatively
switches to another thread, the next thread will inherit the remaining
time slice, causing it not to be able to run as long as it ought to.

There does exist code to properly reset the elapsed count, but it was
only compiled in a tickless kernel. Now it is built any time
CONFIG_TIMESLICING is enabled.

Issue: ZEP-2107
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-02 14:47:01 -04:00
Maciek Borzecki
81bdee3592 kernel: make _dump_ready_q() static and visible only with CONFIG_KERNEL_DEBUG
Fixes sparse warning:
<snip>/zephyr/kernel/sched.c:368:6: warning: symbol '_dump_ready_q' was not declared. Should it be static?

Change-Id: I156e89f1d74178bbd99cc25e532da544c7ebee60
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
2017-05-18 12:41:56 -05:00
Maciek Borzecki
059544d1ae kernel: make sure that CONFIG_OBJECT_TRACING structs are properly ifdef'ed
Fixes sparse warnings:
<snip>/zephyr/kernel/timer.c:15:16: warning: symbol '_trace_list_k_timer' was not declared. Should it be static?
<snip>/zephyr/kernel/sem.c:32:14: warning: symbol '_trace_list_k_sem' was not declared. Should it be static?
<snip>/zephyr/kernel/stack.c:24:16: warning: symbol '_trace_list_k_stack' was not declared. Should it be static?
<snip>/zephyr/kernel/queue.c:27:16: warning: symbol '_trace_list_k_queue' was not declared. Should it be static?
<snip>/zephyr/kernel/pipes.c:40:15: warning: symbol '_trace_list_k_pipe' was not declared. Should it be static?
<snip>/zephyr/kernel/mutex.c:46:16: warning: symbol '_trace_list_k_mutex' was not declared. Should it be static?
<snip>/zephyr/kernel/msg_q.c:26:15: warning: symbol '_trace_list_k_msgq' was not declared. Should it be static?
<snip>/zephyr/kernel/mem_slab.c:20:19: warning: symbol '_trace_list_k_mem_slab' was not declared. Should it be static?
<snip>/zephyr/kernel/mailbox.c:53:15: warning: symbol '_trace_list_k_mbox' was not declared. Should it be static?

Change-Id: I42d55aea9855b9c1dd560852ca033c9a19f1ac21
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
2017-05-18 12:41:56 -05:00
Maciek Borzecki
ed016fa9a0 kernel: make sure that _thread_entry() declaration matches with definition
Fixes sparse warning:
  CHECK   <snip>/zephyr/kernel/thread.c
<snip>/zephyr/kernel/thread.c:184:20: error: symbol '_thread_entry' redeclared with different type (originally declared at <snip>/zephyr/kernel/include/nano_internal.h:43) - different modifiers
  CC      kernel/thread.o

Change-Id: I2223493cdf97c811c661773f8fd430e6c00cbaa0
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
2017-05-18 12:41:56 -05:00
Maciek Borzecki
4fef76082a kernel: k_timer_init: use NULL when initializing user data
Fixes sparse warning:
<snip>/zephyr/kernel/timer.c:105:28: warning: Using plain integer as NULL pointer
  CC      kernel/timer.o

Change-Id: Ic17a0b976d25079711f10137667148a321c95dbf
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
2017-05-18 12:41:56 -05:00
Andrew Boie
5dcb279df8 debug: add stack sentinel feature
This places a sentinel value at the lowest 4 bytes of a stack
memory region and checks it at various intervals, including when
servicing interrupts or context switching.

This is implemented on all arches except ARC, which supports stack
bounds checking directly in hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-13 15:14:41 -04:00
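
A conceptual sketch of the policy only, not the kernel's actual code;
the sentinel value and helper names are illustrative:

    #include <kernel.h>

    #define MY_STACK_SENTINEL 0xF0F0F0F0u   /* illustrative value */

    /* Write a sentinel into the lowest word of a stack buffer... */
    static void sentinel_arm(u32_t *stack_bottom)
    {
        *stack_bottom = MY_STACK_SENTINEL;
    }

    /* ...and verify it later (e.g. when servicing an interrupt or
     * context switching); an overwritten sentinel means the stack
     * overflowed.
     */
    static void sentinel_check(const u32_t *stack_bottom)
    {
        if (*stack_bottom != MY_STACK_SENTINEL) {
            k_oops();   /* or arch-specific fatal error handling */
        }
    }
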
Andrew Boie
41c68ece83 kernel: publish offsets to thread stack info
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-13 15:14:41 -04:00
Andrew Boie
50a533f7a5 kernel: init: mark initial dummy thread
The initial dummy thread context used for the initial __swap to
the main thread at early kernel initialization was not marked as a dummy
thread as it ought to be.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-13 15:14:41 -04:00
Andy Ross
73cb9586ce k_mem_pool: Complete rework
This patch amounts to a mostly complete rewrite of the k_mem_pool
allocator, which had been the source of historical complaints vs. the
one easily available in newlib.  The basic design of the allocator is
unchanged (it's still a 4-way buddy allocator), but the implementation
has made different choices throughout.  Major changes:

Space efficiency: The old implementation required ~2.66 bytes per
"smallest block" in overhead, plus 16 bytes per log4 "level" of the
allocation tree, plus a global tracking struct of 32 bytes and a very
surprising 12 byte overhead (in struct k_mem_block) per active
allocation on top of the returned data pointer.  This new allocator
uses a simple bit array as the only per-block storage and places the
free list into the freed blocks themselves, requiring only ~1.33 bits
per smallest block, 12 bytes per level, 32 bytes globally and only 4
bytes of per-allocation bookkeeping.  And it puts more of the generated
tree into BSS, slightly reducing binary sizes for non-trivial pool
sizes (even as the code size itself has increased a tiny bit).

IRQ safe: atomic operations on the store have been cut down to be at
most "4 bit sets and dlist operations" (i.e. a few dozen
instructions), reducing latency significantly and allowing us to lock
against interrupts cleanly from all APIs.  Allocations and frees can
be done from ISRs now without limitation (well, obviously you can't
sleep, so "timeout" must be K_NO_WAIT).

Deterministic performance: there is no more "defragmentation" step
that must be manually managed.  Block coalescing is done synchronously
at free time and takes constant time (strictly log4(num_levels)), as
the detection of four free "partner bits" is just a simple shift and
mask operation.

Cleaner behavior with odd sizes.  The old code assumed that the
specified maximum size would be a power of four multiple of the
minimum size, making use of non-standard buffer sizes problematic.
This implementation re-aligns the sub-blocks at each level and can
handle situations where alignment restrictions mean fewer than 4x will
be available.  If you want precise layout control, you can still
specify the sizes rigorously.  It just doesn't break if you don't.

More portable: the original implementation made use of GNU assembler
macros embedded inline within C __asm__ statements.  Not all
toolchains are actually backed by a GNU assembler even when the
support the GNU assembly syntax.  This is pure C, albeit with some
hairy macros to expand the compile-time-computed values.

Related changes that had to be rolled into this patch for bisectability:

* The new allocator has a firm minimum block size of 8 bytes (to store
  the dlist_node_t).  It will "work" with smaller requested min_size
  values, but obviously makes no firm promises about layout or how
  many will be available.  Unfortunately many of the tests were
  written with very small 4-byte minimum sizes and to assume exactly
  how many they could allocate.  Bump the sizes to match the allocator
  minimum.

* The mbox and pipes API made use of the internals of k_mem_block and
  had to be ported to the new scheme.  Blocks no longer store a
  backpointer to the pool that allocated them (it's an integer ID in a
  bitfield), so if you want to "nullify" them you have to use the
  data pointer.

* test_mbox_api had a bug where it was prematurely freeing k_mem_blocks
  that it sent through the mailbox.  This worked in the old allocator
  because the memory wouldn't be touched when freed, but now we stuff
  list pointers in there and the bug was exposed.

* Remove test_mpool_options: the options (related to defragmentation
  behavior) tested no longer exist.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2017-05-13 14:39:41 -04:00
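
A minimal usage sketch of the reworked allocator's public API, assuming
the k_mem_pool interface of this era; the pool geometry and names are
hypothetical (the new 8-byte minimum block size is respected):

    #include <kernel.h>

    /* min block 8 bytes, max block 128 bytes, 4 max-size blocks,
     * 4-byte alignment
     */
    K_MEM_POOL_DEFINE(app_pool, 8, 128, 4, 4);

    void use_pool(void)
    {
        struct k_mem_block block;

        /* Usable from ISRs as well, as long as the timeout is
         * K_NO_WAIT.
         */
        if (k_mem_pool_alloc(&app_pool, &block, 100, K_NO_WAIT) == 0) {
            /* block.data points at the allocated memory */
            k_mem_pool_free(&block);
        }
    }
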
Andrew Boie
d26cf2dc33 kernel: add k_thread_create() API
Unlike k_thread_spawn(), the struct k_thread can live anywhere and not
in the thread's stack region. This will be useful for memory protection
scenarios where private kernel structures for a thread are not
accessible by that thread, or we want to allow the thread to use all the
stack space we gave it.

This requires a change to the internal _new_thread() API as we need to
provide a separate pointer for the k_thread.

By default, we still create internal threads with the k_thread in stack
memory. Forthcoming patches will change this, but we first need to make
it easier to define k_thread memory of variable size depending on
whether we need to store coprocessor state or not.

Change-Id: I533bbcf317833ba67a771b356b6bbc6596bf60f5
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-11 20:24:22 -04:00
Paul Sokolovsky
8cc6f6ddd6 kernel: errno: Use per-thread accessor function compatible with Newlib
Newlib names this function __errno(), so if we want Zephyr to work
with Newlib seamlessly, it's better to just follow Newlib's naming
convention for Zephyr's own minimal libc.

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2017-05-10 20:54:56 -04:00
Paul Sokolovsky
3f50707672 kernel: queue, fifo: Add cancel_wait operation.
Currently, a queue/fifo getter chooses how long to wait for an
element. But there are scenarios where the putter would know better;
there should be a way to expire the getter's timeout to make it run
again. The k_queue_cancel_wait() and k_fifo_cancel_wait() functions
do just that. They cause the corresponding *_get() functions to return
a NULL value, as if the timeout had expired on the getter's side (even
with K_FOREVER).

This can be used to signal out of band conditions from putter to
getter, e.g. end of processing, error, configuration change, etc.
A specific event would be communicated to getter by other means
(e.g. using existing shared context structures).

Without this call, achieving the same effect would require e.g.
calling k_fifo_put() with a pointer to a special sentinel memory
structure - such a structure would need to be allocated somewhere
and somehow, and the getter would need to distinguish it from normal
data items. Having cancel_wait() functions offers an elegant
alternative. From this perspective, these calls can be seen as
an equivalent to e.g. k_fifo_put(fifo, NULL), except that such
call won't work in practice.

Change-Id: I47b7f690dc325a80943082bcf5345c41649e7024
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2017-05-10 09:40:33 -04:00
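
A brief sketch of the putter/getter interaction; the fifo and function
names are hypothetical:

    #include <kernel.h>

    K_FIFO_DEFINE(rx_fifo);

    /* Getter thread: a NULL return may now also mean "cancelled". */
    void getter(void)
    {
        void *item = k_fifo_get(&rx_fifo, K_FOREVER);

        if (item == NULL) {
            /* putter signalled end of processing, error, etc. */
            return;
        }
        /* normal item handling */
    }

    /* Putter side: wake the getter without queuing a sentinel item. */
    void putter_shutdown(void)
    {
        k_fifo_cancel_wait(&rx_fifo);
    }
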
David B. Kinder
f930480e16 doc: misspellings in Kconfig files
fix misspelling in Kconfig files that would show up in configuration
documentation and screens.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2017-05-05 19:38:53 -04:00
Adithya Baglody
d03b2496cd test: benchmarking: Timing metrics for the kernel
JIRA: ZEP-1822, ZEP-1823, ZEP-1825

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-05-03 08:46:30 -04:00
Ramesh Thomas
89ffd44dfb kernel: tickless: Add tickless kernel support
Adds event based scheduling logic to the kernel. Updates
management of timeouts, timers, idling etc. based on
time tracked at events rather than periodic ticks. Provides
interfaces for timers to announce and get next timer expiry
based on kernel scheduling decisions involving time slicing
of threads, timeouts and idling. Uses wall time units instead
of ticks in all scheduling activities.

The implementation involves changes in the following areas

1. Management of time in wall units like ms/us instead of ticks
The existing implementation already had an option to configure the
number of ticks in a second. The new implementation builds on
top of that feature and provides an option to set the scheduling
granularity to milliseconds or microseconds. This allows most of
the current implementation to be reused. Due to this re-use and
co-existence with the tick-based kernel, the names of variables
may contain the word "tick". However, in the tickless kernel
implementation, it represents the currently configured time unit,
which would be milliseconds or microseconds. The APIs that take
time as a parameter are not impacted and continue to pass time
in milliseconds.

2. Timers would not be programmed in periodic mode
generating ticks. Instead they would be programmed in one
shot mode to generate events at the time the kernel scheduler
needs to gain control for its scheduling activities like
timers, timeouts, time slicing, idling etc.

3. The scheduler provides interfaces that the timer drivers
use to announce elapsed time and get the next time the scheduler
needs a timer event. It is possible that the scheduler may not
need another timer event, in which case the system would wait
for a non-timer event to wake it up if it is idling.

4. New APIs are defined to be implemented by timer drivers, which
also need to handle timer events differently. These changes
have been done in the HPET timer driver. In the future, other
timer drivers that support the tickless kernel should implement
these APIs as well. These APIs re-program the timer and update
and announce elapsed time.

5. The Philosopher and timer_api applications have been enabled to
test the tickless kernel. Separate configuration files are created
which define the necessary CONFIG flags. Run these apps using the
following command:
make pristine && make BOARD=qemu_x86 CONF_FILE=prj_tickless.conf qemu

Jira: ZEP-339 ZEP-1946 ZEP-948
Change-Id: I7d950c31bf1ff929a9066fad42c2f0559a2e5983
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2017-04-27 13:46:28 +00:00
Ramesh Thomas
62eea121b3 kernel: tickless: Rename _Swap to allow creation of macro
Future tickless kernel patches will be inserting some
code before the call to _Swap. To enable this, this patch creates
a macro with the name of the current _Swap, which will first call
the tickless kernel code and then call the real __swap().

Jira: ZEP-339
Change-Id: Id778bfcee4f88982c958fcf22d7f04deb4bd572f
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2017-04-27 13:46:26 +00:00
Andrew Boie
73abd32a7d kernel: expose struct k_thread implementation
Historically, space for struct k_thread was always carved out of the
thread's stack region. However, we want more control over where this
data will reside; in memory protection scenarios the stack may only be used
for actual stack data and nothing else.

On some platforms (particularly ARM), including kernel_arch_data.h from
the toplevel kernel.h exposes intractable circular dependency issues.
We create a new per-arch header "kernel_arch_thread.h" with very limited
scope; it only defines the three data structures necessary to instantiate
the arch-specific bits of a struct k_thread.

Change-Id: I3a55b4ed4270512e58cf671f327bb033ad7f4a4f
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-04-26 16:29:06 +00:00
Vincenzo Frascino
dfed8c4874 kernel: Add stack_info to k_thread
This patch adds the stack information to the k_thread data structure.
The information will be set when creating a new thread (_new_thread)
and will be used by the scheduling process.

Change-Id: Ibe79fe92a9ef8bce27bf8616d8e0c878508c267d
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@linaro.org>
2017-04-25 16:02:38 +00:00
Leandro Pereira
ffe74b45fa kernel: Add thread events to kernel event logger
This adds a new event type to the kernel event logger that tracks
thread-related events: being added to the ready queue, pending a
thread, and exiting a thread.

It's the only event type that contains "subevents" and thus has a
non-void parameter in its respective _sys_k_event_logger_*()
function.  Luckily, unlike other events (such as IRQs and thread
switching), these functions are called from platform-agnostic places,
so there's no need to worry about changing the assembly guts.

This is the first patch in a series adding support for better real-time
profiling of Zephyr applications.

Jira: ZEP-1463
Change-Id: I6d63607ba347f7a9cac3d016fef8f5a0a830e267
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-04-25 02:16:36 +00:00
Andrew Boie
cdb94d6425 kernel: add k_panic() and k_oops() APIs
Unlike assertions, these APIs are active at all times. The kernel will
treat these errors in the same way as fatal CPU exceptions. Ultimately,
the policy of what to do with these errors is implemented in
_SysFatalErrorHandler.

If the architecture supports it, a real CPU exception can be triggered
which will provide a complete register dump and PC value when the
problem occurs. This will provide more helpful information than a fake
exception stack frame (_default_esf) passed to the arch-specific exception
handling code.

Issue: ZEP-843
Change-Id: I8f136905c05bb84772e1c5ed53b8e920d24eb6fd
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-04-22 10:31:49 -04:00
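
A small sketch of the intended use; the configuration check itself is
hypothetical:

    #include <kernel.h>

    void apply_config(const void *cfg)
    {
        if (cfg == NULL) {
            /* Unrecoverable for the whole system: treated like a
             * fatal CPU exception, with the final policy decided by
             * _SysFatalErrorHandler.
             */
            k_panic();
        }

        /* k_oops() is similar, but is meant for errors where aborting
         * only the offending thread may be an acceptable outcome.
         */
    }
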
Andrew Boie
e09a04f068 arm: fix exception handling
For exceptions where we are just going to abort the current thread, we
need to exit handler mode properly so that PendSV can run and perform a
context switch. For ARM architecture this means that the fatal error
handling code path can indeed return if we were 1) in handler mode and
2) only wish to abort the current thread.

Fixes a very long-standing bug where a thread that generates an
exception, and should only abort the thread, instead takes down the
entire system.

Issue: ZEP-2052
Change-Id: Ib356a34a6fda2e0f8aff39c4b3270efceb81e54d
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-04-22 01:08:07 +00:00
David B. Kinder
61de8f892b spell: Kconfig help typos: /kernel /misc /subsys
Fix misspellings in Kconfig help text

Change-Id: I6eda081c7b6f38287ace8c0a741e65df92d6817b
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2017-04-22 01:04:56 +00:00
Kumar Gala
96ee45df8d kernel: refactor thread_monitor_init into common code
We do the same thing on all arches right now for thread_monitor_init,
so let's put it in a common place.  This should also fix an issue on
xtensa when the thread monitor is enabled (reference to
_nanokernel.threads).

Change-Id: If2f26c1578aa1f18565a530de4880ae7bd5a0da2
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 20:34:42 +00:00
Kumar Gala
b8823c4efd kernel: Refactor common _new_thread init code
We do a bit of the same stuff on all the arches to set up a new thread.
So let's put that code in a common place to unify it for everyone and
reduce some duplicated code.

Change-Id: Ic04121bfd6846aece16aa7ffd4382bdcdb6136e3
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 20:34:42 +00:00
Kumar Gala
5742a508a2 kernel: cleanup use of naked unsigned in _new_thread
There are a few places where we used a naked unsigned type; let's be
explicit and make it 'unsigned int'.

Change-Id: I33fcbdec4a6a1c0b1a2defb9a5844d282d02d80e
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 20:34:41 +00:00
Kumar Gala
cc334c7273 Convert remaining code to using newly introduced integer sized types
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types.  This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependancies.  We also convert the PRI printf formatters in the arch
code over to normal formatters.

Jira: ZEP-2051

Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 11:38:23 -05:00
Kumar Gala
789081673f Introduce new sized integer typedefs
This is a start to move away from the C99 {u}int{8,16,32,64}_t types to
Zephyr defined u{8,16,32,64}_t and s{8,16,32,64}_t.  This allows Zephyr
to define the sized types in a consistent manner across all the
architectures we support and not conflict with what various compilers
and libc might do with regards to the C99 types.

We introduce <zephyr/types.h> as part of this and have it include
<stdint.h> for now until we transition all the code away from the C99
types.

We go with u{8,16,32,64}_t and s{8,16,32,64}_t as there are some
existing variables defined u8 & u16 as well as to be consistent with
Zephyr naming conventions.

Jira: ZEP-2051

Change-Id: I451fed0623b029d65866622e478225dfab2c0ca8
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-20 16:07:08 +00:00
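
A tiny illustration of the direction described above; the struct and
field names are arbitrary:

    #include <zephyr/types.h>

    struct sample {
        u8_t  flags;      /* instead of uint8_t  */
        u16_t length;     /* instead of uint16_t */
        u32_t timestamp;  /* instead of uint32_t */
        s32_t delta;      /* instead of int32_t  */
    };
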
Anas Nashif
0b1d41d31d kernel: remove mentions of obsolete CONFIG_NANO_TIMERS
Change-Id: I0a2d6caae6d37b45968e61be8eaf7c4ebb6fdc46
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-20 12:27:36 +00:00
Anas Nashif
8df439b40b kernel: rename nanoArchInit->kernel_arch_init
Change-Id: I094665e583f506cc71185cb6b8630046b2d4b2f8
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-19 10:59:35 -05:00
Anas Nashif
af6bf1c9ed kernel: remove legacy semaphore groups support
Change-Id: Ia84ed11de3c88e714c275c42556c1dba2bfea3b6
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-19 10:59:35 -05:00
Anas Nashif
45a7e5d076 kernel: remove legacy.h and MDEF support
Change-Id: I953797f6965354c5b599f4ad91d63901401d2632
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-19 10:59:35 -05:00
Kumar Gala
34a57db844 Revert "kernel: Convert formatter strings to use PRI defines"
This reverts commit 7b9dc107a8.

We revert this as we intend to move away from the {u}int{8,16,32,64}_t
types to our own internal types for sized variables, so we shouldn't
need the PRI macros anymore.

Change-Id: I1d9d797fee47ca266867ae65656c150f8fe2adb2
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-19 10:50:51 -05:00
Anas Nashif
306e15e0a1 kernel: remove legacy kernel support
Change-Id: Iac1e21677d74f81a93cd29d64cce261676ae78a6
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-19 15:48:37 +00:00
Anas Nashif
6a0228abaa kernel: thread: remove legacy support
Change-Id: Idee30557237e613a5cfca93e752f05ebd18a186d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-19 15:48:36 +00:00
Anas Nashif
5e1f709b58 kernel: mailbox: remove legacy support
Change-Id: I218fbec7af4c4e69e4dc41c988f225b558600181
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-19 15:48:36 +00:00
Kumar Gala
7b9dc107a8 kernel: Convert formatter strings to use PRI defines
To allow for various libc implementations (like newlib) in which the way
various {u}int{8,16,32}_t types are defined vary between both libc
implementations and across architectures we need to utilize the PRI
defines.

Change-Id: Ie884fb67015502288152ecbd64c37961a4f538e4
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-17 11:09:36 -05:00
Anas Nashif
dac15b9b71 samples: shell: fix testcase.ini to be more inclusive
Filter was wrong and sample was not being built on any boards. Exclude
platforms that do not support interrupt based UART drivers.

Jira: ZEP-2014
Change-Id: I84a690e7c93fae52335434830b83086019cfd00d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-11 03:14:25 +00:00
Anas Nashif
6ad0420b26 kernel: remove left-over code from object monitoring
This code is non-functional and is a leftover from an old version of
the kernel; it does not work and is covered by other, newer features
in the kernel, for example object tracing.

Jira: ZEP-2013
Change-Id: Id12ad09e2d06186b53cd2f0dd030ac6d37d1229f
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-11 03:14:25 +00:00
Andrew Boie
de38141898 kernel: remove deprecated init levels
Change-Id: Id69ec05d9f3417dcfe5ef7ff170681a0a40f3fe7
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-04-07 17:45:34 +00:00
Amir Kaplan
61b6f5ab7c power_mgmt: Remove deprecated macros and structs
Remove deprecated power management macros, functions and structs that
were deprecated two versions ago, in 1.6.

jira:ZEP-973

Change-Id: I127e482c67e09afea6a2008672661862dbf00c80
Signed-off-by: Amir Kaplan <amir.kaplan@intel.com>
2017-03-31 03:06:17 +00:00
Anas Nashif
8e1dffd192 kernel: disable legacy APIs by default
First step to removing legacy APIs; this will be a wakeup call for
those still using legacy APIs before we completely remove them.

Change-Id: I32db62ff73efaa7eb5ab9ebc4d4fdc4a7c34ae56
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-03-29 16:08:01 -04:00
Florian Vaussard
8fcb780034 kernel: arm: Increase idle stack size to fix corruption by FP_SHARING
When enabling CONFIG_FP_SHARING on ARM, 64 extra bytes are necessary
on the stack of each task in order to save FPU registers S16 to S31.

In the case of the idle stack, the default value of 256 bytes is too
small. As described in ZEP-1470, when the idle task is scheduled out,
floating point registers are saved, which corrupts the stack frame
(especially the saved PC value). When scheduling the idle task, the
restored PC will jump to nowhere, leading to a Usage Fault.

Increase the size of the idle stack by 64 bytes to fix this issue.

JIRA: ZEP-1470

Change-Id: Ib800cd51e5189dda8bf59332db661c21399db3e3
Signed-off-by: Florian Vaussard <florian.vaussard@heig-vd.ch>
2017-03-27 09:05:57 -05:00
Luiz Augusto von Dentz
0dc4dd46d4 lifo: Make use of k_queue as implementation
Once all users of k_lifo migrate to k_queue this should no longer be
needed.

Change-Id: Ib8af40c57bf8feba7b06d6d891cfa57b44faad42
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-02-27 21:20:53 +00:00
Luiz Augusto von Dentz
e5ed88f328 fifo: Make use of k_queue as implementation
This makes k_fifo functions rely on k_queue and port k_poll to use
k_queue directly.

Once all users of k_fifo migrate to k_queue this should no longer be
needed.

Change-Id: Icf16d580f88d11b2cb89e1abd23ae314f43dbd20
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-02-27 21:20:52 +00:00
Luiz Augusto von Dentz
a7ddb87501 kernel: Add k_queue API
This unifies the k_fifo and k_lifo APIs, making them more flexible
regarding where the data elements are inserted.

Change-Id: Icd6e2f62fc8b374c8273bb763409e9e22c40f9f8
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-02-27 21:20:50 +00:00
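
A short sketch of the unified API; the item layout and names are
hypothetical (the first word of each item is reserved for the queue's
internal use, as with k_fifo):

    #include <kernel.h>

    struct msg {
        void *reserved;   /* for use by the queue */
        int payload;
    };

    static struct k_queue msg_q;

    void queue_setup(void)
    {
        k_queue_init(&msg_q);
    }

    void produce(struct msg *normal, struct msg *urgent)
    {
        k_queue_append(&msg_q, normal);    /* fifo-style: at the tail */
        k_queue_prepend(&msg_q, urgent);   /* lifo-style: at the head */
    }

    void consume(void)
    {
        /* 'urgent' is returned before 'normal' */
        struct msg *m = k_queue_get(&msg_q, K_FOREVER);

        (void)m;
    }
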
Anas Nashif
69170173c8 kernel: use k_cycle_get_32 instead of sys_cycle_get_32
Jira: ZEP-1787
Change-Id: I948100e75697dc106a4ba12ce51401673d79fe68
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-02-27 21:10:32 +00:00
Benjamin Walsh
a234fc49b3 kernel/sem: fix coding conventions
Some inconsistent spacing and private types starting with '_'.

Change-Id: I3354b69cc3934717d3b8097cdda98474339c1f32
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:27 +00:00
Benjamin Walsh
2e0bf3a0f8 kernel/sem: fix issue with expired timeouts on group operations
The loop was not tracking the next node in the list correctly.

However, it happened that the fix is way more involved than just fixing
that small issue, due to the way that semaphore group timeouts work.

Instead of handling timeouts one-by-one, we have to handle all timeouts
in a semaphore group as one. To do that, we use the fact that the
timeout of the real thread is always found first in the kernel's
timeout_q, and if it has expired, we do not even look at the timeouts of
the dummy threads.

Change-Id: Iadcfd06f33c6b335efa2592b2c01eeb5ca67afde
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:26 +00:00
Benjamin Walsh
6f4bc80901 kernel/timeout: fix handling expired timeouts in reverve queuing order
Timeouts expiring on the same tick are queued in the timeout_q in
reverse order: as soon as the new timeout finds a timeout expiring on
the same tick or later, it gets prepended to that timeout. This allows
exiting the traversal of the timeout_q as soon as possible, which is
done with interrupts locked, thus reducing interrupt latency.
However, this has the side-effect of handling the timeouts expiring on
the same tick in the reverse order that they are queued.

For example:

    thread_c, prio 4:

        uint32_t uptime = k_uptime_get_32();

        while(uptime == k_uptime_get_32()); /* align on tick */

        k_timer_start(&timer_a, 5, 0);
        k_timer_start(&timer_b, 5, 0);

    thread_a, prio 5:

        k_timer_status_sync(&timer_a);
        printk("thread_a got timer_a\n");

    thread_b, prio 5:

        k_timer_status_sync(&timer_b);
        printk("thread_b got timer_b\n");

One could "reasonably" expect thread_a to run first, since both threads
have the same prio, and timer_a was started before timer_b and thus
inserted in the timeout_q first (time-wise). However, thread_b
will run before thread_a, since timer_b's timeout is prepended to
timer_a's.

This patch keeps the reversing of the order when adding timeouts in the
timeout_q, thus preserving the same interrupt latency; however, when
dequeuing them and adding them to the expired queue, we now reverse that
order _again_, causing the timeouts to be handled in the expected order.

Change-Id: Id83045f63e2be88809d6089b8ae62034e4e3facb
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:25 +00:00
Benjamin Walsh
5d35dba73d kernel/timeouts: add description of timeouts queued on the same tick
Change-Id: I24ba889e3174b903ccea5309ad45e2b4d1755fe1
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:25 +00:00
Benjamin Walsh
cf93743f50 kernel/sched: refactor _get_first_thread_to_unpend()
Modify _get_first_thread_to_unpend() so that it does not remove the
thread from the wait queue. Rename it to _find_first_thread_to_unpend()
to match the new behaviour.

This will be needed to fix a semaphore group bug.

Change-Id: I1b7531c3beecf3b6a86ecf88a93a02449edd0767
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:24 +00:00
Benjamin Walsh
c1405a7d6b kernel/sched: add _is_thread_dummy()
Rather than explicitly checking the thread state bit.

Change-Id: Ic78427d9847e627a0e91d0147d3b6164450597f6
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:24 +00:00
Benjamin Walsh
c88d0fb82f kernel: fix typo
Change-Id: Ic675015b8830c75d976e21c711dd2a872b5de283
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:22 +00:00
Benjamin Walsh
8d7c274e55 kernel/sched: protect thread sched_lock with compiler barriers
This has not bitten us yet, but it was a ticking timebomb.

This is similar to the issue that was found with irq_lock/irq_unlock
implementations on several architectures. Having a volatile variable is
not the way to force the sched_lock variable to be
incremented/decremented around the accesses to data it protects.
Instead, a compiler barrier must prevent the compiler from reordering
the memory accesses around setting of sched_lock. Needed in the inline
implementations _sched_lock()/_sched_unlock_no_reschedule(), which
resolve to simple decrement/increment of the per-thread sched_lock
variable.

Change-Id: I06f5b3524889f193efe69caa947118404b1be0b5
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:21 +00:00
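
A generic illustration of the pattern (GCC-style, not the actual kernel
code); the point is that the barrier, not 'volatile', keeps the
compiler from moving the protected accesses across the lock counter
manipulation:

    #define my_compiler_barrier() __asm__ __volatile__("" ::: "memory")

    static int lock_count;
    static int protected_data;

    static inline void my_lock(void)
    {
        lock_count++;
        my_compiler_barrier();   /* accesses below stay below */
    }

    static inline void my_unlock(void)
    {
        my_compiler_barrier();   /* accesses above stay above */
        lock_count--;
    }

    void update(void)
    {
        my_lock();
        protected_data++;
        my_unlock();
    }
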
Mazen NEIFER
5718676aad Xtensa port: Increased idle thread stack size to avoid stack overflow.
Xtensa port uses more stack than others. This was discussed with the team and
we agreed that this can be accepted for the first beta.
We will investigate this later to see how to avoid allocating coproc registers
for the system threads in order to reduce the stack overhead. However this
will not be before the port is considered stable.

Change-Id: Icd5b2b0ab68d0906b5408f35f081b100acabc010
Signed-off-by: Mazen NEIFER <mazen@nestwave.com>
2017-02-13 08:04:27 -08:00
Andy Gross
bb063164aa dts: Add support for Device Tree
This patch adds support for using device tree configuration files for
configuring ARM platforms.

In this patch, only the FLASH_SIZE, SRAM_SIZE, NUM_IRQS, and
NUM_IRQ_PRIO_BITS were removed from the Kconfig options.  A minimal set
of options were removed so that it would be easier to work through the
plumbing of the build system.

It should be noted that the host system must provide access to the
device tree compiler (DTC).  The DTC can usually be installed on host
systems through distribution packages or by downloading and compiling
from https://git.kernel.org/pub/scm/utils/dtc/dtc.git

This patch also requires the Python yaml package.

This change implements parts of each of the following Jira:
ZEP-1304
ZEP-1305
ZEP-1306
ZEP-1307
ZEP-1589

Change-Id: If1403801e19d9d85031401b55308935dadf8c9d8
Signed-off-by: Andy Gross <andy.gross@linaro.org>
2017-02-10 18:13:58 +00:00
Luiz Augusto von Dentz
41921dd5b9 kernel: Use SYS_DLIST_FOR_EACH_CONTAINER
Change-Id: I4cbb12af487217cfcb78969ec88a8e4c06eca27f
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-02-10 16:16:14 +00:00
Benjamin Walsh
3c1ab5d338 kernel/poll: fix signal.signaled not being set when k_poll() waits
Change-Id: I73d906e4cb4a3d359e1ec193db933a95b4739611
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-09 23:54:27 +00:00
Benjamin Walsh
2014ff162e kernel/poll: fix registrations that were not always cleared
Poll events were getting registered even when polling conditions had
already been met, but events with conditions met did not register and
did not increment the number of events registered. This caused a
possible discrepancy between the number of events registered and the
position of the last event registered in the events array.

As soon as one event condition is met, the next ones in the array should
not get registered even if their condition is not met. This is what the
code does now.

Change-Id: Ibcc3b135ec9d3cf463beb9da3f641fec962b34bf
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-09 23:54:26 +00:00
Benjamin Walsh
47503e30b2 kernel/poll: refactor is_polling()
It's always called for the current thread.

Change-Id: I6588ae27505e961df5cf82463ca9be90a539685b
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-09 23:54:25 +00:00
Ramesh Thomas
444ecafeee kernel: Remove redundant TICKLESS_IDLE_SUPPORTED option
This flag is no longer necessary and TICKLESS_IDLE will be
enabled by default if SYS_POWER_MANAGEMENT is enabled.

Jira: ZEP-1325
Change-Id: Ic6cd4b8dc0a17c6a413cabf6509b215a4558318d
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2017-02-08 13:02:34 +00:00
Mazen NEIFER
e2bbad9600 kernel: init: use C implementation for STACK_CANARY_INIT
Due to a limitation on XCC, the inline assembly does not
produce the expected instructions. This results in a wrong
code sequence. On the other hand, plain C code works well.
The note about compilers seems not to be an issue on any of
our currently supported compilers.

Change-Id: I9d2ab0fbf8a48d9dad51da3fd54453f205516d74
Signed-off-by: Mazen NEIFER <mazen@nestwave.com>
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-02-07 22:18:21 +00:00
Anas Nashif
4fb12ae988 kernel: k_timer_stop: remove assert when called from an ISR
Change-Id: I596e0323a7aafc9d7f3834a8d1b655ad2540d4ef
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-02-04 19:25:11 +00:00
Benjamin Walsh
a304f16773 kernel/poll: add k_poll_signal_init() runtime init
Change-Id: Id5a27f7d25e26a1a71ef87000d35a18777210c19
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-03 13:54:01 +00:00
Benjamin Walsh
b017986347 kernel/poll: add missing poll_event runtime init
It was in the static initializers, but was missing from the object
runtime init functions.

Change-Id: I10d519760eabdbe640a19cc5cfa9241c1356b070
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-03 13:54:00 +00:00
Benjamin Walsh
969d4a7ff1 kernel/poll: add user tag to struct k_poll_event
This allows users to attach a way of finding out what the event and
the objects are used for without looking at the object itself, or to
tag a group of objects that belong together.

The runtime init function _does not_ take a tag, so there is no
runtime hit when it is not needed. The static initializer macro _does_
take the tag, so that it does not have to be set at runtime when it is
needed, again avoiding a runtime hit.

Change-Id: I89a36c6f969ff952f9d1673b1bb5136e407535c6
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-03 13:53:59 +00:00
Jithu Joseph
d33c42a19d kernel: thread: Fix legacy symbol mixup in fp path
When CONFIG_FP_SHARING is enabled without CONFIG_LEGACY,
thread.c was referencing symbols like K_TASK_GROUP_FPU,
which are defined in legacy.h.

Change-Id: I4bb1723f91c3e3586c5d1bf05cf23a1c0d3d5aac
Signed-off-by: Jithu Joseph <jithu.joseph@intel.com>
2017-02-03 03:20:31 +00:00
Johan Hedberg
1387effa86 kernel: Fix k_poll support for k_fifo_put_list
With the recently added k_poll feature, k_fifo_put_list was forgotten
about. Add the necessary code to wake up a k_poll call when
k_fifo_put_list is called.

Change-Id: Ib9baef5ee2bd00620e2eea5afdd81accc4518bd5
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2017-02-02 15:12:36 +00:00
Benjamin Walsh
acc68c1e59 kernel: add k_poll() API
k_poll() is similar in spirit to the POSIX poll() API in that it allows
a single thread to monitor multiple events without actively polling
them, but rather by pending until one or more become ready. Such an
event can be a direct event or a kernel object (currently only
semaphores and fifos).

When a kernel object being polled on becomes ready, it is not "given" to
the poller: the poller must then acquire it via the regular API for the
object (e.g. k_sem_take()). Only one thread can poll on a particular
object at one time. These restrictions mean that k_poll() is most
effective when a single thread monitors multiple events that are not
subject to contention: for example, being the sole reader on multiple
fifos, or the only thread being signalled by multiple semaphores, or a
combination of both.

Change-Id: I7035a9baf4aa016fb87afc5f5c0f5f8cb216480f
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-02 00:30:00 +00:00
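
A rough sketch of the usage pattern described above, for a sole reader
pending on one semaphore and one fifo. The object names (my_sem,
my_fifo) and the handling code are placeholders, and the exact
initializer, type and state constant names should be checked against
kernel.h at this revision:

    #include <kernel.h>

    K_SEM_DEFINE(my_sem, 0, 1);     /* placeholder objects for the sketch */
    K_FIFO_DEFINE(my_fifo);

    void poller(void)
    {
        struct k_poll_event events[2];

        k_poll_event_init(&events[0], K_POLL_TYPE_SEM_AVAILABLE,
                          K_POLL_MODE_NOTIFY_ONLY, &my_sem);
        k_poll_event_init(&events[1], K_POLL_TYPE_FIFO_DATA_AVAILABLE,
                          K_POLL_MODE_NOTIFY_ONLY, &my_fifo);

        while (1) {
            /* pend until at least one of the two events is ready */
            k_poll(events, 2, K_FOREVER);

            if (events[0].state == K_POLL_STATE_SEM_AVAILABLE) {
                /* the object is not "given" to the poller: acquire it
                 * through the object's own API */
                k_sem_take(&my_sem, K_NO_WAIT);
            }
            if (events[1].state == K_POLL_STATE_FIFO_DATA_AVAILABLE) {
                void *item = k_fifo_get(&my_fifo, K_NO_WAIT);

                (void)item;     /* ... process the item ... */
            }

            /* reset the states before the next iteration */
            events[0].state = K_POLL_STATE_NOT_READY;
            events[1].state = K_POLL_STATE_NOT_READY;
        }
    }
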
Benjamin Walsh
fcdb0fd6ea kernel: add _WAIT_Q_INIT()
Dissociate wait queue initialization from doubly-linked lists so that
the underlying implementation can be abstracted.

Change-Id: Id7544c6ac506643437f9c4f0ae97e7eecab8d06d
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-02 00:30:00 +00:00
Benjamin Walsh
0de9487351 kernel: add _THREAD_POLLING thread state
Will be needed for k_poll() API.

Change-Id: I0ebe4be5a9c56df2ebb8496dc49c894e982e6008
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-02 00:29:59 +00:00
Benjamin Walsh
0a49ba38b8 kernel: add _is_thread_state_set()
Change-Id: I2b6a51c23997afeb5252a3632172156ba96252ce
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-02 00:29:58 +00:00
Benjamin Walsh
ed240f2796 kernel/arch: streamline thread user options
The K_<thread option> flags/options available to users were hidden in
the kernel-private header files: move them to include/kernel.h to make
them public.

Also, to avoid any future confusion, rename the k_thread.execution_flags
field to user_options.

Change-Id: I65a6fd5e9e78d4ccf783f3304b607a1e6956aeac
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:50 +00:00
Benjamin Walsh
867f8ee371 kernel: move K_ESSENTIAL from thread_state to execution_flags
The execution_flags will store the user-facing states of a thread.

This also fixes a bug where K_ESSENTIAL was already assigned to
execution_flags via the options field of
k_thread_spawn()/K_THREAD_DEFINE().

Change-Id: I91ad7a62b5d180e09eead8985ff519809959ecf2
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:49 +00:00
Benjamin Walsh
a8978aba8f kernel: rename thread states symbols
They are not part of the API, so rename them from K_<state> to
_THREAD_<state>.

Change-Id: Iaebb7d3083b80b9769bee5616e0f96ed2abc5c56
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:49 +00:00
Benjamin Walsh
3f3f4d94d5 kernel: remove K_STATIC
Unused.

Reuse its bit for K_FP_REGS to keep the used bits as low as possible.

Change-Id: I5998801ef34156271d4f66d1948a05e0b2ce58f7
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:48 +00:00
Benjamin Walsh
dfa7ce5c94 kernel: include kernel.h in kernel_structs.h in asm files
This will be needed for some thread user options that will move to
kernel.h since they are part of the user API.

Change-Id: I46e302b6cafcdddbad3458134b98feb5b8d45d9b
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:48 +00:00
Benjamin Walsh
a5d8461d74 kernel: move volatile from k_thread.prio to k_thread.sched_locked
When prio and sched_locked were moved into a struct together to create a
union with the combined preempt field, the volatile qualifier moved from
sched_locked to prio by mistake.

Change-Id: I5a8e01324f14e77e3d7162c12515471826023633
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:47 +00:00
David B. Kinder
ac74d8b652 license: Replace Apache boilerplate with SPDX tag
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.

Also updated doc/porting/application.rst, which had a dependency on
line numbers in a literal include.

Manually updated subsys/logging/sys_log.c, which had a malformed
header in the original file.  Also cleaned up several cases that
already had an SPDX tag, where we either ended up with a duplicate or
missed the update.

Jira: ZEP-1457

Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-01-19 03:50:58 +00:00
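
For reference, the replacement described above turns the multi-paragraph
Apache 2.0 boilerplate into a one-line identifier in each file header;
the copyright line below is only illustrative:

    /*
     * Copyright (c) 2016 Intel Corporation
     *
     * SPDX-License-Identifier: Apache-2.0
     */
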
Anas Nashif
2bffa3067b kernel: use __ticks_to_ms directly
_ticks_to_ms is defined in legacy only.

Change-Id: I543d88b6edea1832a3020161d8b87dad5111de2c
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-01-17 19:03:39 +00:00
Anas Nashif
9bb3934273 kernel: mailbox: legacy calls depend on CONFIG_LEGACY_KERNEL
Change-Id: I9a51af1731e64e963f368dd649fcc2cebffabd2f
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-01-17 19:03:38 +00:00
Anas Nashif
e38f5df522 kernel: make legacy calls depend on CONFIG_LEGACY_KERNEL
Change-Id: Id1ba4bf7cd1fafca01115ebf2913d9f3729bbff3
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-01-17 19:03:37 +00:00
Anas Nashif
2b4c1727ce kernel: build legacy timer only conditionally
Make it depend on CONFIG_LEGACY_KERNEL being enabled.

Change-Id: Id5d3cd35a52d38bf7476ea8e51b71e2c687f0923
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-01-17 19:03:36 +00:00
Benjamin Walsh
2f280416e6 kernel: fix total number of coop prios in coop-only mode
The idle priority was not accounted for.

With this change, the philosophers demo runs in coop-only mode.

Change-Id: I23db33687bcf3b2107d5fc07977143730f62e476
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-17 12:17:27 +00:00
Benjamin Walsh
4bfa0055b7 kernel/mutex: prevent priority inheritance from lowering owner's prio
If the system's priority inheritance priority ceiling is not the same as
the highest priority in the system, it was possible for a thread owning
the mutex to get its priority lowered instead of left unchanged.

Change-Id: Ic06a1c4a66322c2949b2ba2f53efa03200fb1fc1
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-17 12:17:26 +00:00
Benjamin Walsh
e669559010 kernel: fix main/work_q prios in coop/preempt-only modes
-1 is reserved for the idle thread in coop-only mode and -1 does not
exist as a priority in preempt-only mode.

With this change, the philosophers demo runs in preempt-only mode.

Change-Id: Id15a6eafc7582966deaf0db9ed6960b5da74be33
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-17 12:17:26 +00:00
Benjamin Walsh
e4e98f9d7b kernel: add user data API to timers
Similar to what was available with nano timers in the original kernel,
allow a user to associate opaque data with a timer.

Fix for ZEP-1558.

Change-Id: Ib8cf998b47988da27eba4ee5cd2658f90366b1e4
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-14 13:06:00 +00:00
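
A minimal sketch of the new user data accessors, assuming they are named
k_timer_user_data_set()/k_timer_user_data_get(); my_ctx, expiry_fn and
the 100 ms period are placeholders:

    #include <kernel.h>

    struct my_ctx {
        int expirations;
    };

    static struct my_ctx ctx;

    static void expiry_fn(struct k_timer *timer)
    {
        /* retrieve the opaque pointer associated with this timer */
        struct my_ctx *c = k_timer_user_data_get(timer);

        c->expirations++;
    }

    K_TIMER_DEFINE(my_timer, expiry_fn, NULL);

    void start_timer(void)
    {
        k_timer_user_data_set(&my_timer, &ctx);
        k_timer_start(&my_timer, 100, 100);   /* 100 ms duration and period */
    }
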
Jean-Paul Etienne
c76abeeae5 kernel: updated default IDLE_STACK_SIZE to 512 for RISCV32
The default 256-byte stack size for the idle task is not enough, as
the stack grows/shrinks in multiples of 16 bytes on the RISC-V
architecture.

Increase it to 512 bytes for the RISCV32 architecture.

Change-Id: I8321c48e4c1a877b252ba5561f3cbdd1fe475fc7
Signed-off-by: Jean-Paul Etienne <fractalclone@gmail.com>
2017-01-13 19:54:35 +00:00
Jean-Paul Etienne
4c6ab7cfcd unified: added _MOVE_INSTR for RISCV32 architecture
The store instruction has a different syntax in RISC-V compared to
the other architectures. Hence, for each architecture, specify the
entire store instruction within the _MOVE_INSTR variable.

Change-Id: Iedc421e73411876abd8b698f7d4b46081b473d79
Signed-off-by: Jean-Paul Etienne <fractalclone@gmail.com>
2017-01-13 19:53:57 +00:00
Kumar Gala
a3629e838c kernel: have boot banner depend on console existing
For some of our samples/tests we disable all console support yet enable
BOOT_BANNER in tests/include/test.config; this can generate warnings
like:

warning: (BOOT_BANNER && BLUETOOTH_DEBUG_LOG && BLUETOOTH_DEBUG_MONITOR)
selects PRINTK which has unmet direct dependencies (CONSOLE_HAS_DRIVER)

So having BOOT_BANNER depend on CONSOLE_HAS_DRIVER cleans things up.

Change-Id: Ia6a6348fc08b0808ea6eaedb8c8833507f82c702
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-01-12 01:00:14 +00:00
Carles Cufi
cb0cf9f5f4 kernel: profiling: Expose an API call to analyze call stacks
The main, idle, interrupt and workqueue call stack definitions are not
available for applications to call stack_analyze() on, but they often
need to be measured empirically to tune their sizes for particular
applications and use cases.

This exposes a new k_call_stacks_analyze() API call that allows the
application to measure the used call stack space for the four
kernel-defined call stacks. Additionally, for the ARC architecture, the
FIRQ stack is also profiled.

Change-id: I0cde149c7366cb6c4bbe8f9b0ab1cc5b56a36ed9
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2017-01-11 15:19:18 +00:00
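
Assuming the new call takes no arguments and reports through the
console, using it from application code could look roughly like this:

    #include <kernel.h>

    /* Dump used vs. total space for the main, idle, interrupt and system
     * workqueue call stacks (plus the FIRQ stack on ARC), e.g. from a
     * debug build while tuning stack sizes. */
    void report_kernel_stack_usage(void)
    {
        k_call_stacks_analyze();
    }
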
Benjamin Walsh
168695c7ef kernel/arch: inspect prio/sched_locked together for preemptibility
These two fields in the thread structure control the preemptibility of a
thread.

sched_locked is decremented when the scheduler gets locked, which means
that the scheduler is locked for values 0xff to 0x01, since it can be
locked recursively. A thread is coop if its priority is negative, thus
if the prio field value is 0x80 to 0xff when looked at as an unsigned
value.

By putting them end-to-end, this means that a thread is non-preemptible
if the bundled value is greater than or equal to 0x0080. This is the
only thing the interrupt exit code has to check to decide whether to
attempt a reschedule.

Change-Id: I902d36c14859d0d7a951a6aa1bea164613821aca
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:25 +00:00
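
An illustrative sketch of the end-to-end layout described above (not the
kernel's actual declarations); the field order shown is for a
little-endian target and would be swapped on big-endian so that the
combined value keeps the same meaning:

    #include <stdint.h>

    union preempt_info {
        struct {
            int8_t prio;           /* negative (0x80..0xff unsigned) => coop */
            uint8_t sched_locked;  /* 0xff..0x01 => scheduler locked */
        };
        uint16_t preempt;          /* both fields read back as a single value */
    };

    /* Interrupt exit only needs this one comparison to decide whether a
     * reschedule attempt is even worth doing. */
    static inline int is_preemptible(const union preempt_info *p)
    {
        return p->preempt < 0x0080;
    }
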
Benjamin Walsh
f955476559 kernel/arch: optimize memory use of some thread fields
Some thread fields were 32 bits wide, when they are not even close to
using that full range of values. They are instead changed to 8-bit fields.

- prio can fit in one byte, limiting the priority range to -128 to 127

- recursive scheduler locking can be limited to 255; a rollover most
  probably results from a logic error

- flags are split into execution flags and thread states; 8 bits is
  currently enough for each of them, with at worst two states and four
  flags to spare on x86 (on other archs, there are six flags to spare)

Doing this saves 8 bytes per stack. It also sets up an upcoming
enhancement to the check of whether the current thread is preemptible
on interrupt exit.

Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:24 +00:00
Anas Nashif
70a2e138b7 kernel: add LEGACY_KERNEL option
Add a global option for legacy configurations and enable it by default
for backward compatibility. Disable the option in tests, but keep it
for legacy samples and tests.

Jira: ZEP-964
Change-Id: I0831e2aa74d438b1ac74eb762186cb220a504beb
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-01-09 19:42:13 +00:00
Anas Nashif
f6e039062a kernel: remove dependency on CONFIG_NANO_TIMERS/TIMEOUTS
Remove legacy option and use SYS_CLOCK_EXISTS where appropriate.

Change-Id: I3d524ea2776e638683f0196c0cc342359d5d810f
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-01-08 18:09:52 +00:00
Benjamin Walsh
66b99f1486 kernel: add _timeout_q dump before and after adding timeout
Kernel debugging aid.

Change-Id: I852ba2f626f133d943be2ecac41354fecca478d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:27 +00:00
Benjamin Walsh
99eef25815 kernel: do not use sys_dlist_insert_at() in _add_timeout()
Similar to _pend_queue, it's more efficient to do the logic inline.

Change-Id: I68ac4fbc26c97b6ec9322caef98504ff6ccc8727
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:26 +00:00
Benjamin Walsh
b8c2160a2b kernel: do not use sys_dlist_insert_at() in _pend_thread()
It was calling a function on every iteration; it's more efficient to
just do the logic inline.

Change-Id: I166e377d4ffb3056749fd625cb789173030904ac
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:26 +00:00
Benjamin Walsh
d779f3d240 kernel/arch: streamline thread flag bits used
Use least significant bits for common flags and high bits for
arch-specific ones.

Change-Id: I982719de4a24d3588c19a0d30bbe7a27d9a99f13
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:24 +00:00
Benjamin Walsh
e6a69cae54 kernel/arch: reverse polarity on sched_locked
This will allow for an enhancement to the check of whether the thread
is preemptible when exiting an interrupt.

Change-Id: If93ccd1916eacb5e02a4d15b259fb74f9800d6f4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:24 +00:00
Benjamin Walsh
04ed860c68 kernel: make _thread.sched_locked a non-atomic operator variable
Not needed, since only the thread itself can modify its own
sched_locked count.

Change-Id: I3d3d8be548d2b24ca14f51637cc58bda66f8b9ee
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:23 +00:00
Anas Nashif
fad7e2dd8d logging: move event_logger to subsys/logging
Jira: ZEP-1337
Change-Id: If1690e19a882cf53caaa3418ccabeb49c783f63d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-25 14:34:43 -05:00
Benjamin Walsh
6209218f40 kernel: optimize ms-to-ticks for certain tick frequencies
Some tick frequencies lend themselves to optimized conversions from ms
to ticks and vice-versa.

- 1000Hz which does not need any conversion
- 500Hz, 250Hz, 125Hz, where the division/multiplication is a straight
  shift since 1000 divided by these frequencies is a power of two.

In addition, some more generally used values are made to use optimized
conversion equations rather than the generic one that uses 64-bit math,
and often results in calling compiler intrinsics.

These values are: 100Hz, 50Hz, 25Hz, 20Hz, 10Hz, 1Hz (the last one used
in some testing).

Avoiding the 64-bit math intrinsics has the additional benefit, on top
of the increased performance, of using a significantly lower amount
of stack space: 52 bytes on ARM Cortex-M and 80 bytes on x86.

Change-Id: I080eb338a2637d6b1c6838c119af1a9fa37fe869
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-21 19:50:07 +00:00
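
A sketch of the idea only (the kernel's real conversion macros differ):
when 1000 divided by the tick rate is a power of two, the conversion
collapses to a shift, while other rates fall back to the generic 64-bit
path; the 250 Hz figure below is just an example:

    #include <stdint.h>

    #define MS_PER_TICK_SHIFT 2   /* 250 Hz: 1000 / 250 = 4 ms per tick = 1 << 2 */

    static inline int32_t ms_to_ticks_250hz(int32_t ms)
    {
        return ms >> MS_PER_TICK_SHIFT;      /* divide by 4, no 64-bit helper */
    }

    static inline int32_t ticks_to_ms_250hz(int32_t ticks)
    {
        return ticks << MS_PER_TICK_SHIFT;   /* multiply by 4 */
    }

    /* Generic fallback for arbitrary rates: this is the 64-bit math whose
     * compiler intrinsics cost the extra stack space mentioned above. */
    static inline int32_t ms_to_ticks_generic(int32_t ms, int32_t ticks_per_sec)
    {
        return (int32_t)(((int64_t)ms * ticks_per_sec) / 1000);
    }
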
Benjamin Walsh
eec37e6752 kernel: add flag that tells the system is handling timeouts
This limits the execution contexts that will go over the loop in
_unpend_first_thread() to only ISRs of very high priority that are
preempting the system clock timer ISR, and only during the time it is
handling timeouts.

Change-Id: Iaf0500d28a2de5e077c9cf9861a5a70244127d58
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-21 19:50:05 +00:00
Anas Nashif
dc3d73bf58 kernel: fix all nanokernel usage in comments
Also include kernel.h instead of nanokernel.h

Change-Id: I65dc5e31b5409b809397296817e2d5e7adf28892
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-21 18:45:03 +00:00
Anas Nashif
3d8e86c12c drivers: eliminate nano/micro kernel usage
Jira: ZEP-1415

Change-Id: I4a009ff57edb799750175aef574a865589f96c14
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-21 18:45:02 +00:00
Anas Nashif
2f203c2f92 tracing: rename CONFIG_DEBUG_TRACING_KERNEL_OBJECTS
Use a short name for this option CONFIG_OBJECT_TRACING.

Change-Id: Id27de7ef9ca299492b6b7d2324d9f5bcf8059a31
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00
Anas Nashif
d687a95611 kernel: move kernel code to kernel/ directly
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.

Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00
Anas Nashif
9463dc0b8f kernel: merge kernel Kconfigs into one
Reorganise and clean up the kernel Kconfig options, grouping options of
the same area under menus to improve readability and give a better
structure when using menuconfig.

Change-Id: Ic6b39730297861367abd345ede35e41c046c099d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:43 +00:00
Anas Nashif
40b7183326 kernel: fixed description of THREAD_CUSTOM_DATA
Change-Id: I63ebfc6b7cf869d7a00ccbe4f20eca8060edaf43
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:42 +00:00
Anas Nashif
569f0b4105 debug: move debug features from misc to subsys/debug
Change-Id: I446be0202325cf3cead7ce3024ca2047e3f7660d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:40 +00:00
Anas Nashif
ed116ace6d kernel: kconfig: move power management options out
Change-Id: I5d7068ca7a5793bb3499f2bf2dc1abc4e337313e
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:37 +00:00
Anas Nashif
666afe5923 kernel: kconfig: move event logger options into file
Change-Id: I1e80375df583c5a5b6f04b216b54ed5b786e4655
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:37 +00:00
Anas Nashif
bd10845996 kernel: kconfig: replace task/fiber with threads
Change-Id: I6d44cad8b2cf195137f04808167614390ee2ec55
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:36 +00:00
Anas Nashif
59a7de8ddf kernel: Isolate logger options
Move those into a separate Kconfig file and include them instead.

Change-Id: Ifa25d6ec92937080ad5970af7ca5c3f07ddec961
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:35 +00:00
Anas Nashif
cfbe9b05a1 kernel: rename NANOKERNEL_TICKLESS_IDLE_SUPPORTED
Rename NANOKERNEL_TICKLESS_IDLE_SUPPORTED to
TICKLESS_IDLE_SUPPORTED and remove nanokernel occurrences in Kconfig
files.

Make TICKLESS_IDLE depend on hardware that supports it.

Change-Id: I6a2e4fb0f7cf4b45475b48e71823ea089ee98759
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:35 +00:00
Anas Nashif
cb888e6805 kernel: remove nano/micro wording and usage
Also remove some old cflags referencing directories that no longer
exist, and replace references to legacy APIs in the doxygen
documentation of various functions.

Change-Id: I8fce3d1fe0f4defc44e6eb0ae09a4863e33a39db
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:03 +00:00
Benjamin Walsh
1f2a5791bc kernel: add missing ___kernel_t_arch_OFFSET
Change-Id: I9913a1734f00dfb24f41214942230c4a127aa1a8
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-19 19:10:17 +00:00
Benjamin Walsh
cef368f578 kernel/arch: rename ARCH_HAS_NANO_FIBER_ABORT to ARCH_HAS_THREAD_ABORT
Also remove the now-obsolete ARCH_HAS_TASK_ABORT.

ARC does not need these options either.

Change-Id: Ie52d63178a367ce12b911dacfe2d389f4f75ed2d
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-17 22:44:40 +00:00
Benjamin Walsh
096d8e9af5 kernel: fix warnings when CONFIG_MULTITHREADING=n
Change-Id: I57c6a225c3eece9e2d4942bacdfcb097f2edaf42
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-17 16:54:45 +00:00
Benjamin Walsh
e0ca2109a6 kernel: initialize system work queue after kernel is up
The system work queue spawns a coop thread to handle the work items. If
it is spawned before the kernel is up and the initialization dummy
thread's priority is lower, there will be a context switch into the
system work queue's thread at that time, before the kernel is ready to
handle this.

Change-Id: I879659ab58231c5a5cfaa34f2f65c2eccab99142
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-17 16:24:34 +00:00
Anas Nashif
f11fe9eca5 kernel: set CONFIG_MDEF by default
Legacy applications still need this; otherwise kernel objects are not
configured correctly. This will be removed later.

Change-Id: I22df10e4adcc11f035f9813bea8c93dd1a560a1d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-17 10:25:20 -05:00
Benjamin Walsh
b12a8e0914 kernel: introduce single-threaded kernel
For very constrained systems, like bootloaders.

Only the main thread is available, so a main() function must be
provided. Kernel objects that involve pending will not behave as
expected, since the main thread cannot pend: it is the only thread in
the system. Usage of such objects should be limited to K_NO_WAIT as the
timeout parameter, effectively polling on the object.

Change-Id: Iae0261daa98bff388dc482797cde69f94e2e95cc
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:39 -05:00
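
A minimal sketch of the polling style this implies, assuming a semaphore
given from an ISR; the names are placeholders and the k_cpu_idle() call
is the one added in the commit further down:

    #include <kernel.h>

    K_SEM_DEFINE(data_ready, 0, 1);     /* given from an ISR, for example */

    void main(void)
    {
        while (1) {
            /* with CONFIG_MULTITHREADING=n, main() can never pend, so
             * objects are used with K_NO_WAIT, i.e. polled */
            if (k_sem_take(&data_ready, K_NO_WAIT) == 0) {
                /* ... handle the event ... */
            }

            k_cpu_idle();               /* sleep until the next interrupt */
        }
    }
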
Benjamin Walsh
a2098393fa kernel: fix dummy init thread prio in preempt-only configurations
A thread cannot have a coop priority in this case. It turns out a
priority is not needed when a thread is not inserted in the ready queue,
which is the case with the dummy thread.

The comment was also out-of-date, since it referred to a nanokernel
concept.

Change-Id: Id117501164bd72383d53f3df13030cf95dadc38b
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:38 -05:00
Benjamin Walsh
8e4a534ea1 kernel: enable and optimize coop-only configurations
Some kernel operations, like scheduler locking, can be optimized out,
since coop threads lock the scheduler by their very nature. Also, the
interrupt exit path for all architectures does not have to do any
rescheduling, again by the nature of non-preemptible threads.

Change-Id: I270e926df3ce46e11d77270330f2f4b463971763
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:38 -05:00
Benjamin Walsh
c8cecca192 kernel: add CONFIG_PREEMPT_ENABLED and CONFIG_COOP_ENABLED
Enabled when CONFIG_NUM_PREEMPT_PRIORITIES != 0 and
CONFIG_NUM_COOP_PRIORITIES != 0, respectively.

Change-Id: Ic791518429d9d8ad8127f67087f7927bffeabe44
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:38 -05:00
Benjamin Walsh
c3a2bbba16 kernel: add k_cpu_idle/k_cpu_atomic_idle()
nano_cpu_idle/nano_cpu_atomic_idle were not ported to the unified
kernel, and only the old APIs were available. There was no real impact
since, in the unified kernel, only the idle thread should really be
doing power management. However, with a single-threaded kernel, these
functions can be useful again.

The kernel internals now make use of these APIs instead of the legacy
ones.

Change-Id: Ie8a6396ba378d3ddda27b8dd32fa4711bf53eb36
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:38 -05:00
Benjamin Walsh
b889fa8b20 kernel: enhance realtime-ness when handling timeouts
The number of timeouts that expire on a given tick is arbitrary. When
handling them, interrupts were locked, which prevented higher-priority
interrupts from preempting the system clock timer handler.

Instead of looping on the list of timeouts, which needs interrupts being
locked since it can be manipulated at interrupt level, timeouts are
dequeued one by one, unlocking interrupts between each, and put on a
local 'expired' queue that is drained subsequently, with interrupts
unlocked. This scheme uses the fact that no timeout can be prepended
onto the timeout queue while expired timeouts are getting removed from
it, since adding a timeout of 0 is prohibited.

Timer handlers now run with interrupts unlocked: the previous behaviour
added potentially horrible non-determinism to the handling of timeouts.

Change-Id: I709085134029ea2ad73e167dc915b956114e14c2
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2016-12-15 16:17:22 -05:00
Benjamin Walsh
5596f78c08 kernel: fix typo
Change-Id: I5a9e53100dcac9b78cae655c3c68444357832094
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 15:50:02 -05:00
Benjamin Walsh
6ca6c28dd3 kernel/timers: move tick computation out of irq_lock block
These tick computations can take a significant amount of time, and
there is no reason to do them with interrupts locked.

Change-Id: I2d8803ec6025b827e9450fa493084bbf8be98bad
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 15:50:02 -05:00
Benjamin Walsh
d211a52fc0 kernel: add defines for delta_ticks_from_prev special values
Use _INACTIVE instead of hardcoding -1.

_EXPIRED is defined as -2 and will be used for an improvement so that
interrupts are not locked for a non-deterministic amount of time while
handling expired timeouts.

_abort_timeout/_abort_thread_timeout return _INACTIVE instead of -1 if
the timeout has already been disabled.

Change-Id: If99226ff316a62c27b2a2e4e874388c3c44a8aeb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 15:50:02 -05:00
Benjamin Walsh
88b3691415 kernel/arch: enhance the "ready thread" cache
The way the ready thread cache was implemented caused it to not always
be "hot", i.e. there could be some misses, which happened when the
cached thread was taken out of the ready queue. When that happened, it
was not replaced immediately, since doing so could mean that the
replacement might not run because the flow could be interrupted and
another thread could take its place. This was the more conservative
approach that ensured that moving a thread to the cache would never be
wasted.

However, this caused two problems:

1. The cache could not be refilled until another thread context-switched
in, since there was no thread in the cache to compare priorities
against.

2. Interrupt exit code would always have to call into C to find what
thread to run when the current thread was not coop and did not have the
scheduler locked. Furthermore, it was possible for this code path to
encounter a cold cache and then it had to find out what thread to run
the long way.

To fix this, filling the cache is now more aggressive, i.e. the next
thread to put in the cache is found even in the case the current cached
thread is context-switched out. This ensures the interrupt exit code is
much faster on the slow path. In addition, since finding the next thread
to run is now always "get it from the cache", which is a simple fetch
from memory (_kernel.ready_q.cache), there is no need to call the more
complex C code.

On the ARM FRDM K64F board, this improvement is seen:

Before:

1- Measure time to switch from ISR back to interrupted task

   switching time is 215 tcs = 1791 nsec

2- Measure time from ISR to executing a different task (rescheduled)

   switch time is 315 tcs = 2625 nsec

After:

1- Measure time to switch from ISR back to interrupted task

   switching time is 130 tcs = 1083 nsec

2- Measure time from ISR to executing a different task (rescheduled)

   switch time is 225 tcs = 1875 nsec

These are the most dramatic improvements, but most of the numbers
generated by the latency_measure test are improved.

Fixes ZEP-1401.

Change-Id: I2eaac147048b1ec71a93bd0a285e743a39533973
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 15:50:02 -05:00
Anas Nashif
418058a123 kernel: remove NANOKERNEL and MICROKERNEL configs
Those are legacy and not needed anymore.

Change-Id: I8113114fd60880b3f538612db7702f6129af0a06
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-14 13:45:52 +00:00
Anas Nashif
0859df1eca kernel: disable MDEF by default
Disable MDEF option and set it only in legacy projects.

Change-Id: I2e1f011eb1f876af929140e36f71f0efb5e955c1
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-12 20:25:07 +00:00
Flavio Santes
b80db0a41b kernel: Add ARG_UNUSED macro to avoid compiler warnings
The ARG_UNUSED macro is added to avoid compiler warnings.

Change-Id: Ie9b72c94191318c1d667d7929eb029098c62e993
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
2016-12-12 20:02:31 +00:00
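
Typical use, sketched here with a timer expiry callback whose signature
is fixed by the kernel but which does not need its argument:

    #include <kernel.h>   /* pulls in the toolchain helper defining ARG_UNUSED() */

    static void expiry_fn(struct k_timer *timer)
    {
        ARG_UNUSED(timer);  /* expands to a (void) cast, silencing the warning */

        /* ... work that does not need the timer pointer ... */
    }

    K_TIMER_DEFINE(heartbeat, expiry_fn, NULL);
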
Flavio Santes
5349af8702 kernel/mem_slab: Use the right data-type
Use uint32_t for counters instead of int to avoid compiler warnings.

Change-Id: Ie96dfaca650b5f91562c0740c18610fc40968be6
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
2016-12-12 20:01:31 +00:00
Flavio Santes
380ee05a58 kernel/mem_pool: Use the right data-type
mem_pool structures use uint32_t for counters and size_t
to specify sizes; however, some routines in mem_pool.c
make use of int for similar purposes. This commit fixes
that situation by updating some variables to match
mem_pool data types.

Change-Id: I0aa01c27e512d06d40432e8091ed8fd9d959970c
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
2016-12-12 20:01:31 +00:00
Johan Hedberg
f99ad3f0e2 kernel: Refactor remaining time evaluation for timeouts
Factor out the code for evaluating the remaining time for _timeout
structs so that it can also be used for other objects besides k_timer
structs (like k_delayed_work, coming in a subsequent patch).

Change-Id: I243a7b29fb2831f06e95086a31f0d3a6c37dad67
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2016-12-12 18:55:40 +00:00
Marcus Shawcroft
a715194d43 random: Rewrite sys_rand32_init() with SYS_INIT()
Use the SYS_INIT() mechanism to invoke the sys_rand32_init() function
in random drivers that require an initializer.  Remove all empty
sys_rand32_init() instances.

The existing explicit sys_rand32_init() function runs immediately after
PRE_KERNEL_2, before stack canaries are initialized.  In order to get
equivalent behaviour with sys_rand32_init(), we set SYS_INIT() to
initialize the random drivers at the lowest priority of PRE_KERNEL_2.

Change-Id: I4521e44daac806bc4eef01ce7fdf2ba5367e0587
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
2016-12-11 11:18:18 +00:00
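
Roughly, a driver-side registration under this scheme could look like
the sketch below; the priority value 99 is an assumption for "lowest
priority of PRE_KERNEL_2", and the init function body is a placeholder:

    #include <kernel.h>
    #include <init.h>
    #include <device.h>

    static int sys_rand32_init(struct device *dev)
    {
        ARG_UNUSED(dev);

        /* ... seed the driver's entropy source here ... */

        return 0;
    }

    /* run at the tail end of PRE_KERNEL_2, roughly where the old explicit
     * call used to happen */
    SYS_INIT(sys_rand32_init, PRE_KERNEL_2, 99);
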
Carles Cufi
9849df8c80 kernel: Disable interrupts after tick calculation in k_sleep()
To guarantee that the compiler does not reorder the execution of
irq_lock() with preceding operations, a volatile qualifier is
placed before the declaration of the ticks variable, which then
ensures that irq_lock() is executed after the tick calculation but
before accessing the ready and timeout queues.
Without the volatile keyword, interrupts will be disabled during the
calculation of the ticks, which increases interrupt latency
significantly.

Change-Id: I2da82a1282e344f3b8d69e9457b36a4cb1d9ec18
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2016-12-08 16:37:51 +00:00
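
A sketch of the pattern described above (not the actual k_sleep() body);
ms_to_ticks() stands in for the kernel's conversion helper:

    #include <kernel.h>

    extern int32_t ms_to_ticks(int32_t ms);   /* placeholder conversion helper */

    void sleep_sketch(int32_t duration_ms)
    {
        /* volatile keeps the compiler from moving this (possibly slow)
         * computation past the irq_lock() call, so it runs with
         * interrupts enabled */
        volatile int32_t ticks = ms_to_ticks(duration_ms);
        unsigned int key = irq_lock();

        /* ... add the current thread to the timeout queue using 'ticks',
         * then swap out ... */

        irq_unlock(key);
    }
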
Mahavir Jain
acea24138a kernel: replace .BSS and .DATA setup with standard library calls
Use standard library calls like memset/memcpy for setting up the BSS
and DATA sections during system initialization; this helps to take
advantage of architecture-specific optimizations in the standard
library.

Change-Id: Ia72b42aa65b44d1df7c22dd1fbc39a44fa001be9
Signed-off-by: Mahavir Jain <mjain@marvell.com>
2016-12-02 17:44:06 +00:00
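
The pattern amounts to something like the sketch below; the
linker-provided symbol names vary per architecture and linker script and
are shown here only for illustration:

    #include <string.h>

    extern char __bss_start[], __bss_end[];
    extern char __data_ram_start[], __data_ram_end[], __data_rom_start[];

    static void bss_zero(void)
    {
        /* zero-fill .bss with the (possibly arch-optimized) libc memset */
        memset(__bss_start, 0, __bss_end - __bss_start);
    }

    static void data_copy(void)
    {
        /* copy .data initial values from their load address in ROM to RAM */
        memcpy(__data_ram_start, __data_rom_start,
               __data_ram_end - __data_ram_start);
    }
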
Mahavir Jain
a636604cd5 kernel: include kernel version in boot banner
Make the boot banner more informative by adding the kernel version
string.

Change-Id: I21865ea3a001fba2c30fe58e6e052aae59fef3e2
Signed-off-by: Mahavir Jain <mjain@marvell.com>
2016-12-02 17:44:05 +00:00
Mahavir Jain
45f2ef653d work_q: delayed work cancel returns incorrect status
If delayed work has already been submitted or has completed, then a
subsequent cancel should return -EINVAL as its status.

Fixes ZEP-1373.

Change-Id: I16bbacca7e31a5a5d8e5a89e729d70302ada6223
Signed-off-by: Mahavir Jain <mjain@marvell.com>
2016-12-02 12:50:51 +00:00
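
A sketch of checking the corrected status, assuming the k_delayed_work
API of this era (delay given in milliseconds); the handler and the
100 ms delay are placeholders:

    #include <kernel.h>
    #include <errno.h>

    static struct k_delayed_work my_work;

    static void work_handler(struct k_work *work)
    {
        ARG_UNUSED(work);
        /* ... deferred processing ... */
    }

    void submit_then_cancel(void)
    {
        k_delayed_work_init(&my_work, work_handler);
        k_delayed_work_submit(&my_work, 100);   /* run in ~100 ms */

        /* A cancel that comes too late (the item was already handed to the
         * work queue or has completed) now reports -EINVAL instead of 0. */
        if (k_delayed_work_cancel(&my_work) == -EINVAL) {
            /* nothing left to cancel */
        }
    }
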
Benjamin Walsh
f421ec23ad kernel: fix race condition when spawning a thread with a delay
Interrupts must be locked before inserting a timeout into the timeout
queue.

Change-Id: Iab0bf01f393e66a6403d2f85e899dbf737da4afc
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-30 23:48:06 +00:00
Benjamin Walsh
a36e0cf651 kernel: remove K_TIMING thread flag
The fact that a thread is timing out was tracked via two flags: the
K_TIMING thread flag bit, and the thread's timeout's
delta_ticks_from_prev being -1 or not. This duplication could
potentially cause discrepancies if the two flags got out of sync, and
there was no benefit to having both.

Since timeouts that are not part of a thread rely on the value of
delta_ticks_from_prev, standardize on it.

Since the K_TIMING bit is removed from the thread's flags, K_READY would
no longer reflect reality. It is removed and replaced by
_is_thread_prevented_from_running(), which looks at the state flags that
are relevant. A thread that is ready now is not prevented from running
and does not have an active timeout.

Change-Id: I902ef9fb7801b00626df491f5108971817750daa
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-26 14:04:18 +00:00
Benjamin Walsh
b2974a666d kernel/arch: move common thread.flags definitions to common file
Also remove NO_METRIC, which is not referenced anywhere anymore.

Change-Id: Ieaedf075af070a13aa3d975fee9b6b332203bfec
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-26 14:04:18 +00:00
Benjamin Walsh
516e79c8da kernel: disable INIT_STACKS by default
Now that we're out of the unified kernel development phase, turn off
that debugging option.

Change-Id: I89decbdf445b1ba111a829edf2c8a36846419586
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-24 16:37:01 +00:00
Benjamin Walsh
04c542d9d0 kernel/mbox: add missing dummy thread timeout init
It was possible for a dummy thread to not be timing out while its
timeout.delta_ticks_from_prev was not -1 at the same time, which is a
big no-no.

Use _init_thread_base() to do a full initialization of the dummy thread.

Fixes ZEP-1312.

Change-Id: I16a2373be3329c142cf26f5dca6bfdbe6014ac5e
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-23 00:27:43 +00:00
Benjamin Walsh
069fd3624e kernel: streamline initialization of _thread_base and timeouts
Move _thread_base initialization to _init_thread_base(), remove the
mention of "nano" in the timeout init and move the timeout init to
_init_thread_base(). Initialize all base fields via _init_thread_base()
in the semaphore groups code.

Change-Id: I05b70b06261f4776bda6d67f358190428d4a954a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-23 00:27:42 +00:00
Vinayak Chettimada
09ba96d856 kernel: declare main and idle stack as globals
Renamed main_stack and idle_stack to _main_stack and
_idle_stack, respectively, and made them globals. This does
not affect performance. They are still kept as kernel-private
symbols and are not part of the kernel API.

This will allow these symbols to be referenced in calls to the
stack_analyze misc functions to profile stack usage in
applications.

Change-id: Id6b746c5cfda617c26901c6e62c3e17114471f57
Signed-off-by: Vinayak Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
2016-11-23 00:24:00 +00:00
Benjamin Walsh
296a234ddb kernel: add support for switching to main thread without _Swap()
It's possible that an architecture needs a custom way of switching to
the main() task, rather than using _Swap() with a dummy thread.

Change-Id: I14e9bc67be35174ff16209bcea27b18a069ff754
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-23 00:23:58 +00:00
Benjamin Walsh
8fcc7f69da kernel/arch: remove unused uk_task_ptr parameter from _new_thread()
Artifact from microkernel, for handling multiple pending tasks on
nanokernel objects.

Change-Id: I3c2959ea2b87f568736384e6534ce8e275f1098f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-23 00:23:57 +00:00
Benjamin Walsh
358a53cb2f kernel: support for more than 32 total priorities
In addition to more priorities taking more memory to host them, finding
the next thread to run when it is not cached is slower since each extra
set of 32 priorities maps to a loop iteration. That loop is remove
entirely when the number of priorities is less than 32 (31 + the idle
thread).

Fixes ZEP-1303.

Change-Id: I3205df90d379a0f4456ff1d7f1aaa67ad2cddf15
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-18 23:45:34 +00:00