zephyr/kernel
Andy Ross 7df0216d1e kernel/mutex: Spinlockify
Use a subsystem lock, not a per-object lock.  Really we want to lock
at mutex granularity where possible, but (1) that has non-trivial
memory overhead vs. e.g. directly spinning on the mutex state and (2)
the locking in a few places was originally designed to protect access
to the mutex *owner* priority, which is not 1:1 with a single mutex.

Basically the priority-inheriting mutex code will need some rework
before it works as a fine-grained locking abstraction in SMP.

Note that this fixes an invisible bug: with the older code,
k_mutex_unlock() would actually call irq_unlock() twice along the path
where there was a new owner, which is benign on existing architectures
(so long as the key argument is unchanged) but was never guaranteed to
work.  With a spinlock, unlocking an unlocked/unowned lock is a
detectable assertion condition.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
include kernel: Add _unlocked() variant to context switch primitives 2019-02-08 14:49:39 -05:00
atomic_c.c kernel/atomic_c: Spinlockify 2019-02-08 14:49:39 -05:00
CMakeLists.txt
compiler_stack_protect.c
device.c
errno.c
idle.c power: Eliminate SYS_PM_* power states. 2019-02-08 09:07:00 -05:00
init.c kernel: Add _unlocked() variant to context switch primitives 2019-02-08 14:49:39 -05:00
int_latency_bench.c
Kconfig
Kconfig.power_mgmt
mailbox.c kernel: Split reschedule & pend into irq/spin lock versions 2019-02-08 14:49:39 -05:00
mem_domain.c kernel/mem_domain: Spinlockify 2019-02-08 14:49:39 -05:00
mem_slab.c kernel/mem_slab: Spinlockify 2019-02-08 14:49:39 -05:00
mempool.c kernel/mempool: Spinlockify 2019-02-08 14:49:39 -05:00
msg_q.c kernel: Split reschedule & pend into irq/spin lock versions 2019-02-08 14:49:39 -05:00
mutex.c kernel/mutex: Spinlockify 2019-02-08 14:49:39 -05:00
pipes.c kernel: Split reschedule & pend into irq/spin lock versions 2019-02-08 14:49:39 -05:00
poll.c kernel/poll: Spinlockify 2019-02-08 14:49:39 -05:00
queue.c kernel/queue: Spinlockify 2019-02-08 14:49:39 -05:00
sched.c kernel: Add _unlocked() variant to context switch primitives 2019-02-08 14:49:39 -05:00
sem.c kernel/k_sem: Spinlockify 2019-02-08 14:49:39 -05:00
smp.c kernel: Add _unlocked() variant to context switch primitives 2019-02-08 14:49:39 -05:00
stack.c kernel: Split reschedule & pend into irq/spin lock versions 2019-02-08 14:49:39 -05:00
system_work_q.c
thread_abort.c kernel/thread_abort: Remove needless locking 2019-02-08 14:49:39 -05:00
thread.c kernel/thread: Spinlockify 2019-02-08 14:49:39 -05:00
timeout.c
timer.c kernel: Split reschedule & pend into irq/spin lock versions 2019-02-08 14:49:39 -05:00
userspace_handler.c
userspace.c
version.c
work_q.c kernel/work_q: Spinlockify 2019-02-08 14:49:39 -05:00