On x86_64, the arch_timing_* variables are not set, which
results in incorrect values being used in the timing_info
benchmarks. So instrument the code to set those values.
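A minimal sketch of the kind of instrumentation intended, reading the
TSC into one of these variables (the variable name and hook point are
illustrative, not necessarily the exact ones this patch touches):

    #include <stdint.h>

    extern uint64_t arch_timing_irq_start;   /* illustrative variable */

    static inline uint64_t tsc_read(void)
    {
        uint32_t lo, hi;

        __asm__ volatile("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* then, e.g. at interrupt entry: arch_timing_irq_start = tsc_read(); */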
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
If IO APIC is in logical destination mode, local APICs compare their
logical APIC ID defined in LDR (Logical Destination Register) with
the destination code sent with the interrupt to determine whether or not
to accept the incoming interrupt.
This patch programs LDR in xAPIC mode to support IO APIC logical mode.
The local APIC ID from the local APIC ID register can't be used as the
'logical APIC ID' because LAPIC IDs may not be consecutive numbers,
which makes it impossible for LDR to encode 8 IDs within 8 bits.
This patch chooses 0 for the BSP and, for the APs, cpu_number, which is
the index into x86_cpuboot[] and is ultimately assigned in z_smp_init().
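Roughly, the idea looks like this in C (0x0d0 is the standard xAPIC LDR
offset; the x86_write_loapic() declaration and the exact bit chosen per
CPU are illustrative):

    #define LOAPIC_LDR 0x0d0   /* logical APIC ID lives in bits 31:24 */

    extern void x86_write_loapic(unsigned int reg, unsigned int val);

    void loapic_ldr_program(unsigned char cpu_number)
    {
        /* one destination bit per CPU: the BSP gets bit 24 (logical
         * ID 0), the APs bit 24 + cpu_number
         */
        x86_write_loapic(LOAPIC_LDR, 1U << (cpu_number + 24));
    }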
Signed-off-by: Zide Chen <zide.chen@intel.com>
x86_64 supports 4 levels of interrupt nesting, with
the interrupt stack divided up into sub-stacks for
each nesting level.
Unfortunately, the initial interrupt stack pointer
on the first CPU was not taking into account reserved
space for guard areas, causing a stack overflow exception
when attempting to use the last interrupt nesting level,
as that page had been set up as a stack guard.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The core kernel z_setup_new_thread() calls into arch_new_thread(),
which calls back into the core kernel via z_new_thread_init().
Move everything that doesn't have to be in z_new_thread_init() to
z_setup_new_thread() and convert to an inline function.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The set of interrupt stacks is now expressed as an array. We
also define the idle threads and their associated stacks this
way. This allows for iteration in cases where we have multiple
CPUs.
There is now a centralized declaration in kernel_internal.h.
On uniprocessor systems, z_interrupt_stacks has one element
and can be used in the same way as _interrupt_stack.
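For example, per-CPU setup code can now simply iterate (the helper
name here is hypothetical):

    for (int i = 0; i < CONFIG_MP_NUM_CPUS; i++) {
        setup_cpu_irq_stack(i, z_interrupt_stacks[i]);  /* hypothetical */
    }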
The IRQ stack for CPU 0 is now set in init.c instead of in
arch code.
The extern definition of the main thread stack is now removed;
this doesn't need to be in a header.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The callee-saved registers have been separated out and will not
be saved/restored if exception debugging is shut off.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The context switch implementation forgot to save the current flag
state of the old thread, so on resume the flags would be restored to
whatever value they had at the last interrupt preemption or thread
initialization. In practice this guaranteed that the interrupt enable
bit would always be wrong, because obviously new threads and preempted
ones have interrupts enabled, while arch_switch() is always called
with them masked. This opened up a race between exit from
arch_switch() and the final exit path in z_swap().
The other state bits weren't relevant -- the oddball ones aren't used
by Zephyr, and as arch_switch() on this architecture is a function
call the compiler would have spilled the (caller-save) comparison
result flags anyway.
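In C terms, the missing step amounts to something like the following
(the actual fix is in the locore assembly; the field name is
illustrative):

    static inline unsigned long read_rflags(void)
    {
        unsigned long rflags;

        __asm__ volatile("pushfq; popq %0" : "=r"(rflags));
        return rflags;
    }

    /* in the switch-out path, before the incoming thread's state is
     * loaded:  old_thread->arch.rflags = read_rflags();
     */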
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was a bug where double-dispatch of a single thread on multiple
SMP CPUs was possible. This can be mind-bending to diagnose, so when
CONFIG_ASSERT is enabled add an extra instruction to __resume (the
shared code path for both interrupt return and context switch) that
poisons the shared RIP of the now-running thread with a recognizable
invalid value.
Now attempts to run the thread again will crash instantly with a
discoverable cookie in their instruction pointer, and this will remain
true until it gets a new RIP at the next interrupt or switch.
This is under CONFIG_ASSERT because it meets the same design goals of
"a cheap test for impossible situations", not because it's part of the
assertion framework.
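In C terms the poisoning amounts to roughly this (the cookie value and
field name are illustrative; the real store is a single instruction in
__resume):

    #ifdef CONFIG_ASSERT
        /* a second dispatch of this thread now faults at a recognizable RIP */
        thread->callee_saved.rip = 0xB9B9B9B9B9B9B9B9UL;
    #endif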
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The original intent was that the output handle be written through the
pointer in the second argument, though not all architectures used that
scheme. As it turns out, that write is becoming a synchronization
signal, so it's no longer optional.
Clarify the documentation in arch_switch() about this requirement, and
add an instruction to the x86_64 context switch to implement it as
originally envisioned.
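For reference, the interface is roughly as follows (see
kernel_arch_interface.h for the authoritative version):

    /* Switch to the thread identified by switch_to.  The outgoing
     * thread's handle MUST be written through switched_from before the
     * switch completes, as that store is now used as a synchronization
     * signal.
     */
    static inline void arch_switch(void *switch_to, void **switched_from);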
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Implement a set of per-CPU trampoline stacks which all
interrupts and exceptions will initially land on, and which
also serve as an intermediate stack for privilege changes,
since we need some stack space to swap page tables.
Set up the special trampoline page which contains all the
trampoline stacks, TSS, and GDT. This page needs to be
present in the user page tables or interrupts don't work.
CPU exceptions, with KPTI turned on, are treated as interrupts
and not traps so that we have IRQs locked on exception entry.
Add some additional macros for defining IDT entries.
Add special handling of locore text/rodata sections when
creating user mode page tables on x86-64.
Restore qemu_x86_64 to use KPTI, and remove restrictions on
enabling user mode on x86-64.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
See CVE-2019-1125. We mitigate this by adding an 'lfence'
upon interrupt/exception entry after the decision has been
made whether it's necessary to invoke 'swapgs' or not.
This only applies to x86_64; 32-bit doesn't use swapgs.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
- In early boot, enable the syscall instruction and set up
necessary MSRs (sketched below)
- Add a hook to update page tables on context switch
- Properly initialize thread based on whether it will
start in user or supervisor mode
- Add landing function for system calls to execute the
desired handler
- Implement arch_user_string_nlen()
- Implement logic for dropping a thread down to user mode
- Reserve per-CPU storage space for user and privilege
elevation stack pointers, necessary for handling syscalls
when no free registers are available
- Proper handling of gs register considerations when
transitioning privilege levels
Kernel page table isolation (KPTI) is not yet implemented.
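A sketch of the MSR setup behind the first bullet (the entry symbol,
selector layout and flag mask are illustrative; the real values live in
the early-boot code):

    #include <stdint.h>

    #define IA32_EFER   0xC0000080u  /* bit 0 (SCE) enables syscall/sysret */
    #define IA32_STAR   0xC0000081u  /* CS/SS selector bases for syscall/sysret */
    #define IA32_LSTAR  0xC0000082u  /* RIP loaded by the syscall instruction */
    #define IA32_FMASK  0xC0000084u  /* RFLAGS bits cleared on syscall entry */

    static inline uint64_t rdmsr(uint32_t msr)
    {
        uint32_t lo, hi;

        __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return ((uint64_t)hi << 32) | lo;
    }

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        __asm__ volatile("wrmsr" :: "c"(msr), "a"((uint32_t)val),
                         "d"((uint32_t)(val >> 32)));
    }

    static void enable_syscall(uintptr_t syscall_entry)
    {
        wrmsr(IA32_EFER, rdmsr(IA32_EFER) | 1u);  /* EFER.SCE */
        wrmsr(IA32_LSTAR, syscall_entry);         /* landing function */
        wrmsr(IA32_FMASK, 0x200);                 /* clear IF on entry */
        /* IA32_STAR holds the kernel/user segment selector bases */
    }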
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We use a fixed value of 32, as the way interrupts/exceptions
are set up in x86_64's locore.S does not lend itself to
Kconfig configuration of the vector to use.
HW-based kernel oops is now permanently on; there's no reason
I can see to make it optional.
Default vectors for IPI and irq offload adjusted to not
collide.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We don't need to set up GDT data descriptors for setting
%gs. Instead, we use the x86 MSRs to set GS_BASE and
KERNEL_GS_BASE.
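With rdmsr()/wrmsr() helpers as in the syscall sketch earlier, that
amounts to the following (the per-CPU data symbol is illustrative):

    #define IA32_GS_BASE        0xC0000101u
    #define IA32_KERNEL_GS_BASE 0xC0000102u

    static void set_percpu_gs(void *percpu_area)
    {
        /* %gs-relative accesses now hit the per-CPU area; 'swapgs'
         * exchanges GS_BASE and KERNEL_GS_BASE if we ever let user
         * mode own %gs.
         */
        wrmsr(IA32_GS_BASE, (uint64_t)percpu_area);
        wrmsr(IA32_KERNEL_GS_BASE, (uint64_t)percpu_area);
    }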
We don't currently allow user mode to set %gs on its own,
but later on if we do, we have everything set up to issue
'swapgs' instructions on syscall or IRQ.
Unused entries in the GDT have been removed.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
These were previously assumed to always be fatal.
We can't have the faulting thread's XMM registers
clobbered, so put the SIMD/FPU state onto the stack
as well. This is fairly large (512 bytes) and the
exception stack is already uncomfortably small, so
increase to 2K.
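The 512 bytes correspond to the FXSAVE image; conceptually (in C
rather than the actual assembly):

    /* FXSAVE/FXRSTOR require a 16-byte-aligned 512-byte area */
    struct __attribute__((aligned(16))) fx_area { char data[512]; };

    void handle_exception_with_fpu(void)
    {
        struct fx_area save;   /* lives on the (now 2K) exception stack */

        __asm__ volatile("fxsave %0" : "=m"(save));
        /* ... dispatch the (no longer always-fatal) exception ... */
        __asm__ volatile("fxrstor %0" :: "m"(save));
    }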
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We now dump more information for less common cases,
and this is now centralized code for 32-bit/64-bit.
All of this code is now correctly wrapped around
CONFIG_EXCEPTION_DEBUG. Some cruft and unused defines
removed.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.
This allows our documentation generation to create
online documentation for this set of interfaces,
and this set of interfaces is worth treating in a
more formal way anyway.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Some code for unwinding stacks and z_x86_fatal_error() is
now in a common C file, suitable for both modes.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The page tables to use are now stored in the cpuboot struct.
For the first CPU, we set to the flat page tables, and then
update later in z_x86_prep_c() once the runtime tables have
been generated.
For other CPUs, by the time we get to z_arch_start_cpu()
the runtime tables are ready to go, and so we just install
them directly.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Program text, rodata, and data need different MMU
permissions. Split out rodata and data from the program
text, updating the linker script appropriately.
Region size symbols added to the linker script, so these
can later be used with MMU_BOOT_REGION().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Duplicate definitions elsewhere have been removed.
A couple functions which are defined by the arch interface
to be non-inline, but were implemented inline by native_posix
and intel64, have been moved to non-inline.
Some missing conditional compilation for z_arch_irq_offload()
has been fixed, as this is an optional feature.
Some massaging of native_posix headers to get everything
in the right scope.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The intel64 switch implementation doesn't actually use a switch handle
per se, just the raw thread struct pointers which get stored into the
handle field. This works fine for normally initialized threads, but
when switching out of a dummy thread at initialization, nothing has
initialized that field and the code was dumping registers into the
bottom of memory through the resulting NULL pointer.
Fix this by skipping the load of the field value and just using an
offset instead to get the struct address, which is actually slightly
faster anyway (a SUB immediate instruction vs. the load).
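In C terms (arch_switch() receives the address of the outgoing
thread's switch_handle field), the sketch is:

    #include <stddef.h>
    #include <kernel.h>

    static struct k_thread *thread_from_handle_addr(void **switched_from)
    {
        /* one SUB-immediate in the assembly: back up from the field's
         * address to the enclosing thread struct, never dereferencing
         * the (possibly uninitialized) handle value itself
         */
        return (struct k_thread *)((char *)switched_from -
                                   offsetof(struct k_thread, switch_handle));
    }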
Actually for extra credit we could even move the switch_handle field
to the top of the thread struct and eliminate the instruction
entirely, though if we did that it's probably worth adding some
conditional code to make the switch_handle field disappear entirely.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Line up everything nicely, add leading '0x' to hex
addresses, and remove redundant newlines. Add
whitespace between the register name and contents
so the contents can be easily selected from a terminal.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Set the NXE bit in the EFER MSR so that the NX bit can
be set in page tables. Otherwise, the NX bit is treated
as reserved and leads to a fault if set.
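With rdmsr()/wrmsr() helpers as sketched earlier, the operation is:

    #define IA32_EFER  0xC0000080u
    #define EFER_NXE   (1ull << 11)

    static void enable_nx(void)
    {
        /* done in early boot, before page tables using NX are loaded */
        wrmsr(IA32_EFER, rdmsr(IA32_EFER) | EFER_NXE);
    }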
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
It's possible to have multiple processors configured without using the
SMP scheduler, so don't make definitions dependent on CONFIG_SMP.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
In non-SMP MP situations, the interrupt stacks might not exist, so
do not assume they do. Instead, initialize the TSS IST1 from the
cpuboot[] vector (meaning, on APs, the stack from z_arch_start_cpu).
Eliminates redundancy at the same time.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
This is the Wrong Thing(tm) with SMP enabled. Previously this
worked because interrupts would be re-enabled in the interrupt
entry sequence, but this is no longer the case.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
The code was ignoring the rest of the expression, though the effect
was harmless (including unreachable code in some builds).
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Add duplicate per-CPU data structures (x86_cpuboot, tss, stacks, etc.)
for up to 4 total CPUs, add code in locore and z_arch_start_cpu().
The test board, qemu_x86_long, now defaults to 2 CPUs.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Take a dummy first argument, so that the BSP entry point (z_x86_prep_c)
has the same signature as the AP entry point (smp_init_top).
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
A new 'struct x86_cpuboot' is created as well as an instance called
'x86_cpuboot[]' which contains per-CPU boot data (initial stack,
entry function/arg, selectors, etc.). The locore now consults this
table to set up per-CPU registers, etc. during early boot.
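The record looks roughly like this (field names are illustrative, not
the exact layout):

    #include <stdint.h>

    struct x86_cpuboot {
        volatile int ready;    /* set once this CPU finishes early boot */
        uint16_t tr;           /* TSS selector to load */
        uint64_t sp;           /* initial stack pointer */
        void (*fn)(void *);    /* entry function */
        void *arg;             /* argument handed to fn */
    };

    extern struct x86_cpuboot x86_cpuboot[CONFIG_MP_NUM_CPUS];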
Also, rename tss.c to cpu.c as its scope is growing.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
There's no need to qualify the 64-bit CS/DS selectors, and the GS and
TR selectors are renamed CPU0_GS and CPU0_TR as they are CPU-specific.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
In some places the code was being overly pedantic; e.g., there is no
need to load our own 32-bit descriptors because the loader's are fine
for our purposes. We can defer loading our own segments until 64-bit.
The sequence is re-ordered to facilitate code sharing between the BSP
and APs when SMP is enabled (all BSP-specific operations occur before
the per-CPU initialization).
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
This is really just to facilitate sharing CPU bootstrap code between
the BSP and the APs, moving the clear operation out of the way.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
In the general case, the local APIC can't be treated as a normal device
with a single boot-time initialization - on SMP systems, each CPU must
initialize its own. Hence the initialization proper is separated from
the device-driver initialization, and said initialization is called
from the early startup-assembly code when appropriate.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>