The init_stack_frame is the same as the ESF. No need to have two
separate structs. Consolidate everything into one single struct and make
register entries explicit.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Use the GEN_OFFSET_SYM macro to generate absolute symbols for the
_callee_saved struct and use these new symbols in the assembly code.
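A minimal sketch of the offsets.c pattern this enables is below; the member
names shown for the _callee_saved struct are illustrative, not necessarily
the exact in-tree fields:

    /* offsets.c sketch: GEN_OFFSET_SYM emits an absolute symbol, e.g.
     * ___callee_saved_t_x19_OFFSET, that assembly can reference by name
     * instead of a hard-coded numeric offset. */
    #include <gen_offset.h>
    #include <kernel.h>
    #include <kernel_arch_data.h>
    #include <kernel_offsets.h>

    GEN_OFFSET_SYM(_callee_saved_t, x19);
    GEN_OFFSET_SYM(_callee_saved_t, sp);

    GEN_ABS_SYM_END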
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Make the printing of errors a bit more descriptive and print the FAR_ELn
register only when strictly required.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
There are x86 platforms where the IRQ configuration register for PCIe
is not pre-populated and the OS needs to assign a number dynamically
by writing to the register.
In order to allocate interrupts we have to know which ones have been
hard-coded in device tree. We accomplish this by collecting these
values through the IRQ_CONNECT() macro and placing them in a dedicated
linker section (in ROM).
The full set of allocated interrupts is managed through a bitmap, and
the pre-allocated values (from the linker section) are inserted into
it upon initial runtime access.
This patch introduces a new pcie_alloc_irq() API that drivers can use
to allocate interrupt line numbers. The two in-tree drivers concerned
(I2C and UART) are converted to use the new API.
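A hedged sketch of how a driver can use the new call (the surrounding init
function and the error handling are illustrative, not taken from the
converted drivers):

    #include <errno.h>
    #include <drivers/pcie/pcie.h>

    static int my_pcie_dev_init(pcie_bdf_t bdf)
    {
        /* Pre-allocated IRQs from the linker section seed the bitmap on
         * first use; anything else is assigned dynamically. */
        unsigned int irq = pcie_alloc_irq(bdf);

        if (irq == PCIE_CONF_INTR_IRQ_NONE) {
            return -EINVAL; /* no interrupt line could be assigned */
        }

        pcie_irq_enable(bdf, irq);
        return 0;
    }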
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch. Register g7 is
used to point to the thread data. Thread data is accessed with negative
offsets from g7.
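A hedged sketch of the switch-in step (the thread struct member name is an
assumption here):

    #include <kernel.h>

    /* Publish the incoming thread's TLS area in %g7 so compiler-generated
     * code can reach thread data at negative offsets from it. */
    static inline void sparc_tls_ptr_set(struct k_thread *thread)
    {
        __asm__ volatile("mov %0, %%g7" : : "r"(thread->tls));
    }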
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
SPARC is an open and royalty free processor architecture.
This commit provides SPARC architecture support to Zephyr. It is
compatible with the SPARC V8 specification and the SPARC ABI and is
independent of processor implementation.
Functionality specific to SPARC processor implementations should
go in soc/sparc. One example is the LEON3 SOC which is part of this
patch set.
The architecture port is fully SPARC ABI compatible, including trap
handlers and interrupt context.
The number of implemented register windows can be configured.
Some SPARC V8 processors borrow the CASA (compare-and-swap) atomic
instructions from SPARC V9. An option has been defined in the
architecture port to forward the corresponding code-generation option
to the compiler.
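As an illustration of what the option buys (a sketch, not the in-tree
atomics implementation): with the flag forwarded, a plain compiler builtin
can lower to a single CASA instruction instead of a library call or an
interrupt-lock fallback.

    #include <stdbool.h>
    #include <stdint.h>

    static bool cas_u32(volatile uint32_t *target, uint32_t *expected,
                        uint32_t desired)
    {
        /* Compiles to a single "casa" on SPARC parts that implement it,
         * when the corresponding code-generation option is enabled. */
        return __atomic_compare_exchange_n(target, expected, desired, false,
                                           __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    }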
Stack size related config options have been defined in sparc/Kconfig
to match the SPARC ABI.
Co-authored-by: Nikolaus Huber <nikolaus.huber.melk@gmail.com>
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
The Inter-core Debug Unit provides additional debug assist features in
multi-core scenarios. This commit allows ARConnect to conditionally
halt cores during debugging.
Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
- Move kobject list after .bss
The previous ordering shifted the kernel object addresses defined in
the prebuild file (which are used as references at run time). So it was
impossible for Zephyr to check whether a kernel object address was
granted, because the addresses changed during the build.
- Add support for shared memory.
- Move the sdata2 section to ROM because it contains constants.
Signed-off-by: Nicolas Royer <nroyer@baylibre.com>
The IRQ handler has had major changes to manage syscalls, rescheduling
and interrupts from user threads, as well as the stack guard.
Add userspace support:
- Use a global variable to know whether the current execution mode is
user or machine. The location of this variable is read-only for all
user threads and read/write for kernel threads.
- Shared memory is supported.
- Use dynamic allocation to optimize PMP slot usage. If the area size
is a power of 2, only one PMP slot is used, otherwise 2 are used (see
the sketch below).
Add stack guard support:
- Use the MPRV bit to force PMP rules onto machine-mode execution.
- The IRQ stack has a locked stack guard to avoid rewriting the PMP
configuration registers on every interrupt and thus save some cycles.
- The IRQ stack is used as a "temporary" stack at the beginning of the
IRQ handler to save the current ESF. This avoids triggering a write
fault on the thread stack while storing the ESF, which would re-enter
the IRQ handler endlessly.
- A stack guard is also set up for the privileged stack of a user
thread.
Thread:
- The PMP setup is specific to each thread and is saved in each thread
structure to improve reschedule performance.
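A hedged sketch of the power-of-two PMP optimization mentioned above
(helper names and the exact pmpaddr arithmetic are illustrative, not the
in-tree code):

    #include <stdbool.h>
    #include <stdint.h>

    static bool is_pow2(uintptr_t size)
    {
        return size != 0 && (size & (size - 1)) == 0;
    }

    /* NAPOT encoding: address >> 2 with trailing ones describing the size;
     * valid for naturally aligned power-of-two regions of at least 8 bytes. */
    static uintptr_t napot_pmpaddr(uintptr_t base, uintptr_t size)
    {
        return (base >> 2) | ((size >> 3) - 1);
    }

    static int pmp_slots_needed(uintptr_t base, uintptr_t size)
    {
        if (is_pow2(size) && (base % size) == 0) {
            return 1;   /* one NAPOT entry covers the whole area */
        }
        return 2;       /* otherwise a TOR pair: region start + region end */
    }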
Signed-off-by: Alexandre Mergnat <amergnat@baylibre.com>
Reviewed-by: Nicolas Royer <nroyer@baylibre.com>
- Provide helper functions to write/clear/print the PMP config registers.
- Add support for different PMP slot sizes depending on the core/board.
Signed-off-by: Alexandre Mergnat <amergnat@baylibre.com>
We provide an option for low-memory systems to use a single set
of page tables for all threads. This is only supported if
KPTI and SMP are disabled. This configuration saves a considerable
amount of RAM, especially if multiple memory domains are used,
at a cost of context switching overhead.
Some caching techniques are used to reduce the amount of context
switch updates; the page tables aren't updated if switching to
a supervisor thread, and the page table configuration of the last
user thread switched in is cached.
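A hedged sketch of the switch-time check being described (function and
variable names are assumptions, not the actual implementation):

    #include <kernel.h>

    /* hypothetical helpers for this sketch */
    extern void set_cr3(void *ptables);
    extern void *thread_ptables_get(struct k_thread *thread);

    static struct k_thread *last_user_thread;

    static void update_page_tables(struct k_thread *incoming)
    {
        if ((incoming->base.user_options & K_USER) == 0U) {
            return; /* supervisor thread: keep the current page tables */
        }
        if (incoming == last_user_thread) {
            return; /* same user configuration already loaded */
        }
        set_cr3(thread_ptables_get(incoming));
        last_user_thread = incoming;
    }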
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We don't need this for stacks any more and only use this
for pre-calculating the boot page tables size. Move it to C code;
this doesn't need to be in headers anywhere.
Names adjusted for conciseness.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
- z_x86_userspace_enter() for both 32-bit and 64-bit now calls into C
code to clear the stack buffer and set the US bits in the page tables
for the memory range.
- Page tables are now associated with memory domains,
instead of having separate page tables per thread.
A spinlock protects write access to these page tables,
and read/write access to the list of active page
tables.
- arch_mem_domain_init() implemented, allocating and
copying page tables from the boot page tables.
- struct arch_mem_domain defined for x86. It has
a page table link and also a list node for iterating
over them.
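A hedged sketch of the structure described in the last bullet (the actual
in-tree definition and types may differ):

    #include <stdint.h>
    #include <sys/slist.h>

    /* assumed stand-in for the x86 paging structure entry type */
    typedef uint64_t pentry_t;

    struct arch_mem_domain {
        pentry_t *ptables;  /* top-level page tables for this domain */
        sys_snode_t node;   /* links into the list of active page tables */
    };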
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The current MMU code assumes that the kernel and threads are both
running in EL1, not supporting EL0. Extend the support to EL0 by adding
the missing attribute to mirror the access / execute permissions to EL0.
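A hedged sketch of the kind of descriptor attributes involved; the bit
positions follow the ARMv8-A stage-1 translation table format, but the
macro names here are assumptions:

    /* Block/page descriptor attributes mirrored down to EL0 */
    #define DESC_AP_USER  (1ULL << 6)   /* AP[1]=1: EL0 access permitted */
    #define DESC_PXN      (1ULL << 53)  /* Privileged eXecute Never */
    #define DESC_UXN      (1ULL << 54)  /* Unprivileged eXecute Never */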
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Implement the functionality for configuring the
architecture core registers to their warm reset
values upon system initialization. We enable support for this feature
in the Cortex-M architecture.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This is redundant and not coherent with the rest of the file. Thus
remove the _BIT suffix from the bit field names.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The errno_var copy in Xtensa's struct is not being used at all
for errno (as there is already one in struct k_thread).
So remove it.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.
Note that since Cortex-M does not have the thread ID or
process ID register needed to store TLS pointer at runtime
for toolchain to access thread data, a global variable is
used instead.
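A hedged sketch of that arrangement (the variable name is an assumption):
the context switch code stores the incoming thread's TLS base in a global,
and the EABI helper called by compiler-generated TLS accesses reads it back.

    #include <stdint.h>

    /* Updated on every context switch with the incoming thread's TLS base */
    uintptr_t z_arm_tls_ptr;

    /* ARM EABI helper invoked by compiler-generated TLS accesses */
    void *__aeabi_read_tp(void)
    {
        return (void *)z_arm_tls_ptr;
    }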
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Implement the kernel "coherence" API on top of the linker
cached/uncached mapping work.
Add Xtensa handling for the stack coherence API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Instead of having some special stack frame when first scheduling new
thread and a new thread entry wrapper to pull out the needed data, we
can reuse the context restore code by adapting the initial stack frame.
This reduces the lines of code and simplifies the logic, at the expense
of a slightly bigger initial stack frame.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The MWDT toolchain has the Stackcheck_alloca option enabled by default,
so it adds stack checking on top of Zephyr's own stack checking.
As it is completely redundant, let's drop it.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Previously, MPU register macros were only defined within their own
header files and could not be used by other parts of the program. This
commit unifies them.
Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
Both operands of an operator in the arithmetic conversions performed
shall have the same essential type category. The changes convert
integer constants to unsigned integer constants.
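A hedged before/after illustration of this class of change (the values are
made up, not taken from the patch):

    #include <stdint.h>

    static uint32_t low_nibble(uint32_t reg)
    {
        /* before: "reg & 0x0F" mixed an essentially unsigned operand with
         * a signed integer constant; the U suffix keeps both operands in
         * the same essential type category. */
        return reg & 0x0FU;
    }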
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
No need to mix very short names with other structures that use full
names. Let's follow more relevant naming where each and every attribute
name is self-documenting (such as s/id/apic_id, etc.).
Also make CONFIG_ACPI usable through IS_ENABLED by enclosing exposed
functions with ifdef CONFIG_ACPI.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
- x0/x1 register printing is reversed
- The error stack frame struct (z_arch_esf_t) had the SPSR and ELR in
the wrong position, inconsistent with the order these regs are pushed
to the stack in z_arm64_svc. This caused all register printing to be
skewed by two.
- Verified by writing known values (abcd0000 -> abcd000f) to x0 - x15
and then forcing a data abort.
Signed-off-by: Luke Starrett <luke.starrett@gmail.com>
We need the same logic for each SOC; instead of copy-pasting
things, just put this in a common file. This approach still
leaves the door open for custom memory layouts if desired.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
As discussed in #22668, there is additional risk associated with
splitting linker files, as one may update one script and not be
aware of the other. In particular, updating the GNU ld script but
not the MWDT one could break code for MWDT unnoticed, as MWDT is
not part of CI.
Let's create a single entry point for linker template.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
The non-secure callable functions' section is only applicable
to Cortex-M with TrustZone-M extension. Remove it from AARCH64
linker script. (CONFIG_ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCTIONS
is only enabled for Cortex-M so this is a no-op, but still, it
is a useful cleanup.)
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This code had one purpose only: feeding timing information into a
test. It was not used by anything else. The custom trace points were
unfortunately not accurate, and this test was delivering information
that conflicted with other tests we have, due to the placement of such
trace points in the architecture and kernel code.
For such measurements we are planning to use the tracing functionality
in a special mode that would be used for metrics without polluting the
architecture and kernel code with additional tracing and timing code.
Furthermore, much of the assembly code used had issues.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
According to Intel 64 and IA-32 Architectures Software
Developer’s Manual, volume 3, chapter 8.2.5, LFENCE provides
a more efficient method of controlling memory ordering than
the CPUID instruction. So use LFENCE here, as all 64-bit
CPUs have LFENCE.
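One common use of the pattern, sketched under the assumption that the
serialization guards a timestamp read (the actual call site may differ):

    #include <stdint.h>

    /* LFENCE waits for prior instructions to complete locally, giving the
     * ordering needed before RDTSC without CPUID's much higher overhead. */
    static inline uint64_t rdtsc_serialized(void)
    {
        __asm__ volatile("lfence" ::: "memory");
        return __builtin_ia32_rdtsc();
    }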
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Do not force linker to place text sections after each other
to have more freedom to optimize.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Add a linker script template for the MWDT toolchain (linker-mwdt.ld).
Move linker.ld to linker-gnu.ld (without changes).
"linker.ld" is now a wrapper.
Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Make the assembly code compatible with both the GNU
and Metaware toolchains.
* replace ".balign" with ".align"
The ".align" assembler directive is supported by all
ARC toolchains and is implemented in the same way
across them.
* replace "mov_s __certain_reg" with "mov __certain_reg"
Even though GCC encodes those mnemonics and even real
HW executes them, according to the PRM these registers
are restricted for mov_s, and CCAC rightfully refuses
to accept such mnemonics. So for the sake of
compatibility and clarity we switch to the 32-bit mov
instruction, which allows the use of all those
registers.
* Add a "%%" prefix when accessing registers from
inline ASM, as required by MWDT (see the sketch after
this list).
* Drop the "@" prefix when accessing symbols (defined
in C code) from ASM code, as required by MWDT.
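A hedged illustration of the "%%" register-prefix item (the instruction and
register are chosen arbitrarily):

    /* Reads a core register from inline asm; with operands present, "%%"
     * is the register prefix spelling accepted by both assemblers. */
    static inline unsigned int arc_read_r25(void)
    {
        unsigned int v;

        __asm__ volatile("mov %0, %%r25" : "=r"(v));
        return v;
    }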
Signed-off-by: Wayne Ren <wei.ren@synopsys.com>