# Kernel configuration options
# Copyright (c) 2014-2015 Wind River Systems, Inc.
# SPDX-License-Identifier: Apache-2.0
menu "General Kernel Options"
module = KERNEL
module-str = kernel
source "subsys/logging/Kconfig.template.log_config"
config MULTITHREADING
bool "Multi-threading" if ARCH_HAS_SINGLE_THREAD_SUPPORT
default y
help
If disabled, only the main thread is available, so a main() function
must be provided. Interrupts are available. Kernel objects will most
probably not behave as expected, especially with regard to pending,
since the main thread cannot pend, being the only thread in the
system.
Many drivers and subsystems will not work with this option
set to 'n'; disable only when you REALLY know what you are
doing.
config NUM_COOP_PRIORITIES
int "Number of coop priorities" if MULTITHREADING
default 1 if !MULTITHREADING
default 16
range 0 128
help
Number of cooperative priorities configured in the system. Gives access
to priorities:
K_PRIO_COOP(0) to K_PRIO_COOP(CONFIG_NUM_COOP_PRIORITIES - 1)
or seen another way, priorities:
-CONFIG_NUM_COOP_PRIORITIES to -1
This can be set to zero to disable cooperative scheduling. Cooperative
threads always preempt preemptible threads.
The total number of priorities is
NUM_COOP_PRIORITIES + NUM_PREEMPT_PRIORITIES + 1
The extra one is for the idle thread, which must run at the lowest
priority, and be the only thread at that priority.
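For illustration only, a minimal sketch of a thread defined at a
cooperative priority (assumes at least three cooperative priorities;
the names and stack size are arbitrary):

#include <zephyr/kernel.h>

static void worker(void *a, void *b, void *c)
{
    /* Runs until it blocks or yields; never preempted by
     * preemptible threads.
     */
}

/* K_PRIO_COOP(2) maps to internal priority -CONFIG_NUM_COOP_PRIORITIES + 2 */
K_THREAD_DEFINE(worker_tid, 1024, worker, NULL, NULL, NULL,
                K_PRIO_COOP(2), 0, 0);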
config NUM_PREEMPT_PRIORITIES
int "Number of preemptible priorities" if MULTITHREADING
default 0 if !MULTITHREADING
default 15
range 0 128
help
Number of preemptible priorities available in the system. Gives access
to priorities 0 to CONFIG_NUM_PREEMPT_PRIORITIES - 1.
This can be set to 0 to disable preemptible scheduling.
The total number of priorities is
NUM_COOP_PRIORITIES + NUM_PREEMPT_PRIORITIES + 1
The extra one is for the idle thread, which must run at the lowest
priority, and be the only thread at that priority.
config MAIN_THREAD_PRIORITY
int "Priority of initialization/main thread"
default -2 if !PREEMPT_ENABLED
default 0
help
Priority at which the initialization thread runs, including the start
of the main() function. main() can then change its priority if desired.
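For example, a sketch of main() lowering its own priority once its
initialization-time work is done (the target priority is arbitrary):

#include <zephyr/kernel.h>

int main(void)
{
    /* ... setup running at MAIN_THREAD_PRIORITY ... */

    /* Continue at a lower, preemptible priority. */
    k_thread_priority_set(k_current_get(), K_PRIO_PREEMPT(5));
    return 0;
}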
config COOP_ENABLED
def_bool (NUM_COOP_PRIORITIES != 0)
config PREEMPT_ENABLED
def_bool (NUM_PREEMPT_PRIORITIES != 0)
config PRIORITY_CEILING
int "Priority inheritance ceiling"
default -127
help
This defines the minimum priority value (i.e. the logically
highest priority) that a thread will acquire as part of
k_mutex priority inheritance.
config NUM_METAIRQ_PRIORITIES
int "Number of very-high priority 'preemptor' threads"
default 0
help
This defines a set of priorities at the (numerically) lowest
end of the range which have "meta-irq" behavior. Runnable
threads at these priorities will always be scheduled before
threads at lower priorities, EVEN IF those threads are
otherwise cooperative and/or have taken a scheduler lock.
Making such a thread runnable in any way thus has the effect
of "interrupting" the current task and running the meta-irq
thread synchronously, like an exception or system call. The
intent is to use these priorities to implement "interrupt
bottom half" or "tasklet" behavior, allowing driver
subsystems to return from interrupt context but be guaranteed
that user code will not be executed (on the current CPU)
until the remaining work is finished. As this breaks the
"promise" of non-preemptibility granted by the current API
for cooperative threads, this tool probably shouldn't be used
from application code.
config SCHED_DEADLINE
bool "Earliest-deadline-first scheduling"
help
This enables a simple "earliest deadline first" scheduling
mode where threads can set "deadline" deltas measured in
k_cycle_get_32() units. Priority decisions within (!!) a
single priority will choose the next expiring deadline and
not simply the least recently added thread.
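A sketch of a thread announcing its deadline before starting a job
(the 10 ms delta is only an example):

#include <zephyr/kernel.h>

void start_job(void)
{
    /* The deadline is a delta from now, in k_cycle_get_32() units. */
    k_thread_deadline_set(k_current_get(), k_ms_to_cyc_ceil32(10));

    /* ... work that competes on deadline within its priority ... */
}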
config SCHED_CPU_MASK
bool "CPU mask affinity/pinning API"
depends on SCHED_DUMB
help
When true, the application will have access to the
k_thread_cpu_mask_*() APIs which control per-CPU affinity masks in
SMP mode, allowing applications to pin threads to specific CPUs or
disallow threads from running on given CPUs. Note that as currently
implemented, this involves an inherent O(N) scaling in the number of
idle-but-runnable threads, and thus works only with the DUMB
scheduler (as SCALABLE and MULTIQ would see no benefit).
Note that this setting does not technically depend on SMP and is
implemented without it for testing purposes, but for obvious reasons
makes sense as an application API only where there is more than one
CPU. With one CPU, it's just a higher overhead version of
k_thread_start/stop().
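A sketch of pinning a thread to CPU 0 with this API; 'tid' is assumed
to refer to a thread created with a K_FOREVER start delay, since the
mask may only be changed while the thread is not runnable:

#include <zephyr/kernel.h>

void pin_to_cpu0(k_tid_t tid)
{
    k_thread_cpu_mask_clear(tid);      /* disallow all CPUs */
    k_thread_cpu_mask_enable(tid, 0);  /* allow CPU 0 only */
    k_thread_start(tid);
}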
config SCHED_CPU_MASK_PIN_ONLY
bool "CPU mask variant with single-CPU pinning only"
depends on SMP && SCHED_CPU_MASK
help
When true, enables a variant of SCHED_CPU_MASK where only
one CPU may be specified for every thread. Effectively, all
threads have a single "assigned" CPU and they will never be
scheduled symmetrically. In general this is not helpful,
but some applications have a carefully designed threading
architecture and want to make their own decisions about how
to assign work to CPUs. In that circumstance, some moderate
optimizations can be made (e.g. having a separate run queue
per CPU, keeping the list length shorter). When selected,
the CPU mask becomes an immutable thread attribute. It can
only be modified before a thread is started. Most
applications don't want this.
config MAIN_STACK_SIZE
int "Size of stack for initialization and main thread"
default 2048 if COVERAGE_GCOV
default 512 if ZTEST && !(RISCV || X86 || ARM)
default 1024
help
When the initialization is complete, the thread executing it then
executes the main() routine, so as to reuse the stack used by the
initialization, which would otherwise be wasted RAM.
config IDLE_STACK_SIZE
int "Size of stack for idle thread"
default 2048 if COVERAGE_GCOV
default 1024 if XTENSA
default 512 if RISCV
default 384 if DYNAMIC_OBJECTS
default 320 if ARC || (ARM && CPU_HAS_FPU) || (X86 && MMU)
default 256
help
Depending on the work that the idle task must do, most likely due to
power management but possibly to other features like system event
logging (e.g. logging when the system goes to sleep), the idle thread
may need more stack space than the default value.
config ISR_STACK_SIZE
int "ISR and initialization stack size (in bytes)"
default 2048
help
This option specifies the size of the stack used by interrupt
service routines (ISRs), and during kernel initialization.
config THREAD_STACK_INFO
bool "Thread stack info"
help
This option allows each thread to store the thread stack info into
the k_thread data structure.
config THREAD_CUSTOM_DATA
bool "Thread custom data"
help
This option allows each thread to store 32 bits of custom data,
which can be accessed using the k_thread_custom_data_xxx() APIs.
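For example, a sketch of stashing and retrieving a per-thread pointer
(the context structure and helper names are illustrative):

#include <zephyr/kernel.h>

struct my_ctx { int request_id; };

void ctx_set(struct my_ctx *ctx)
{
    k_thread_custom_data_set(ctx);
}

struct my_ctx *ctx_get(void)
{
    return k_thread_custom_data_get();
}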
config THREAD_USERSPACE_LOCAL_DATA
bool
depends on USERSPACE
default y if ERRNO && !ERRNO_IN_TLS && !LIBC_ERRNO
config DYNAMIC_THREAD
bool "Support for dynamic threads [EXPERIMENTAL]"
select EXPERIMENTAL
depends on THREAD_STACK_INFO
select DYNAMIC_OBJECTS if USERSPACE
help
Enable support for dynamic threads and stacks.
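A rough sketch of creating a thread on a dynamically obtained stack
using the APIs this option enables (priority and helper names are
arbitrary):

#include <zephyr/kernel.h>

static struct k_thread worker;
static void entry(void *a, void *b, void *c) { /* ... */ }

void spawn_dynamic(void)
{
    size_t size = CONFIG_DYNAMIC_THREAD_STACK_SIZE;
    k_thread_stack_t *stack = k_thread_stack_alloc(size, 0);

    if (stack != NULL) {
        k_thread_create(&worker, stack, size, entry,
                        NULL, NULL, NULL, 5, 0, K_NO_WAIT);
    }
}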
if DYNAMIC_THREAD
config DYNAMIC_THREAD_STACK_SIZE
int "Size of each pre-allocated thread stack"
default 1024 if !64BIT
default 2048 if 64BIT
help
Default stack size (in bytes) for dynamic threads.
config DYNAMIC_THREAD_ALLOC
bool "Support heap-allocated thread objects and stacks"
help
Select this option to enable allocating thread objects and
thread stacks from the system heap.
Only use this type of allocation in situations
where malloc is permitted.
config DYNAMIC_THREAD_POOL_SIZE
int "Number of statically pre-allocated threads"
default 0
range 0 8192
help
Pre-allocate a fixed number of thread objects and
stacks at build time.
This type of "dynamic" stack is usually suitable in
situations where malloc is not permitted.
choice DYNAMIC_THREAD_PREFER
prompt "Preferred dynamic thread allocator"
default DYNAMIC_THREAD_PREFER_POOL
help
If both CONFIG_DYNAMIC_THREAD_ALLOC=y and
CONFIG_DYNAMIC_THREAD_POOL_SIZE > 0, then the user may
specify the order in which allocation is attempted.
config DYNAMIC_THREAD_PREFER_ALLOC
bool "Prefer heap-based allocation"
depends on DYNAMIC_THREAD_ALLOC
help
Select this option to attempt a heap-based allocation
prior to any pool-based allocation.
config DYNAMIC_THREAD_PREFER_POOL
bool "Prefer pool-based allocation"
help
Select this option to attempt a pool-based allocation
prior to any heap-based allocation.
endchoice # DYNAMIC_THREAD_PREFER
endif # DYNAMIC_THREAD
config LIBC_ERRNO
bool
help
Use external libc errno, not the internal one. This eliminates any
locally allocated errno storage and usage.
config ERRNO
bool "Errno support"
default y
help
Enable per-thread errno in the kernel. Application and library code must
include errno.h provided by the C library (libc) to use the errno
symbol. The C library must access the per-thread errno via the
z_errno() symbol.
config ERRNO_IN_TLS
bool "Store errno in thread local storage (TLS)"
depends on ERRNO && THREAD_LOCAL_STORAGE && !LIBC_ERRNO
default y
help
Use thread local storage to store errno instead of storing it in
the kernel thread struct. This avoids a syscall if userspace is enabled.
config CURRENT_THREAD_USE_NO_TLS
bool
help
Hidden symbol to not use thread local storage to store current
thread.
config CURRENT_THREAD_USE_TLS
bool "Store current thread in thread local storage (TLS)"
depends on THREAD_LOCAL_STORAGE && !CURRENT_THREAD_USE_NO_TLS
default y
help
Use thread local storage to store the current thread. This avoids a
syscall if userspace is enabled.
choice SCHED_ALGORITHM
prompt "Scheduler priority queue algorithm"
default SCHED_DUMB
help
The kernel can be built with several choices for the
ready queue implementation, offering different choices between
code size, constant factor runtime overhead and performance
scaling when many threads are added.
config SCHED_DUMB
bool "Simple linked-list ready queue"
help
When selected, the scheduler ready queue will be implemented
as a simple unordered list, with very fast constant time
performance for single threads and very low code size.
Choose this on systems with constrained code size that will
never see more than a small number (3, maybe) of runnable
threads in the queue at any given time. On most platforms
(that are not otherwise using the red/black tree) this
results in a savings of ~2k of code size.
config SCHED_SCALABLE
bool "Red/black tree ready queue"
help
When selected, the scheduler ready queue will be implemented
as a red/black tree. This has rather slower constant-time
insertion and removal overhead, and on most platforms (that
are not otherwise using the rbtree somewhere) requires an
extra ~2kb of code. But the resulting behavior will scale
cleanly and quickly into the many thousands of threads. Use
this on platforms where you may have many threads (very
roughly: more than 20 or so) marked as runnable at a given
time. Most applications don't want this.
config SCHED_MULTIQ
bool "Traditional multi-queue ready queue"
depends on !SCHED_DEADLINE
help
When selected, the scheduler ready queue will be implemented
as the classic/textbook array of lists, one per priority
(max 32 priorities). This corresponds to the scheduler
algorithm used in Zephyr versions prior to 1.12. It incurs
only a tiny code size overhead vs. the "dumb" scheduler and
runs in O(1) time in almost all circumstances with very low
constant factor. But it requires a fairly large RAM budget
to store those list heads, and the limited features make it
incompatible with features like deadline scheduling that
need to sort threads more finely, and SMP affinity which
need to traverse the list of threads. Typical applications
with small numbers of runnable threads probably want the
DUMB scheduler.
endchoice # SCHED_ALGORITHM
choice WAITQ_ALGORITHM
prompt "Wait queue priority algorithm"
default WAITQ_DUMB
help
The wait_q abstraction used in IPC primitives to pend
threads for later wakeup shares the same backend data
structure choices as the scheduler, and can use the same
options.
config WAITQ_SCALABLE
bool "Use scalable wait_q implementation"
help
When selected, the wait_q will be implemented with a
balanced tree. Choose this if you expect to have many
threads waiting on individual primitives. There is a ~2kb
code size increase over WAITQ_DUMB (which may be shared with
SCHED_SCALABLE) if the rbtree is not used elsewhere in the
application, and pend/unpend operations on "small" queues
will be somewhat slower (though this is not generally a
performance path).
config WAITQ_DUMB
bool "Simple linked-list wait_q"
help
When selected, the wait_q will be implemented with a
doubly-linked list. Choose this if you expect to have only
a few threads blocked on any single IPC primitive.
endchoice # WAITQ_ALGORITHM
menu "Kernel Debugging and Metrics"
config INIT_STACKS
bool "Initialize stack areas"
help
This option instructs the kernel to initialize stack areas with a
known value (0xaa) before they are first used, so that the high
water mark can be easily determined. This applies to the stack areas
for threads, as well as to the interrupt stack.
config SKIP_BSS_CLEAR
bool
help
This option disables software .bss section zeroing during Zephyr
initialization. Such boot-time optimization could be used for
platforms where .bss section is zeroed-out externally.
Note that when this option is enabled, responsibility for zeroing
.bss in all possible scenarios (e.g. after a SW reset) is delegated
to external SW or HW.
config BOOT_BANNER
bool "Boot banner"
default y
select PRINTK
select EARLY_CONSOLE
help
This option outputs a banner to the console device during boot up.
config BOOT_BANNER_STRING
string "Boot banner string"
depends on BOOT_BANNER
default "Booting Zephyr OS build"
help
Use this option to set the boot banner.
config BOOT_DELAY
int "Boot delay in milliseconds"
depends on MULTITHREADING
default 0
help
This option delays bootup for the specified amount of
milliseconds. This is used to allow serial ports to get ready
before starting to print information on them during boot, as
some systems might boot too fast for a receiving endpoint to
detect the new USB serial bus, enumerate it and get ready to
receive before it actually gets data. A similar effect can be
achieved by waiting for DCD on the serial port--however, not
all serial ports have DCD.
config THREAD_MONITOR
bool "Thread monitoring"
help
This option instructs the kernel to maintain a list of all threads
(excluding those that have not yet started or have already
terminated).
config THREAD_NAME
bool "Thread name"
help
This option allows a name to be set for a thread.
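For example (the name shown is arbitrary):

#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

void name_current_thread(void)
{
    k_thread_name_set(k_current_get(), "sensor_poll");
    printk("running in %s\n", k_thread_name_get(k_current_get()));
}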
config THREAD_MAX_NAME_LEN
int "Max length of a thread name"
default 32
default 64 if ZTEST
range 8 128
depends on THREAD_NAME
help
Thread names get stored in the k_thread struct. Indicate the max
name length, including the terminating NULL byte. Reduce this value
to conserve memory.
config INSTRUMENT_THREAD_SWITCHING
bool
menuconfig THREAD_RUNTIME_STATS
bool "Thread runtime statistics"
help
Gather thread runtime statistics.
For example:
- Thread total execution cycles
- System total execution cycles
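A sketch of querying the statistics for the current thread and for
the system as a whole:

#include <zephyr/kernel.h>

void snapshot_usage(void)
{
    k_thread_runtime_stats_t thread_stats, all_stats;

    k_thread_runtime_stats_get(k_current_get(), &thread_stats);
    k_thread_runtime_stats_all_get(&all_stats);

    /* execution_cycles holds the accumulated cycle counts. */
    uint64_t thread_cycles = thread_stats.execution_cycles;
    uint64_t total_cycles = all_stats.execution_cycles;
    (void)thread_cycles;
    (void)total_cycles;
}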
if THREAD_RUNTIME_STATS
config THREAD_RUNTIME_STATS_USE_TIMING_FUNCTIONS
bool "Use timing functions to gather statistics"
select TIMING_FUNCTIONS_NEED_AT_BOOT
help
Use timing functions to gather thread runtime statistics.
Note that timing functions may use a different timer than
the default timer for OS timekeeping.
config SCHED_THREAD_USAGE
bool "Collect thread runtime usage"
default y
select INSTRUMENT_THREAD_SWITCHING if !USE_SWITCH
help
Collect thread runtime info at context switch time.
config SCHED_THREAD_USAGE_ANALYSIS
bool "Analyze the collected thread runtime usage statistics"
default n
depends on SCHED_THREAD_USAGE
select INSTRUMENT_THREAD_SWITCHING if !USE_SWITCH
help
Collect additional timing information related to thread scheduling
for analysis purposes. This includes the total time that a thread
has been scheduled, the longest time for which it was scheduled and
others.
config SCHED_THREAD_USAGE_ALL
bool "Collect total system runtime usage"
default y if SCHED_THREAD_USAGE
depends on SCHED_THREAD_USAGE
help
Maintain a sum of all non-idle thread cycle usage.
config SCHED_THREAD_USAGE_AUTO_ENABLE
bool "Automatically enable runtime usage statistics"
default y
depends on SCHED_THREAD_USAGE
help
When set, this option automatically enables the gathering of both
the thread and CPU usage statistics.
endif # THREAD_RUNTIME_STATS
endmenu
menuconfig OBJ_CORE
bool "Object core framework"
default n
help
This option enables the object core framework. This will link
participating kernel objects and their respective types together
in a way that allows them to both have common information stored
together and for that information to be easily retrieved by
automated means.
if OBJ_CORE
config OBJ_CORE_CONDVAR
bool "Integrate condition variables into object core framework"
default y
help
When enabled, this option integrates condition variables into the
object core framework.
config OBJ_CORE_EVENT
bool "Integrate events into object core framework"
default y if EVENTS
help
When enabled, this option integrates kernel events into the object
core framework.
config OBJ_CORE_FIFO
bool "Integrate FIFOs into object core framework"
default y
help
When enabled, this option integrates FIFOs into the object core
framework.
config OBJ_CORE_LIFO
bool "Integrate LIFOs into object core framework"
default y
help
When enabled, this option integrates LIFOs into the object core
framework.
config OBJ_CORE_MAILBOX
bool "Integrate mailboxes into object core framework"
default y
help
When enabled, this option integrates mailboxes into the object core
framework.
config OBJ_CORE_MEM_SLAB
bool "Integrate memory slabs into object core framework"
default y
help
When enabled, this option integrates memory slabs into the object
core framework.
config OBJ_CORE_MUTEX
bool "Integrate mutexes into object core framework"
default y
help
When enabled, this option integrates mutexes into the object core
framework.
config OBJ_CORE_MSGQ
bool "Integrate message queues into object core framework"
default y
help
When enabled, this option integrates message queues into the object
core framework.
config OBJ_CORE_SEM
bool "Integrate semaphores into object core framework"
default y
help
When enabled, this option integrates semaphores into the object core
framework.
config OBJ_CORE_PIPE
bool "Integrate pipe into object core framework"
default y if PIPES
help
When enabled, this option integrates pipes into the object core
framework.
config OBJ_CORE_STACK
bool "Integrate stacks into object core framework"
default y
help
When enabled, this option integrates stacks into the object core
framework.
config OBJ_CORE_THREAD
bool "Integrate threads into object core framework"
default y
help
When enabled, this option integrates threads into the object core
framework.
config OBJ_CORE_TIMER
bool "Integrate timers into object core framework"
default y
help
When enabled, this option integrates timers into the object core
framework.
config OBJ_CORE_SYSTEM
bool
default y
help
When enabled, this option integrates the internal CPU and kernel
system objects into the object core framework. As these are internal
structures, this option is hidden by default and only available to
advanced users.
menuconfig OBJ_CORE_STATS
bool "Object core statistics"
default n
help
This option integrates statistics gathering into the object core
framework.
if OBJ_CORE_STATS
config OBJ_CORE_STATS_MEM_SLAB
bool "Object core statistics for memory slabs"
default y if OBJ_CORE_MEM_SLAB
help
When enabled, this allows memory slab statistics to be integrated
into kernel objects.
config OBJ_CORE_STATS_THREAD
bool "Object core statistics for threads"
default y if OBJ_CORE_THREAD
select THREAD_RUNTIME_STATS
help
When enabled, this integrates thread runtime statistics into the
object core statistics framework.
config OBJ_CORE_STATS_SYSTEM
bool "Object core statistics for system level objects"
default y if OBJ_CORE_SYSTEM
select SCHED_THREAD_USAGE_ALL
help
When enabled, this integrates thread runtime statistics at the
CPU and system level into the object core statistics framework.
endif # OBJ_CORE_STATS
endif # OBJ_CORE
menu "Work Queue Options"
config SYSTEM_WORKQUEUE_STACK_SIZE
int "System workqueue stack size"
default 4096 if COVERAGE_GCOV
default 2560 if WIFI_NM_WPA_SUPPLICANT
default 1024
config SYSTEM_WORKQUEUE_PRIORITY
int "System workqueue priority"
default -2 if COOP_ENABLED && !PREEMPT_ENABLED
default 0 if !COOP_ENABLED
default -1
help
By default, system work queue priority is the lowest cooperative
priority. This means that any work handler, once started, won't
be preempted by any other thread until finished.
config SYSTEM_WORKQUEUE_NO_YIELD
bool "Select whether system work queue yields"
help
By default, the system work queue yields between each work item, to
prevent other threads from being starved. Selecting this removes
this yield, which may be useful if the work queue thread is
cooperative and a sequence of work items is expected to complete
without yielding.
endmenu
menu "Barrier Operations"
config BARRIER_OPERATIONS_BUILTIN
bool
help
Use the compiler builtin functions for barrier operations. This is
the preferred method. However, support for all arches in GCC is
incomplete.
config BARRIER_OPERATIONS_ARCH
bool
help
Use when there isn't support for compiler built-ins, but you have
written optimized assembly code under arch/ which implements these.
endmenu
menu "Atomic Operations"
config ATOMIC_OPERATIONS_BUILTIN
bool
help
Use the compiler builtin functions for atomic operations. This is
the preferred method. However, support for all arches in GCC is
incomplete.
config ATOMIC_OPERATIONS_ARCH
bool
help
Use when there isn't support for compiler built-ins, but you have
written optimized assembly code under arch/ which implements these.
config ATOMIC_OPERATIONS_C
bool
help
Use atomic operations routines that are implemented entirely
in C by locking interrupts. Selected by architectures which either
do not have support for atomic operations in their instruction
set, or haven't implemented them yet during bring-up, and whose
compiler does not have support for the atomic __sync_* builtins.
endmenu
menu "Timer API Options"
config TIMESLICING
bool "Thread time slicing"
default y
depends on SYS_CLOCK_EXISTS && (NUM_PREEMPT_PRIORITIES != 0)
help
This option enables time slicing between preemptible threads of
equal priority.
config TIMESLICE_SIZE
int "Time slice size (in ms)"
default 0
range 0 2147483647
depends on TIMESLICING
help
This option specifies the maximum amount of time a thread can execute
before other threads of equal priority are given an opportunity to run.
A time slice size of zero means "no limit" (i.e. an infinitely large
time slice).
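The slice can also be adjusted at runtime; a sketch (the 20 ms slice
and the priority ceiling of 0 are arbitrary):

#include <zephyr/kernel.h>

void enable_round_robin(void)
{
    /* 20 ms slices for preemptible threads at priority 0 and all
     * lower (numerically higher) priorities.
     */
    k_sched_time_slice_set(20, 0);
}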
config TIMESLICE_PRIORITY
int "Time slicing thread priority ceiling"
default 0
range 0 NUM_PREEMPT_PRIORITIES
depends on TIMESLICING
help
This option specifies the thread priority level at which time slicing
takes effect; threads having a higher priority than this ceiling are
not subject to time slicing.
config TIMESLICE_PER_THREAD
bool "Support per-thread timeslice values"
depends on TIMESLICING
help
When set, this enables an API for setting timeslice values on
a per-thread basis, with an application callback invoked when
a thread reaches the end of its timeslice.
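A sketch, assuming this per-thread API, of giving one thread its own
slice and expiry callback (the 5 ms slice is arbitrary):

#include <zephyr/kernel.h>

static void slice_expired(struct k_thread *thread, void *data)
{
    /* Invoked when 'thread' uses up its per-thread timeslice. */
}

void set_thread_slice(struct k_thread *thread)
{
    k_thread_time_slice_set(thread, k_ms_to_ticks_ceil32(5),
                            slice_expired, NULL);
}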
config POLL
bool "Async I/O Framework"
help
Asynchronous notification framework. Enable the k_poll() and
k_poll_signal_raise() APIs. The former can wait on multiple events
concurrently, which can be either directly triggered or triggered by
the availability of some kernel objects (semaphores and FIFOs).
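A sketch of waiting on a semaphore through k_poll():

#include <zephyr/kernel.h>

K_SEM_DEFINE(my_sem, 0, 1);

void wait_for_signal(void)
{
    struct k_poll_event events[] = {
        K_POLL_EVENT_INITIALIZER(K_POLL_TYPE_SEM_AVAILABLE,
                                 K_POLL_MODE_NOTIFY_ONLY,
                                 &my_sem),
    };

    /* Block until the semaphore becomes available, then take it. */
    k_poll(events, ARRAY_SIZE(events), K_FOREVER);
    k_sem_take(&my_sem, K_NO_WAIT);
}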
endmenu
menu "Other Kernel Object Options"
config MEM_SLAB_TRACE_MAX_UTILIZATION
bool "Getting maximum slab utilization"
help
This adds a variable to the k_mem_slab structure to hold the
maximum utilization of the slab.
config NUM_MBOX_ASYNC_MSGS
int "Maximum number of in-flight asynchronous mailbox messages"
default 10
help
This option specifies the total number of asynchronous mailbox
messages that can exist simultaneously, across all mailboxes
in the system.
Setting this option to 0 disables support for asynchronous
mailbox messages.
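A sketch of an asynchronous send, which a non-zero value here makes
possible (mailbox name and message setup are illustrative):

#include <zephyr/kernel.h>

K_MBOX_DEFINE(my_mailbox);

void send_async(void *payload, size_t size)
{
    struct k_mbox_msg msg = {
        .size = size,
        .tx_data = payload,
        .tx_target_thread = K_ANY,
    };

    /* Returns immediately; the message stays in flight until received. */
    k_mbox_async_put(&my_mailbox, &msg, NULL);
}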
config EVENTS
bool "Event objects"
help
This option enables event objects. Threads may wait on event
objects for specific events, but both threads and ISRs may deliver
events to event objects.
Note that setting this option slightly increases the size of the
thread structure.
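A sketch of delivering and waiting for an event (the bit value is
arbitrary):

#include <zephyr/kernel.h>

K_EVENT_DEFINE(my_event);

void producer(void)
{
    /* May be called from a thread or an ISR. */
    k_event_post(&my_event, 0x01);
}

void consumer(void)
{
    /* Block until bit 0 is set; 'false' leaves previously posted
     * bits intact rather than resetting the event first.
     */
    k_event_wait(&my_event, 0x01, false, K_FOREVER);
}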
config PIPES
bool "Pipe objects"
help
This option enables kernel pipes. A pipe is a kernel object that
allows a thread to send a byte stream to another thread. Pipes can
be used to synchronously transfer chunks of data in whole or in part.
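A sketch using the pipe API as described above (buffer size, alignment
and timeouts are arbitrary):

#include <zephyr/kernel.h>

K_PIPE_DEFINE(my_pipe, 64, 4);

void producer(uint8_t *data, size_t len)
{
    size_t written;

    /* min_xfer = 0 accepts a partial write instead of blocking longer. */
    k_pipe_put(&my_pipe, data, len, &written, 0, K_MSEC(100));
}

void consumer(uint8_t *buf, size_t len)
{
    size_t count;

    k_pipe_get(&my_pipe, buf, len, &count, 1, K_FOREVER);
}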
config KERNEL_MEM_POOL
bool "Use Kernel Memory Pool"
default y
help
Enable the use of kernel memory pool.
Say y if unsure.
if KERNEL_MEM_POOL
config HEAP_MEM_POOL_SIZE
int "Heap memory pool size (in bytes)"
default 0 if !POSIX_MQUEUE
default 1024 if POSIX_MQUEUE
help
This option specifies the size of the heap memory pool used when
dynamically allocating memory using k_malloc(). The maximum size of
the memory pool is limited only by the available memory. A size of zero
means that no heap memory pool is defined.
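For example, with a non-zero pool size, k_malloc() and k_free() draw
from this pool:

#include <zephyr/kernel.h>

void use_heap(void)
{
    void *buf = k_malloc(128);

    if (buf != NULL) {
        /* ... use the 128-byte buffer ... */
        k_free(buf);
    }
}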
endif # KERNEL_MEM_POOL
endmenu
config ARCH_HAS_CUSTOM_SWAP_TO_MAIN
bool
help
It's possible that an architecture port cannot use _Swap() to swap to
the _main() thread, but instead must do something custom. It must
enable this option in that case.
config SWAP_NONATOMIC
bool
help
On some architectures, the _Swap() primitive cannot be made
atomic with respect to the irq_lock being released. That
is, interrupts may be received between the entry to _Swap
and the completion of the context switch. There are a
handful of workaround cases in the kernel that need to be
enabled when this is true. Currently, this only happens on
ARM when the PendSV exception priority sits below that of
Zephyr-handled interrupts.
config ARCH_HAS_CUSTOM_BUSY_WAIT
bool
help
It's possible that an architecture port cannot or does not want to use
the provided k_busy_wait(), but instead must do something custom. It must
enable this option in that case.
config SYS_CLOCK_TICKS_PER_SEC
int "System tick frequency (in ticks/second)"
default 100 if QEMU_TARGET || SOC_POSIX
default 10000 if TICKLESS_KERNEL
default 100
help
This option specifies the nominal frequency of the system clock in Hz.
For asynchronous timekeeping, the kernel defines a "ticks" concept. A
"tick" is the internal count in which the kernel does all its internal
uptime and timeout bookkeeping. Interrupts are expected to be delivered
on tick boundaries to the extent practical, and no fractional ticks
are tracked.
The tick rate is configurable by this option. The number of cycles
per tick should also be chosen so that 1 millisecond is exactly
represented by an integral number of ticks. Defaults on most hardware
platforms (ones that support setting arbitrary interrupt timeouts) are
expected to be in the range of 10 kHz, with software emulation
platforms and legacy drivers using a more traditional 100 Hz value.
Note that when available and enabled, in "tickless" mode
this config variable specifies the minimum available timing
granularity, not necessarily the number or frequency of
interrupts delivered to the kernel.
A value of 0 completely disables timer support in the kernel.
config SYS_CLOCK_HW_CYCLES_PER_SEC
int "System clock's h/w timer frequency"
help
This option specifies the frequency of the hardware timer used for the
system clock (in Hz). This option is set by the SOC's or board's Kconfig file
and the user should generally avoid modifying it via the menu configuration.
config SYS_CLOCK_EXISTS
bool "System clock exists and is enabled"
default y
help
This option specifies that the kernel has timer support.
Some device configurations can eliminate significant code if
this is disabled. Obviously timeout-related APIs will not
work when disabled.
config TIMEOUT_64BIT
bool "Store kernel timeouts in 64 bit precision"
default y
help
When this option is true, the k_ticks_t values passed to
kernel APIs will be a 64 bit quantity, allowing the use of
larger values (and higher precision tick rates) without fear
of overflowing the 32 bit word. This feature also gates the
availability of absolute timeout values (which require the
extra precision).
config SYS_CLOCK_MAX_TIMEOUT_DAYS
int "Max timeout (in days) used in conversions"
default 365
help
This value is used in the time conversion static inline functions to
determine at compile time which algorithm to use. One algorithm is
faster and takes less code, but may overflow if the multiplication of
the source value and the target frequency exceeds 64 bits. The second
algorithm prevents that. The faster algorithm is selected for conversion
if the maximum timeout, represented in the source frequency domain and
multiplied by the target frequency, fits in 64 bits.
config BUSYWAIT_CPU_LOOPS_PER_USEC
int "Number of CPU loops per microsecond for crude busy looping"
depends on !SYS_CLOCK_EXISTS && !ARCH_HAS_CUSTOM_BUSY_WAIT
default 500
help
Calibration for the crude CPU-based busy loop duration. The default
assumes a 1 GHz CPU and 2 cycles per loop. Reality is certainly
much worse but all we want here is a ball-park figure that ought
to be good enough for the purpose of being able to configure out
system timer support. If accuracy is very important then
implementing arch_busy_wait() should be considered.
config XIP
bool "Execute in place"
help
This option allows the kernel to operate with its text and read-only
sections residing in ROM (or similar read-only memory). Not all boards
support this option so it must be used with care; you must also
supply a linker command file when building your image. Enabling this
option increases both the code and data footprint of the image.
menu "Initialization Priorities"
config KERNEL_INIT_PRIORITY_OBJECTS
int "Kernel objects initialization priority"
default 30
help
Kernel objects use this priority for initialization. This
priority needs to be higher than the minimal default initialization
priority.
config KERNEL_INIT_PRIORITY_DEFAULT
int "Default init priority"
default 40
help
Default minimal init priority for each init level.
config KERNEL_INIT_PRIORITY_DEVICE
int "Default init priority for device drivers"
default 50
help
Device drivers that depend on common components, such as an
interrupt controller, but do not depend on other devices, use
this init priority.
config APPLICATION_INIT_PRIORITY
int "Default init priority for application level drivers"
default 90
help
This priority level is for end-user drivers such as sensors and
displays which have no inward dependencies.
endmenu
menu "Security Options"
config STACK_CANARIES
bool "Compiler stack canaries"
depends on ENTROPY_GENERATOR || TEST_RANDOM_GENERATOR
select NEED_LIBC_MEM_PARTITION if !STACK_CANARIES_TLS
help
This option enables compiler stack canaries.
If stack canaries are supported by the compiler, it will emit
extra code that inserts a canary value into the stack frame when
a function is entered and validates this value upon exit.
Stack corruption (such as that caused by buffer overflow) results
in a fatal error condition for the running entity.
Enabling this option can result in a significant increase
in footprint and an associated decrease in performance.
If stack canaries are not supported by the compiler an error
will occur at build time.
if STACK_CANARIES
config STACK_CANARIES_TLS
bool "Stack canaries using thread local storage"
depends on THREAD_LOCAL_STORAGE
depends on ARCH_HAS_STACK_CANARIES_TLS
help
This option enables compiler stack canaries on TLS.
Stack canaries will live in the thread local storage and
each thread will have its own canary. This makes it harder
to predict the canary location and value.
When enabled this causes an additional performance penalty
during thread creation because a new random value is needed
per thread.
endif
config EXECUTE_XOR_WRITE
bool "W^X for memory partitions"
depends on USERSPACE
depends on ARCH_HAS_EXECUTABLE_PAGE_BIT
default y
help
When enabled, the kernel will enforce that a writable page isn't
executable and vice versa. This might not be acceptable in all scenarios,
so this option is given for those unafraid of shooting themselves
in the foot.
If unsure, say Y.
config STACK_POINTER_RANDOM
int "Initial stack pointer randomization bounds"
depends on !STACK_GROWS_UP
depends on MULTITHREADING
depends on TEST_RANDOM_GENERATOR || ENTROPY_HAS_DRIVER
default 0
help
This option performs a limited form of Address Space Layout
Randomization by offsetting some random value to a thread's
initial stack pointer upon creation. This hinders some types of
security attacks by making the location of any given stack frame
non-deterministic.
This feature can waste up to the specified number of bytes of stack
space, which is carved out of the total size of the stack region.
A reasonable minimum value would be around 100 bytes if this can
be spared.
This is currently only implemented for systems whose stack pointers
grow towards lower memory addresses.
config BOUNDS_CHECK_BYPASS_MITIGATION
bool "Bounds check bypass mitigations for speculative execution"
depends on USERSPACE
help
Untrusted parameters from user mode may be used in system calls to
index arrays during speculative execution, also known as the Spectre
V1 vulnerability. When enabled, various macros defined in
misc/speculation.h will insert fence instructions or other appropriate
mitigations after bounds checking any array index parameters passed
in from untrusted sources (user mode threads). When disabled, these
macros do nothing.
endmenu
config MAX_DOMAIN_PARTITIONS
int "Maximum number of partitions per memory domain"
default 16
range 0 255
depends on USERSPACE
help
Configure the maximum number of partitions per memory domain.
config ARCH_MEM_DOMAIN_DATA
bool
depends on USERSPACE
help
This hidden option is selected by the target architecture if
architecture-specific data is needed on a per memory domain basis.
If so, the architecture defines a 'struct arch_mem_domain' which is
embedded within every struct k_mem_domain. The architecture
must also define the arch_mem_domain_init() function to set this up
when a memory domain is created.
Typical uses might be a set of page tables for that memory domain.
config ARCH_MEM_DOMAIN_SYNCHRONOUS_API
bool
depends on USERSPACE
help
This hidden option is selected by the target architecture if
modifying a memory domain's partitions at runtime, or changing
a memory domain's thread membership requires synchronous calls
into the architecture layer.
If enabled, the architecture layer must implement the following
APIs:
arch_mem_domain_thread_add
arch_mem_domain_thread_remove
arch_mem_domain_partition_remove
arch_mem_domain_partition_add
It's important to note that although supervisor threads can be
members of memory domains, this has no implications for supervisor
thread access to memory. Memory domain APIs may only be invoked from
supervisor mode.
For these reasons, on uniprocessor systems unless memory access
policy is managed in separate software constructions like page
tables, these APIs don't need to be implemented as the underlying
memory management hardware will be reprogrammed on context switch
anyway.
menu "SMP Options"
config SMP
bool "Symmetric multiprocessing support"
depends on USE_SWITCH
depends on !ATOMIC_OPERATIONS_C
help
When true, the kernel will be built with SMP support, allowing
more than one CPU to schedule Zephyr tasks at a time.
config USE_SWITCH
bool "Use new-style _arch_switch instead of arch_swap"
depends on USE_SWITCH_SUPPORTED
help
The _arch_switch() API is a lower level context switching
primitive than the original arch_swap mechanism. It is required
for an SMP-aware scheduler, or if the architecture does not
provide arch_swap. In uniprocessor situations where the
architecture provides both, _arch_switch incurs somewhat more
overhead and may be slower.
config USE_SWITCH_SUPPORTED
bool
help
Indicates whether _arch_switch() API is supported by the
currently enabled platform. This option should be selected by
platforms that implement it.
config SMP_BOOT_DELAY
bool "Delay booting secondary cores"
depends on SMP
help
By default Zephyr will boot all available CPUs during start up.
Select this option to skip this and allow architecture code to boot
secondary CPUs at a later time.
config MP_NUM_CPUS
int "Number of CPUs/cores [DEPRECATED]"
default MP_MAX_NUM_CPUS
range 1 12
help
This is deprecated, please use MP_MAX_NUM_CPUS instead.
config MP_MAX_NUM_CPUS
int "Maximum number of CPUs/cores"
default 1
range 1 12
help
Maximum number of multiprocessing-capable cores available to the
multicpu API and SMP features.
config SCHED_IPI_SUPPORTED
bool
help
True if the architecture supports a call to
arch_sched_ipi() to broadcast an interrupt that will call
z_sched_ipi() on other CPUs in the system. Required for
k_thread_abort() to operate with reasonable latency
(otherwise we might have to wait for the other thread to
take an interrupt, which can be arbitrarily far in the
future).
config TRACE_SCHED_IPI
bool "Test IPI"
help
When true, this will add a hook into z_sched_ipi(), in order
to check whether the scheduler IPI has been called or not, for
testing purposes.
depends on SCHED_IPI_SUPPORTED
depends on MP_MAX_NUM_CPUS>1
config KERNEL_COHERENCE
bool "Place all shared data into coherent memory"
depends on ARCH_HAS_COHERENCE
default y if SMP && MP_MAX_NUM_CPUS > 1
select THREAD_STACK_INFO
help
When available and selected, the kernel will build in a mode
where all shared data is placed in multiprocessor-coherent
(generally "uncached") memory. Thread stacks will remain
cached, as will application memory declared with
__incoherent. This is intended for Zephyr SMP kernels
running on cache-incoherent architectures only. Note that
when this is selected, there is an implicit API change that
assumes cache coherence to any memory passed to the kernel.
Code that creates kernel data structures in uncached regions
may fail strangely. Some assertions exist to catch these
mistakes, but not all circumstances can be tested.
config TICKET_SPINLOCKS
bool "Ticket spinlocks for lock acquisition fairness [EXPERIMENTAL]"
select EXPERIMENTAL
help
The basic spinlock implementation is based on a single
atomic variable and doesn't guarantee locking fairness
across multiple CPUs. It's even possible that a single CPU
will win the contention every time, which will result
in live-lock.
Ticket spinlocks provide a FIFO order of lock acquisition,
which resolves this unfairness at the cost of a slightly
increased memory footprint.
endmenu
config TICKLESS_KERNEL
bool "Tickless kernel"
default y if TICKLESS_CAPABLE
depends on TICKLESS_CAPABLE
help
This option enables a fully event driven kernel. Periodic system
clock interrupt generation is stopped at all times.
config TOOLCHAIN_SUPPORTS_THREAD_LOCAL_STORAGE
bool
default y if "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "zephyr" || "$(ZEPHYR_TOOLCHAIN_SUPPORTS_THREAD_LOCAL_STORAGE)" = "y"
help
Hidden option to signal that toolchain supports generating code
with thread local storage.
config THREAD_LOCAL_STORAGE
bool "Thread Local Storage (TLS)"
depends on ARCH_HAS_THREAD_LOCAL_STORAGE && TOOLCHAIN_SUPPORTS_THREAD_LOCAL_STORAGE
select NEED_LIBC_MEM_PARTITION if (CPU_CORTEX_M && USERSPACE)
help
This option enables thread local storage (TLS) support in the kernel.
endmenu
menu "Device Options"
config DEVICE_DEPS
bool "Store device dependencies"
help
When enabled, device dependencies will be stored so that they can be
queried at runtime. Device dependencies are typically inferred from
devicetree. Enabling this option will increase ROM usage (or RAM if
dynamic device dependencies are enabled).
config DEVICE_DEPS_DYNAMIC
bool "Dynamic device dependencies"
depends on DEVICE_DEPS
help
Option that makes it possible to manipulate device dependencies at
runtime.
config DEVICE_MUTABLE
bool "Mutable devices [EXPERIMENTAL]"
select EXPERIMENTAL
help
Support mutable devices. Mutable devices are instantiated in SRAM
instead of Flash and are runtime modifiable in kernel mode.
endmenu
rsource "Kconfig.vm"