# Copyright (c) 2024 Intel Corp.
# SPDX-License-Identifier: Apache-2.0
#
menu "SMP Options"

config SMP
	bool "Symmetric multiprocessing support"
	depends on USE_SWITCH
	depends on !ATOMIC_OPERATIONS_C
	help
	  When true, the kernel will be built with SMP support, allowing
	  more than one CPU to schedule Zephyr tasks at a time.

config USE_SWITCH
	bool "Use new-style _arch_switch instead of arch_swap"
	depends on USE_SWITCH_SUPPORTED
	help
	  The _arch_switch() API is a lower level context switching
	  primitive than the original arch_swap mechanism. It is required
	  for an SMP-aware scheduler, or if the architecture does not
	  provide arch_swap. In uniprocessor situations where the
	  architecture provides both, _arch_switch incurs somewhat more
	  overhead and may be slower.

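# For reference, a rough sketch of the two context-switch entry points this
# option chooses between (signatures as commonly documented for Zephyr
# architecture ports; treat this as an assumption, not the reference
# implementation). Architectures that implement the new-style primitive
# should also select USE_SWITCH_SUPPORTED below.
#
#	/* New-style: hand off to the thread identified by switch_to,
#	 * storing the outgoing thread's switch handle through switched_from.
#	 */
#	void arch_switch(void *switch_to, void **switched_from);
#
#	/* Old-style: swap away from the current thread, restoring the
#	 * interrupt state from 'key' once the swap completes.
#	 */
#	int arch_swap(unsigned int key);
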
config USE_SWITCH_SUPPORTED
	bool
	help
	  Indicates whether _arch_switch() API is supported by the
	  currently enabled platform. This option should be selected by
	  platforms that implement it.

config SMP_BOOT_DELAY
	bool "Delay booting secondary cores"
	depends on SMP
	help
	  By default Zephyr will boot all available CPUs during start up.
	  Select this option to skip this and allow custom code
	  (architecture/SoC/board/application) to boot secondary CPUs at
	  a later time.

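# A minimal sketch of the "boot later" flow this option enables, assuming the
# k_smp_cpu_start() helper is available on the target (its availability and
# exact signature depend on the Zephyr version and platform):
#
#	#include <zephyr/kernel.h>
#	#include <zephyr/kernel/smp.h>
#
#	/* Called from application code once CPU 1 is ready to be used;
#	 * with SMP_BOOT_DELAY=y the kernel did not start it at boot.
#	 */
#	static void bring_up_second_cpu(void)
#	{
#		/* No per-CPU init hook or argument needed in this sketch. */
#		k_smp_cpu_start(1, NULL, NULL);
#	}
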
config MP_MAX_NUM_CPUS
	int "Maximum number of CPUs/cores"
	default 1
	range 1 12
	help
	  Maximum number of multiprocessing-capable cores available to the
	  multicpu API and SMP features.

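# Illustration of how this build-time bound is typically consumed from C code:
# CONFIG_MP_MAX_NUM_CPUS sizes static per-CPU data, while the number of CPUs
# actually brought up is queried at run time (the arch_num_cpus() query used
# here is an assumption of this sketch):
#
#	#include <zephyr/kernel.h>
#
#	/* One counter slot per possible CPU, sized at build time. */
#	static uint32_t wakeups[CONFIG_MP_MAX_NUM_CPUS];
#
#	static uint32_t total_wakeups(void)
#	{
#		uint32_t sum = 0;
#
#		for (unsigned int i = 0; i < arch_num_cpus(); i++) {
#			sum += wakeups[i];
#		}
#		return sum;
#	}
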
config SCHED_IPI_SUPPORTED
	bool
	help
	  True if the architecture supports a call to arch_sched_broadcast_ipi()
	  to broadcast an interrupt that will call z_sched_ipi() on other CPUs
	  in the system. Required for k_thread_abort() to operate with
	  reasonable latency (otherwise we might have to wait for the other
	  thread to take an interrupt, which can be arbitrarily far in the
	  future).

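# A rough sketch of what an architecture port provides when it selects this
# option. The interrupt-controller details are placeholder assumptions; only
# arch_sched_broadcast_ipi() and z_sched_ipi() are named by the help text.
#
#	/* Raise the scheduler IPI on all other CPUs. */
#	void arch_sched_broadcast_ipi(void)
#	{
#		my_soc_raise_ipi_on_other_cpus();	/* hypothetical SoC hook */
#	}
#
#	/* IPI ISR installed by the port: let the scheduler re-evaluate. */
#	static void sched_ipi_isr(const void *unused)
#	{
#		ARG_UNUSED(unused);
#		z_sched_ipi();
#	}
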
config SCHED_IPI_CASCADE
	bool "Use cascading IPIs to correct localized scheduling"
	depends on SCHED_CPU_MASK && !SCHED_CPU_MASK_PIN_ONLY
	default n
	help
	  Threads that are preempted by a local thread (a thread that is
	  restricted by its CPU mask to execute on a subset of all CPUs) may
	  trigger additional IPIs when the preempted thread is of higher
	  priority than a currently executing thread on another CPU. Although
	  these cascading IPIs will ensure that the system settles upon a
	  valid set of high priority threads, they come at a performance cost.

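# Context for the scenario above: a "local thread" is one whose CPU mask pins
# it to a subset of CPUs (requires SCHED_CPU_MASK). A minimal sketch, assuming
# the thread is created suspended (K_FOREVER) so its mask may still be
# changed; the stack and entry symbols are made up for the example:
#
#	#include <zephyr/kernel.h>
#
#	K_THREAD_STACK_DEFINE(local_stack, 1024);
#	static struct k_thread local_thread;
#
#	static void start_pinned_thread(k_thread_entry_t entry)
#	{
#		k_tid_t tid = k_thread_create(&local_thread, local_stack,
#					      K_THREAD_STACK_SIZEOF(local_stack),
#					      entry, NULL, NULL, NULL,
#					      5, 0, K_FOREVER);
#
#		k_thread_cpu_pin(tid, 0);	/* restrict to CPU 0 only */
#		k_thread_start(tid);
#	}
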
config TRACE_SCHED_IPI
	bool "Test IPI"
	help
	  When true, a hook is added to z_sched_ipi() in order to check
	  whether the scheduler IPI has been called or not, for testing
	  purposes.
	depends on SCHED_IPI_SUPPORTED
	depends on MP_MAX_NUM_CPUS>1

config IPI_OPTIMIZE
	bool "Optimize IPI delivery"
	default n
	depends on SCHED_IPI_SUPPORTED && MP_MAX_NUM_CPUS>1
	help
	  When selected, the kernel will attempt to determine the minimum
	  set of CPUs that need an IPI to trigger a reschedule in response to
	  a thread newly made ready for execution. This increases the
	  computation required at every scheduler operation by a value that is
	  O(N) in the number of CPUs, and in exchange reduces the number of
	  interrupts delivered. Which to choose is going to depend on
	  application behavior. If the architecture also supports directing
	  IPIs to specific CPUs then this has the potential to significantly
	  reduce the number of IPIs (and consequently ISRs) processed by the
	  system as the number of CPUs increases. If not, the only benefit
	  would be to not issue any IPIs if the newly readied thread is of
	  lower priority than all the threads currently executing on other CPUs.

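# A sketch of the delivery decision this option enables. The helper and option
# names used here (cpus_needing_resched(), arch_sched_directed_ipi(),
# ARCH_HAS_DIRECTED_IPIS) are illustrative assumptions; only the broadcast
# form is named elsewhere in this file.
#
#	static void signal_reschedule(struct k_thread *readied)
#	{
#		/* O(N) scan: which CPUs run something lower priority? */
#		uint32_t targets = cpus_needing_resched(readied);
#
#		if (targets == 0) {
#			return;		/* no IPI needed at all */
#		}
#	#if defined(CONFIG_ARCH_HAS_DIRECTED_IPIS)	/* assumed option name */
#		arch_sched_directed_ipi(targets);	/* only CPUs in the mask */
#	#else
#		arch_sched_broadcast_ipi();		/* fall back to all CPUs */
#	#endif
#	}
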
config KERNEL_COHERENCE
	bool "Place all shared data into coherent memory"
	depends on ARCH_HAS_COHERENCE
	default y if SMP && MP_MAX_NUM_CPUS > 1
	select THREAD_STACK_INFO
	help
	  When available and selected, the kernel will build in a mode
	  where all shared data is placed in multiprocessor-coherent
	  (generally "uncached") memory. Thread stacks will remain
	  cached, as will application memory declared with
	  __incoherent. This is intended for Zephyr SMP kernels
	  running on cache-incoherent architectures only. Note that
	  when this is selected, there is an implicit API change that
	  assumes cache coherence to any memory passed to the kernel.
	  Code that creates kernel data structures in uncached regions
	  may fail strangely. Some assertions exist to catch these
	  mistakes, but not all circumstances can be tested.

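# Illustration of the __incoherent attribute mentioned above (a minimal
# sketch; the buffer and semaphore names are made up for the example):
#
#	#include <zephyr/kernel.h>
#
#	/* Large buffer the application manages and cache-flushes itself:
#	 * explicitly kept in cached (incoherent) memory.
#	 */
#	static __incoherent uint8_t dma_buf[4096];
#
#	/* Kernel objects must live in coherent memory when
#	 * KERNEL_COHERENCE=y, so plain static storage is used here.
#	 */
#	static struct k_sem dma_done;
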
config TICKET_SPINLOCKS
	bool "Ticket spinlocks for lock acquisition fairness [EXPERIMENTAL]"
	select EXPERIMENTAL
	help
	  The basic spinlock implementation is based on a single
	  atomic variable and doesn't guarantee locking fairness
	  across multiple CPUs. It's even possible that a single CPU
	  will win the contention every time, which will result
	  in a live-lock.
	  Ticket spinlocks provide a FIFO order of lock acquisition,
	  which resolves such unfairness at the cost of a slightly
	  increased memory footprint.

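# The choice is transparent to callers: the usual k_spinlock API is unchanged,
# only the contention behavior differs. A minimal usage sketch:
#
#	#include <zephyr/spinlock.h>
#
#	static struct k_spinlock lock;
#	static int shared_counter;
#
#	static void bump(void)
#	{
#		k_spinlock_key_t key = k_spin_lock(&lock);
#
#		/* With TICKET_SPINLOCKS=y, contending CPUs acquire the lock
#		 * in FIFO order instead of racing on a single flag.
#		 */
#		shared_counter++;
#		k_spin_unlock(&lock, key);
#	}
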
endmenu