# Copyright (c) 2024 Intel Corp.
# SPDX-License-Identifier: Apache-2.0
#
menu "SMP Options"

config SMP
	bool "Symmetric multiprocessing support"
	depends on USE_SWITCH
	depends on !ATOMIC_OPERATIONS_C
	help
	  When true, the kernel will be built with SMP support, allowing
	  more than one CPU to schedule Zephyr tasks at a time.

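# With CONFIG_SMP enabled, threads on different CPUs can run truly in
# parallel, so shared data must be protected with spinlocks rather than by
# relying on the scheduler alone. A minimal illustrative C sketch
# (application-side, not part of this Kconfig file) using the existing
# k_spinlock API:
#
#   #include <zephyr/kernel.h>
#
#   static struct k_spinlock counter_lock;
#   static unsigned int shared_counter;
#
#   void bump_counter(void)
#   {
#       /* Masks interrupts locally and spins if another CPU holds the lock */
#       k_spinlock_key_t key = k_spin_lock(&counter_lock);
#
#       shared_counter++;
#       k_spin_unlock(&counter_lock, key);
#   }
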
config USE_SWITCH
	bool "Use new-style _arch_switch instead of arch_swap"
	depends on USE_SWITCH_SUPPORTED
	help
	  The _arch_switch() API is a lower level context switching
	  primitive than the original arch_swap mechanism. It is required
	  for an SMP-aware scheduler, or if the architecture does not
	  provide arch_swap. In uniprocessor situations where the
	  architecture provides both, _arch_switch incurs somewhat more
	  overhead and may be slower.

config USE_SWITCH_SUPPORTED
	bool
	help
	  Indicates whether the _arch_switch() API is supported by the
	  currently enabled platform. This option should be selected by
	  platforms that implement it.

config SMP_BOOT_DELAY
	bool "Delay booting secondary cores"
	depends on SMP
	help
	  By default Zephyr will boot all available CPUs during start up.
	  Select this option to skip this and allow custom code
	  (architecture/SoC/board/application) to boot secondary CPUs at
	  a later time.

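# When SMP_BOOT_DELAY is set, only the boot CPU comes up automatically and
# the remaining CPUs must be started explicitly. A hedged C sketch, assuming
# the k_smp_cpu_start() helper available in recent Zephyr kernels; the
# per-CPU init hook and the CPU number below are illustrative:
#
#   #include <zephyr/kernel.h>
#   #include <zephyr/kernel/smp.h>
#
#   static void cpu1_init(void *arg)
#   {
#       ARG_UNUSED(arg);
#       /* Runs on CPU 1 before it enters the scheduler */
#   }
#
#   void start_second_cpu_later(void)
#   {
#       /* Bring CPU 1 online once the application decides it is needed */
#       k_smp_cpu_start(1, cpu1_init, NULL);
#   }
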
config MP_MAX_NUM_CPUS
	int "Maximum number of CPUs/cores"
	default 1
	range 1 12
	help
	  Maximum number of multiprocessing-capable cores available to the
	  multicpu API and SMP features.

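# CONFIG_MP_MAX_NUM_CPUS is a compile-time upper bound; the number of CPUs
# actually brought up at runtime may be lower. An illustrative C sketch,
# assuming arch_num_cpus() reports the runtime CPU count on the target:
#
#   #include <zephyr/kernel.h>
#
#   /* Statically sized for the worst case configured in Kconfig */
#   static uint32_t per_cpu_stat[CONFIG_MP_MAX_NUM_CPUS];
#
#   void reset_stats(void)
#   {
#       for (unsigned int i = 0; i < arch_num_cpus(); i++) {
#           per_cpu_stat[i] = 0;
#       }
#   }
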
config SCHED_IPI_SUPPORTED
	bool
	help
	  True if the architecture supports a call to arch_sched_broadcast_ipi()
	  to broadcast an interrupt that will call z_sched_ipi() on other CPUs
	  in the system. Required for k_thread_abort() to operate with
	  reasonable latency (otherwise we might have to wait for the other
	  thread to take an interrupt, which can be arbitrarily far in the
	  future).

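# This is a hidden symbol selected by architecture ports. A rough C sketch
# of what a port provides when it selects SCHED_IPI_SUPPORTED; the interrupt
# controller call and ISR registration are hypothetical and platform
# specific, and z_sched_ipi() is a kernel-internal hook:
#
#   void arch_sched_broadcast_ipi(void)
#   {
#       /* Hypothetical controller call raising the IPI on all other CPUs */
#       raise_ipi_on_all_other_cpus();
#   }
#
#   static void sched_ipi_isr(const void *arg)
#   {
#       ARG_UNUSED(arg);
#       /* Kernel hook that re-evaluates scheduling on the receiving CPU */
#       z_sched_ipi();
#   }
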
config SCHED_IPI_CASCADE
	bool "Use cascading IPIs to correct localized scheduling"
	depends on SCHED_CPU_MASK && !SCHED_CPU_MASK_PIN_ONLY
	default n
	help
	  Threads that are preempted by a local thread (a thread that is
	  restricted by its CPU mask to execute on a subset of all CPUs) may
	  trigger additional IPIs when the preempted thread is of higher
	  priority than a currently executing thread on another CPU. Although
	  these cascading IPIs ensure that the system settles upon a valid
	  set of high priority threads, they come at a performance cost.

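# A "local" thread in the sense above is one whose CPU mask pins it to a
# subset of CPUs. A hedged C sketch using the existing CPU-mask API; the
# thread object and CPU number are illustrative, and the mask may only be
# changed while the thread is not runnable:
#
#   #include <zephyr/kernel.h>
#
#   extern struct k_thread worker_thread;   /* assumed, created elsewhere */
#
#   void pin_worker_to_cpu0(void)
#   {
#       k_thread_cpu_mask_clear(&worker_thread);       /* run nowhere ... */
#       k_thread_cpu_mask_enable(&worker_thread, 0);   /* ...except CPU 0 */
#   }
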
config TRACE_SCHED_IPI
	bool "Test IPI"
	help
	  When true, adds a hook into z_sched_ipi() in order to check
	  whether the scheduling IPI has been called, for testing
	  purposes.
	depends on SCHED_IPI_SUPPORTED
	depends on MP_MAX_NUM_CPUS>1

config IPI_OPTIMIZE
	bool "Optimize IPI delivery"
	default n
	depends on SCHED_IPI_SUPPORTED && MP_MAX_NUM_CPUS>1
	help
	  When selected, the kernel will attempt to determine the minimum
	  set of CPUs that need an IPI to trigger a reschedule in response to
	  a thread newly made ready for execution. This increases the
	  computation required at every scheduler operation by a value that is
	  O(N) in the number of CPUs, and in exchange reduces the number of
	  interrupts delivered. Which to choose depends on application
	  behavior. If the architecture also supports directing IPIs to
	  specific CPUs then this has the potential to significantly
	  reduce the number of IPIs (and consequently ISRs) processed by the
	  system as the number of CPUs increases. If not, the only benefit
	  would be to not issue any IPIs if the newly readied thread is of
	  lower priority than all the threads currently executing on other CPUs.

config KERNEL_COHERENCE
	bool "Place all shared data into coherent memory"
	depends on ARCH_HAS_COHERENCE
	default y if SMP && MP_MAX_NUM_CPUS > 1
	select THREAD_STACK_INFO
	help
	  When available and selected, the kernel will build in a mode
	  where all shared data is placed in multiprocessor-coherent
	  (generally "uncached") memory. Thread stacks will remain
	  cached, as will application memory declared with
	  __incoherent. This is intended for Zephyr SMP kernels
	  running on cache-incoherent architectures only. Note that
	  when this is selected, there is an implicit API change that
	  assumes cache coherence for any memory passed to the kernel.
	  Code that creates kernel data structures in uncached regions
	  may fail strangely. Some assertions exist to catch these
	  mistakes, but not all circumstances can be tested.

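# Under KERNEL_COHERENCE, anything handed to kernel APIs (queues, semaphores,
# thread objects, and so on) must live in coherent memory, which is where
# ordinary globals are placed in this mode. Data that is never shared with
# the kernel can be tagged with __incoherent, as mentioned above, so it
# stays in fast cached memory. An illustrative C sketch:
#
#   #include <zephyr/kernel.h>
#
#   /* Shared with the kernel: left in coherent ("uncached") memory */
#   static struct k_sem ready_sem;
#
#   /* Local scratch buffer, never passed to kernel APIs, so it may stay
#    * cached for speed */
#   static __incoherent uint8_t scratch[1024];
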
config TICKET_SPINLOCKS
	bool "Ticket spinlocks for lock acquisition fairness [EXPERIMENTAL]"
	select EXPERIMENTAL
	help
	  The basic spinlock implementation is based on a single
	  atomic variable and doesn't guarantee locking fairness
	  across multiple CPUs. It is even possible that a single CPU
	  wins the contention every time, which results in a live-lock.
	  Ticket spinlocks provide a FIFO order of lock acquisition,
	  which resolves such unfairness at the cost of a slightly
	  increased memory footprint.

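# Conceptually, a ticket lock replaces the single test-and-set flag with two
# counters: waiters take a ticket and spin until their number comes up. A
# simplified, illustrative C sketch of the idea (not the kernel's actual
# implementation; existing k_spin_lock()/k_spin_unlock() callers need no
# source changes when this option is enabled):
#
#   #include <zephyr/sys/atomic.h>
#
#   struct ticket_lock {
#       atomic_t next;    /* next ticket to hand out */
#       atomic_t owner;   /* ticket currently allowed to hold the lock */
#   };
#
#   static inline void ticket_lock_acquire(struct ticket_lock *l)
#   {
#       atomic_val_t my_ticket = atomic_inc(&l->next);  /* take a ticket */
#
#       while (atomic_get(&l->owner) != my_ticket) {
#           /* spin: FIFO order, each CPU waits for its ticket to come up */
#       }
#   }
#
#   static inline void ticket_lock_release(struct ticket_lock *l)
#   {
#       (void)atomic_inc(&l->owner);   /* pass the lock to the next waiter */
#   }
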
endmenu