Adding `USEMODULE += core_mutex_debug` to your `Makefile` results in
log messages such as
`[mutex] waiting for thread 1 (pc = 0x800024d)`
being emitted whenever `mutex_lock()` blocks. This makes tracking down
deadlocks easier.
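As a hedged illustration (the application code below is made up; only the module name and the log format come from the text above), a simple self-deadlock like this would trigger the message when the second `mutex_lock()` blocks:

```c
#include "mutex.h"

static mutex_t lock = MUTEX_INIT;

int main(void)
{
    mutex_lock(&lock);
    /* with USEMODULE += core_mutex_debug, the following blocking call
     * logs something like "[mutex] waiting for thread 1 (pc = ...)" */
    mutex_lock(&lock);   /* deadlocks: the lock is never released */
    return 0;
}
```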
This restores a pre-existing design decision to implement both
blocking and non-blocking mutex locking with the same code. The two
implementations had been split prior to the introduction of
the `core_mutex_priority_inheritance` module, back when `mutex_trylock()`
was indeed trivial. That split didn't age well, so this undoes it.
This fixes https://github.com/RIOT-OS/RIOT/issues/18545: the code
previously relied on `sched_change_priority()` not scheduling a new
thread directly while IRQs are disabled, but only later, once IRQs are
restored. This is true for Cortex-M MCUs (where the PendSV IRQ is used
to trigger the scheduler), but not, e.g., for AVR.
This is intended for the bootloader module where we don't enter thread
mode, so mutex must never attempt to switch context.
Instead use a simple busy wait that is enough to make the possible mutex
users (e.g. interrupt based SPI) in bootloader mode work.
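A minimal sketch of what such a busy-waiting lock can look like (assuming the usual `mutex_trylock()` semantics of returning non-zero on success; this is an illustration, not the actual bootloader code):

```c
#include "mutex.h"

void mutex_lock(mutex_t *mutex)
{
    /* no thread mode, hence no context switching: just spin until the
     * mutex is released from interrupt context (e.g. the SPI ISR) */
    while (!mutex_trylock(mutex)) {
        /* busy wait */
    }
}
```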
Due to limited compatibility between C and C++, the inline `mutex_trylock()`
implementation cannot be used from C++. Instead, we provide a
`mutex_trylock_ffi()` intended for foreign function interfaces. This should
also benefit Rust users.
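A hedged sketch of what such an FFI wrapper amounts to (the exact signature is an assumption; the point is simply an out-of-line symbol wrapping the inline implementation):

```c
#include <stdbool.h>
#include "mutex.h"

/* real, linkable symbol that C++ or Rust bindings can call */
bool mutex_trylock_ffi(mutex_t *mutex)
{
    return mutex_trylock(mutex);
}
```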
Add a cancelable version of `mutex_lock()` under the obvious name
`mutex_lock_cancelable()`. This function returns `0` on success, and
`-ECANCELED` when the calling thread was unblocked via a call to
`mutex_cancel()` (and hence without obtaining the mutex).
This is intended to simplify the implementation of `xtimer_mutex_lock_timeout()`
and to implement `ztimer_mutex_lock_timeout()`.
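A hedged usage sketch (the `mutex_cancel_t` handle and `mutex_cancel_init()` are assumptions about the surrounding API, not spelled out above):

```c
#include <errno.h>
#include "mutex.h"

static mutex_t lock = MUTEX_INIT;

int wait_for_resource(void)
{
    /* assumed helper tying a cancellation handle to the mutex; a timeout
     * callback holding a pointer to `mc` could later call mutex_cancel(&mc) */
    mutex_cancel_t mc = mutex_cancel_init(&lock);

    if (mutex_lock_cancelable(&mc) == -ECANCELED) {
        /* unblocked via mutex_cancel(), so we do NOT own the mutex here */
        return -ECANCELED;
    }

    /* ... we own the mutex here ... */
    mutex_unlock(&lock);
    return 0;
}
```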
- Split out handling of the blocking code path of `mutex_lock()` into a static
  `_block()` function. This improves readability a bit and will ease review of
  a follow-up PR.
- Return `void` instead of `int`.
- Use a static inline function for `mutex_try_lock()` (a sketch follows this list)
    - The implementation is trivial enough with the inline-able IRQ API to just
      always be inlined
- Rename `_mutex_lock()` to `mutex_lock()` and drop the blocking parameter
    - This was made possible by the stand-alone `mutex_try_lock()` implementation
    - This yields a measurable performance bump
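As a hedged sketch of the inline-able trylock built directly on the IRQ API (the internal `queue.next` field and the `MUTEX_LOCKED` marker are assumptions for illustration, not the actual core code):

```c
#include <stddef.h>
#include "irq.h"
#include "mutex.h"

/* assumed marker for "locked, no waiters" */
#ifndef MUTEX_LOCKED
#define MUTEX_LOCKED ((list_node_t *)-1)
#endif

static inline int mutex_trylock_sketch(mutex_t *mutex)
{
    unsigned irq_state = irq_disable();
    int success = 0;

    if (mutex->queue.next == NULL) {
        /* mutex was unlocked: mark it as locked with an empty wait queue */
        mutex->queue.next = MUTEX_LOCKED;
        success = 1;
    }

    irq_restore(irq_state);
    return success;
}
```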
Replace accesses to `sched_active_thread`, `sched_active_pid`, and
`sched_threads` with `thread_get_active()`, `thread_get_active_pid()`, and
`thread_get_unchecked()` where sensible.
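For illustration (hedged; the accessor names are taken from the text above, the surrounding function is made up), the migration is a mechanical replacement along these lines:

```c
#include "sched.h"
#include "thread.h"

void example(void)
{
    /* before: direct access to scheduler globals */
    thread_t *me_old = (thread_t *)sched_active_thread;

    /* after: accessor functions */
    thread_t *me_new  = thread_get_active();
    kernel_pid_t pid  = thread_get_active_pid();
    thread_t *other   = thread_get_unchecked(pid);

    (void)me_old; (void)me_new; (void)other;
}
```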
Fixes #1708.
Currently, involuntary preemption causes the current thread not only to
yield for a higher-priority thread, but also for all other threads of its
own priority class.
This PR adds the function `thread_yield_higher()`, which will yield the
current thread in favor of higher-priority threads, but not for
threads of its own priority class.
Boards now need to implement `thread_yield_higher()` instead of
`thread_yield()`, but `COREIF_NG` boards are not affected in any way.
`thread_yield()` retains its old meaning: yield for every thread that
has the same or a higher priority.
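A hedged illustration of the difference (the surrounding functions are made up; only the two yield calls and their semantics come from the text above):

```c
#include "thread.h"

void cooperative_point(void)
{
    /* old/unchanged semantics: lets every ready thread of the same or
     * higher priority run before we continue */
    thread_yield();
}

void preemption_point(void)
{
    /* new helper: only switches if a strictly higher-priority thread is
     * ready; threads of our own priority class are not scheduled */
    thread_yield_higher();
}
```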
This PR does not touch the occurrences of `thread_yield()` in the periph
drivers, because the author of this PR did not look into the logic of
the various driver implementations.
Instead of using differing integer types, use `kernel_pid_t` for process
identifiers. This type is introduced in a new header file to avoid
circular dependencies.
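A minimal sketch of what such a header can look like (the file name, the chosen integer width, and the `KERNEL_PID_UNDEF` constant are assumptions):

```c
/* kernel_types.h -- hypothetical stand-alone header with no core includes,
 * so that thread, msg, and sched headers can all use kernel_pid_t
 * without pulling each other in */
#ifndef KERNEL_TYPES_H
#define KERNEL_TYPES_H

#include <stdint.h>

/** unique identifier of a thread (process) */
typedef int16_t kernel_pid_t;

/** canonical "no thread" value */
#define KERNEL_PID_UNDEF    (0)

#endif /* KERNEL_TYPES_H */
```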
- Included a collection of CPU-dependent headers in `core/include/arch`
- Extracted all interfaces that need to be implemented for a CPU
- Created a mapping between those interfaces and the old ones
- Added a flag for disabling the arch interface
- Added missing state to the `lpm_arch` interface
- Added an arch interface for reboot
- Fixed newline issues that were pointed out
- Fixed documentation of the CPU-core interface