Verified that each warning generated by -Wcast-align is indeed a false positive
and used an (intermediate) cast to `uintptr_t` to silence the warnings.
container_of() is safe to use with regard to alignment requirements, when used
correctly. Using `uintptr_t` instead of `char *` for applying the offset keeps
-Wcast-align from complaining.
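A minimal sketch of the pattern (the macro shape here is illustrative, not
necessarily the project's exact definition):

```c
#include <stddef.h>
#include <stdint.h>

/* Casting through uintptr_t instead of char * avoids forming an
 * intermediate pointer type with stricter alignment, so -Wcast-align
 * has nothing to flag. */
#define container_of(PTR, TYPE, MEMBER) \
    ((TYPE *)((uintptr_t)(PTR) - offsetof(TYPE, MEMBER)))
```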
Separate thread names from DEVELHELP so that thread names can be
enabled in non-development/debug builds when required or desired.
THREAD_NAMES will be enabled by default when DEVELHELP is set to 1.
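For illustration, assuming the existing `thread_getname()`/`thread_getpid()`
API, a build with THREAD_NAMES but without DEVELHELP could still resolve
names:

```c
#include <stdio.h>
#include "thread.h"

/* Prints the current thread's name; with THREAD_NAMES enabled
 * independently of DEVELHELP this also works in production builds. */
void print_current_thread_name(void)
{
    printf("running: %s\n", thread_getname(thread_getpid()));
}
```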
Allocate and initialize a thread-local block for each thread at the
top of the stack.
Set the TLS base when switching to a new thread.
Add tdata/tbss linker instructions to cortex_m and risc-v scripts.
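With the TLS block and the tdata/tbss sections in place, thread-local
variables behave as expected; a minimal illustration:

```c
/* Each thread sees its own copy of a __thread variable, backed by the
 * TLS block allocated at the top of its stack. */
static __thread unsigned events_handled;

void handle_event(void)
{
    events_handled++; /* increments this thread's counter only */
}
```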
Signed-off-by: Keith Packard <keithp@keithp.com>
---
v2:
Squash fixes
v3:
Replace tabs with spaces
v4:
Add tbss to fe310 linker script
- Add `byteorder_bebuftohll()` to read a 64-bit value from a big endian buffer
- Add `byteorder_htobebufll()` to write a 64-bit value into a big endian buffer
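Assumed usage, following the shape of the existing 16/32-bit byteorder
buffer helpers:

```c
#include <stdint.h>
#include "byteorder.h"

void example(void)
{
    uint8_t buf[8];

    byteorder_htobebufll(buf, 0x0102030405060708ULL); /* host -> buffer */
    uint64_t value = byteorder_bebuftohll(buf);       /* buffer -> host */
    (void)value;
}
```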
This commit reverses the runqueue_cache bit order when the architecture
has a CLZ (count leading zeros) instruction. With CLZ available, it is
faster to determine the most significant set bit of a word than the
least significant set bit; when the instruction is not available, the
least significant set bit remains cheaper to find.
Reversing the bit order shaves off another 4 cycles on the same54-xpro:
from 147 to 143 ticks when testing with tests/bench_sched_nop.
Architectures without a CLZ instruction are not affected.
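Roughly, the idea looks like this (simplified sketch, not the actual
scheduler code):

```c
#include <stdint.h>

static uint32_t runqueue_bitcache;

/* With the bit order reversed, priority 0 (highest) occupies the most
 * significant bit ... */
static inline void queue_set(unsigned priority)
{
    runqueue_bitcache |= UINT32_C(1) << (31 - priority);
}

/* ... so a single clz yields the highest-priority non-empty queue. */
static inline unsigned highest_priority(void)
{
    return __builtin_clz(runqueue_bitcache); /* bitcache must be non-zero */
}
```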
Apparently clang doesn't like static variables / functions being accessed or
called from inline functions (-Wstatic-in-inline). This commit results in the
same binary being generated while making clang happy.
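The pattern clang objects to looks roughly like this (illustrative names):

```c
static int helper(void) { return 42; }

/* clang: static function 'helper' is used in an inline function with
 * external linkage [-Wstatic-in-inline] */
inline int api_call(void) { return helper(); }
```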
Replace accesses to `sched_active_thread`, `sched_active_pid`, and
`sched_threads` with `thread_get_active()`, `thread_get_active_pid()`, and
`thread_get_unchecked()` where sensible.
- Add `thread_get_active()` to access the TCB
- Add `thread_get_unchecked()` as fast alternative to `thread_get()`
- Drop `volatile` qualifier in `thread_get()`
  - Right now every caller of this function drops the qualifier anyway.
    It is better to contain this undefined behavior in a single place in
    the code
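Illustrative replacements (function names from this commit; exact
signatures assumed):

```c
#include "thread.h"

void example(kernel_pid_t pid)
{
    thread_t *active  = thread_get_active();       /* sched_active_thread */
    kernel_pid_t me   = thread_get_active_pid();   /* sched_active_pid    */
    thread_t *other   = thread_get_unchecked(pid); /* sched_threads[pid]  */

    (void)active; (void)me; (void)other;
}
```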
The `clz` instruction pretty much implements getting the most significant set
bit in hardware, so use it instead of the software implementation.
This results in both a reduction in code size and a speedup:
master:
text data bss dec hex filename
14816 136 2424 17376 43e0 tests/bitarithm_timings/bin/same54-xpro/tests_bitarithm_timings.elf
+ bitarithm_msb: 3529411 iterations per second
this patch:
text data bss dec hex filename
14768 136 2424 17328 43b0 tests/bitarithm_timings/bin/same54-xpro/tests_bitarithm_timings.elf
+ bitarithm_msb: 9230761 iterations per second
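A sketch of how such a helper can map onto the compiler builtin (the real
implementation may differ):

```c
/* 31 - clz(v) is the index of the most significant set bit; the builtin
 * compiles down to a single clz instruction on Cortex-M. */
static inline unsigned msb(unsigned v)
{
    return 8 * sizeof(v) - 1 - __builtin_clz(v); /* v must be non-zero */
}
```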
Big endian buffers on big endian systems are already in big endian byte order,
so no byte shuffling is needed. However, byte buffers might be unaligned, so
copy operations that are safe with unaligned memory accesses need to be
used.
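On a big endian host the read can therefore be a plain copy; a sketch
(the helper name is made up):

```c
#include <stdint.h>
#include <string.h>

static uint64_t bebuf_to_u64(const uint8_t *buf)
{
    uint64_t result;

    /* memcpy is safe on unaligned buffers; on a big endian host the
     * bytes are already in the right order, so nothing is shuffled */
    memcpy(&result, buf, sizeof(result));
    return result;
}
```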
It can be desirable to not have the boot message printed each time
(e.g. when logs are transferred over a wireless link on battery power)
while still retaining the ability to receive INFO level logs.
This adds the option to disable the boot-up message (and also to customize
it, if that is desirable).
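Conceptually (the option names and message text here are assumptions for
illustration, not necessarily the ones this commit introduces):

```c
#include "log.h"

#ifndef CONFIG_BOOT_MSG_STRING
#define CONFIG_BOOT_MSG_STRING "main(): This is RIOT!"
#endif

void print_boot_msg(void)
{
    /* skipped entirely when the build disables the boot message */
#ifndef CONFIG_SKIP_BOOT_MSG
    LOG_INFO(CONFIG_BOOT_MSG_STRING "\n");
#endif
}
```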
An interrupt serviced during the idle sleep can re-request a context
switch while the scheduler is already going to switch contexts after the
idle sleep. The sched_context_switch_request should thus be cleared
after the idle sleep and not before, where it could be modified during
the idle sleep and get out of sync.
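A simplified sketch of the intended ordering (not the literal scheduler
source):

```c
extern void sched_arch_idle(void);
extern volatile unsigned sched_context_switch_request;
extern volatile unsigned runqueue_bitcache;

static void sleep_until_runnable(void) /* illustrative wrapper */
{
    while (!runqueue_bitcache) {
        sched_arch_idle(); /* IRQs serviced here may set the request */
    }
    /* cleared after the sleep: the context switch that follows covers
     * any request raised during the sleep, so nothing gets out of sync */
    sched_context_switch_request = 0;
}
```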
When the no_thread_idle feature is active, the runqueue_bitcache is
checked twice in the case that no thread is available to schedule. This
changes the inner while loop to a do-while loop, skipping the redundant
check on the initial loop iteration and saving a cycle or so in the
idle case.
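The change in shape, simplified (the caller only reaches this loop when
the bitcache is already known to be empty):

```c
extern void sched_arch_idle(void);
extern volatile unsigned runqueue_bitcache;

static void idle_loop_before(void)
{
    /* the first condition check repeats what the caller just tested */
    while (!runqueue_bitcache) {
        sched_arch_idle();
    }
}

static void idle_loop_after(void)
{
    /* enter the body directly, dropping the redundant initial check */
    do {
        sched_arch_idle();
    } while (!runqueue_bitcache);
}
```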