We can achieve greater accuracy for the relative timer_set()
if we don't use the generic implementation.
Use the same approach as atmega_common: trigger the interrupt manually
for offsets that are too small.
tests/periph_timer_short_relative_set should now succeed for all intervals.
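A minimal sketch of the idea (the helper names set_compare() and
trigger_irq() are illustrative, not the actual RIOT API):

    /* illustrative stand-ins for the real low-level timer accessors */
    extern unsigned int timer_read_hw(void);
    extern void set_compare(unsigned int channel, unsigned int value);
    extern void trigger_irq(unsigned int channel);

    int timer_set_relative(unsigned int channel, unsigned int offset)
    {
        unsigned int now = timer_read_hw();
        set_compare(channel, now + offset);
        /* if the deadline already passed while the compare value was being
         * written, the hardware will never match: raise the IRQ manually */
        if ((timer_read_hw() - now) >= offset) {
            trigger_irq(channel);
        }
        return 0;
    }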
In most places, picolibc and newlib are the same, so use
the existing newlib code when compiling with picolibc.
Signed-off-by: Keith Packard <keithp@keithp.com>
Allocate and initialize a thread-local block for each thread at the
top of the stack.
Set the TLS base when switching to a new thread.
Add tdata/tbss linker instructions to cortex_m and risc-v scripts.
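A rough sketch of both steps, assuming picolibc's TLS helpers _init_tls()
and _set_tls(); the linker symbol names and the stack layout are simplified
for illustration:

    #include <stddef.h>

    /* provided by picolibc (prototypes repeated for illustration) */
    extern void _init_tls(void *tls);
    extern void _set_tls(void *tls);

    /* assumed symbols emitted by the tdata/tbss linker instructions;
     * .tdata and .tbss are taken to be laid out back to back */
    extern char __tdata_start[], __tbss_end[];

    /* at thread creation: carve the TLS block out of the top of the stack */
    void *thread_tls_init(char *stack_top)
    {
        char *tls = stack_top - (size_t)(__tbss_end - __tdata_start);
        _init_tls(tls);     /* copy .tdata and zero .tbss for this thread */
        return tls;         /* the thread's stack must start below this */
    }

    /* on every context switch: point libc at the new thread's block */
    void thread_tls_activate(void *tls)
    {
        _set_tls(tls);
    }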
Signed-off-by: Keith Packard <keithp@keithp.com>
---
v2:
Squash fixes
v3:
Replace tabs with spaces
v4:
Add tbss to fe310 linker script
Disable the newlib-nano stubs code when picolibc is in use
Signed-off-by: Keith Packard <keithp@keithp.com>
---
v2:
Squash in fixes
v3:
call stdio_init in _PICOLIBC_ mode to initialize uart
v4:
Remove the call to stdio_init from nanostubs_init; always
call it from cpu_init.
Picolibc makes atexit state per-thread instead of global, so we can't
register destructors with atexit in a non-thread context as we won't
have any TLS space initialized.
Signed-off-by: Keith Packard <keithp@keithp.com>
Support for picolibc as an alternative libc implementation is added with
this commit. For now, only cortex-m CPUs are supported.
Enable via PICOLIBC=1
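For example (board name illustrative):

    make BOARD=nucleo-f103rb PICOLIBC=1 all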
---
v2:
Squash in fixes
v3:
Remove the picolibc integer printf/scanf settings from sys/Makefile.include;
they are set in makefiles/libc/picolibc.mk.
Fix up a dependency.
The MPU on the cortex-m23 differs in some respects from the MPU on older
cortex-m devices and is not supported by the cortex-m MPU driver. This
removes the feature, as advertising it while implementing it with no-ops
gives a false sense of security.
This adds a placeholder define for when the DMA peripheral available on
the MCU doesn't support channel/trigger filtering. This is the case on
the stm32f1 and stm32f3 families.
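A sketch of what such a placeholder could look like (the macro name is
illustrative, not necessarily the one used in the tree):

    #include <stdint.h>

    /* stm32f1/f3: the DMA has no channel/trigger filtering, so provide a
     * placeholder value that shared code and periph_conf entries can use */
    #ifndef DMA_CHAN_CONFIG_UNSUPPORTED
    #define DMA_CHAN_CONFIG_UNSUPPORTED  (UINT8_MAX)
    #endif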
The stm32_eth driver was built on top of the internal API periph_eth, which
was not used anywhere else. (Additionally, with two obscure exceptions, none
of its functions were declared in headers, making them pretty hard to use
anyway.) The separation of the driver into two layers incurs overhead, but
does not result in a cleaner structure or in code reuse. Thus, this
artificial separation was dropped.
The Ethernet DMA is capable of gathering a frame from multiple chunks, just
like the chunks the send function of the netdev interface is handed. The send
function was rewritten to just set the Ethernet DMA up to collect the
outgoing frame from those chunks while sending. As a result, the send
function blocks until the frame has been sent, to keep control over the
buffers.
This frees 6 KiB of RAM previously used for TX buffers.
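A condensed sketch of the rewritten send path (the descriptor layout and
helpers are invented for illustration; iolist_t mirrors the chunk list the
netdev API hands to send):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* simplified stand-ins for the real driver structures */
    typedef struct iolist {
        struct iolist *iol_next;
        void *iol_base;
        size_t iol_len;
    } iolist_t;

    typedef struct {
        volatile uint32_t status;   /* OWN/first/last bits on real hardware */
        void *buf;
        size_t len;
    } edma_desc_t;

    extern edma_desc_t tx_desc[];
    extern void eth_dma_start_tx(edma_desc_t *first);
    extern bool eth_dma_tx_done(void);

    int stm32_eth_send(const iolist_t *iolist)
    {
        unsigned i = 0;
        /* one descriptor per chunk: the DMA gathers the frame in place,
         * no copy into a driver-owned TX buffer is needed */
        for (const iolist_t *iol = iolist; iol; iol = iol->iol_next, i++) {
            tx_desc[i].buf = iol->iol_base;
            tx_desc[i].len = iol->iol_len;
        }
        eth_dma_start_tx(&tx_desc[0]);
        /* block until the frame has left the MAC: the chunks belong to
         * the caller and may be reused as soon as we return */
        while (!eth_dma_tx_done()) { }
        return 0;
    }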
1. Move buffer configuration from boards to cpu/stm32
2. Allow overwriting the buffer configuration (see the sketch below)
- If the default configuration ever needs touching, this will be due to a
use case and should be done by the application rather than the board
3. Reduce default RX buffer size
- Now that handling of frames split up into multiple DMA descriptors works,
we can make use of this
Note: With the significantly smaller RX buffers, the driver will now perform
much worse when receiving data at maximum throughput. But as long as frames
are small (which is to be expected for IoT or border gateway scenarios), the
performance should not be affected.
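A sketch of how an application could override the defaults (the tunable
names are illustrative):

    /* in the application's Makefile:
     *   CFLAGS += -DETH_RX_BUF_SIZE=512 -DETH_RX_BUF_COUNT=4
     * with defaults in cpu/stm32 along the lines of: */
    #ifndef ETH_RX_BUF_SIZE
    #define ETH_RX_BUF_SIZE   (256U)  /* small: frames may span buffers */
    #endif
    #ifndef ETH_RX_BUF_COUNT
    #define ETH_RX_BUF_COUNT  (6U)
    #endif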
If any incoming frame is bigger than a single DMA buffer, the Ethernet DMA will
split the content and use multiple DMA buffers instead. But only the DMA
descriptor of the last Ethernet frame segment will contain the frame length.
Previously, the frame length calculation, the reassembly of the frame, and
the freeing of DMA descriptors were completely broken and only worked in
case the received frame was small enough to fit into one DMA buffer. This is
now fixed, so that smaller DMA buffers can safely be used.
Additionally, the interface was simplified: Previously, two receive flavors
were implemented, with only one of them ever being used. Neither function
was public, due to missing declarations in headers. The unused flavor was
dropped and the remaining one was streamlined to better fit the use case.
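A rough sketch of the fixed receive logic (descriptor layout, bit positions
and helper names are illustrative):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    typedef struct edma_rx_desc {
        volatile uint32_t status;   /* OWN / last-segment / frame length */
        void *buf;
        struct edma_rx_desc *next;
    } edma_rx_desc_t;

    #define DESC_OWNED_BY_DMA  (1UL << 31)
    #define DESC_LAST_SEGMENT  (1UL << 8)
    #define DESC_FRAME_LEN(s)  (((s) >> 16) & 0x3fffUL)

    extern size_t rx_buf_size;

    /* walk the chain until the last segment of the frame: only that
     * descriptor carries the total frame length */
    size_t eth_rx_frame(edma_rx_desc_t *first, uint8_t *dst, size_t max)
    {
        size_t len = 0, copied = 0;
        for (edma_rx_desc_t *d = first; ; d = d->next) {
            if (d->status & DESC_LAST_SEGMENT) {
                len = DESC_FRAME_LEN(d->status);
            }
            size_t chunk = len ? (len - copied) : rx_buf_size;
            if (chunk > rx_buf_size) {
                chunk = rx_buf_size;    /* middle segment of a longer frame */
            }
            if (copied + chunk <= max) {
                memcpy(dst + copied, d->buf, chunk);
            }
            copied += chunk;
            d->status = DESC_OWNED_BY_DMA;  /* free the descriptor again */
            if (len) {
                return len;
            }
        }
    }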
Either nRF52810 should define SPIM_COUNT 2 or nRF52805 should
define SPIM_COUNT 1.
But as it stands, nRF52805 defines SPIM_COUNT 2 and nRF52810 defines
SPIM_COUNT 1, even though both have a single SPI and a single, separate TWI
peripheral.
Re-define SPIM_COUNT to 2 on nRF52810 as this is the easiest solution.
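A sketch of the workaround (guarded by the usual Nordic device macro; treat
the exact form as illustrative):

    /* nRF52810: lift SPIM_COUNT to 2 so it matches nRF52805, which has
     * the same peripheral set (one SPI plus one separate TWI) */
    #ifdef NRF52810_XXAA
    #undef  SPIM_COUNT
    #define SPIM_COUNT  (2)
    #endif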
The interval load value was always set to 0xffff regardless of the counter
mode used, which made the 32-bit timer apparently stop after 0xffff (it
would never reach values >0xffff).
When a GPTM is configured to one of the 32-bit modes, TAILR appears as a
32-bit register (the upper 16-bits correspond to the contents of the
GPTM Timer B Interval Load (TBILR) register). In a 16-bit mode, the
upper 16 bits of this register read as 0s and have no effect on the
state of TBILR.
This commit sets the correct value for TAILR depending on the configured
timer mode.
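A minimal sketch of the fix (register layout and mode encoding are
illustrative of the GPTM):

    #include <stdint.h>

    #define GPTM_CFG_32BIT  (0x0UL)     /* illustrative mode encoding */

    typedef struct {
        volatile uint32_t CFG;
        volatile uint32_t TAILR;
    } gptm_t;

    /* load the full interval so a 32-bit timer counts beyond 0xffff */
    void gptm_set_interval(gptm_t *dev)
    {
        if ((dev->CFG & 0x7UL) == GPTM_CFG_32BIT) {
            dev->TAILR = 0xffffffffUL;  /* upper half maps onto TBILR */
        }
        else {
            dev->TAILR = 0x0000ffffUL;  /* 16-bit mode: upper bits are 0 */
        }
    }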
It seems that the interrupt flag for a capture/compare channel gets set when
- the CC value is reached, or
- the timer resets before the CC value is reached.
We only want the first event and need to ignore the second one.
Unfortunately, I did not find a way to disable the second event type, so it
is filtered in software. That is, we need to
- ignore CC interrupts when the COUNT register is reset, and
- ignore CC interrupts for CC values above TOP/ARR (the auto-reload register).
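A sketch of the software filter in the ISR (register and helper names are
illustrative):

    #include <stdint.h>

    typedef struct {
        volatile uint32_t COUNT;    /* current counter value */
        volatile uint32_t ARR;      /* auto-reload (TOP) value */
        volatile uint32_t CC[4];    /* capture/compare values */
    } tim_regs_t;

    extern tim_regs_t *dev;
    extern void clear_cc_flag(unsigned chan);
    extern void handle_compare(unsigned chan);

    void isr_cc(unsigned chan)
    {
        uint32_t cc = dev->CC[chan];
        clear_cc_flag(chan);
        /* spurious: the flag was raised by the counter reset (COUNT is
         * again below CC), or the channel can never match because its CC
         * value lies above TOP */
        if ((dev->COUNT < cc) || (cc > dev->ARR)) {
            return;
        }
        handle_compare(chan);
    }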