The stm32_eth driver was built on top of the internal API periph_eth, which
was not used anywhere else. (Additionally, with two obscure exceptions, no
functions were declared in headers, making them pretty hard to use anyway.)
The separation of the driver into two layers incurs overhead, but does not
result in cleaner structure or reuse of code. Thus, this artificial separation
was dropped.
The Ethernet DMA is capable of collecting a frame from multiple chunks, just
like the chunks the send function of the netdev interface is given. The send
function was rewritten to simply set up the Ethernet DMA to collect the
outgoing frame from these chunks while sending. As a result, the send function
blocks until the frame has been sent, in order to keep control over the buffers.
This frees 6 KiB of RAM previously used for TX buffers.
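A minimal sketch of the idea, using placeholder names for the descriptor
fields, flags, and the helper that kicks off the DMA (the actual stm32_eth
code differs in detail):

```c
#define TX_DESC_NUMOF   (4U)    /* must cover the longest expected iolist chain */

static edma_desc_t tx_desc[TX_DESC_NUMOF];

static int eth_send(netdev_t *netdev, const iolist_t *iolist)
{
    (void)netdev;
    unsigned i = 0;

    /* Let each descriptor reference one chunk of the outgoing frame, so no
     * copy into a dedicated TX buffer is needed. */
    for (const iolist_t *iol = iolist; iol; iol = iol->iol_next, i++) {
        tx_desc[i].buffer_addr = iol->iol_base;
        tx_desc[i].control     = iol->iol_len
                               | ((iol == iolist) ? TX_DESC_FIRST_SEGMENT : 0)
                               | ((iol->iol_next) ? 0 : TX_DESC_LAST_SEGMENT);
        tx_desc[i].status      = TX_DESC_OWNED_BY_DMA;
    }

    eth_dma_start_tx(tx_desc);  /* hypothetical helper starting the transfer */

    /* Block until the DMA has released the last descriptor: only then may
     * the caller reuse the buffers referenced above. */
    while (tx_desc[i - 1].status & TX_DESC_OWNED_BY_DMA) {
        /* busy-wait, or sleep until the TX-complete IRQ wakes us up */
    }
    return 0;
}
```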
1. Move buffer configuration from boards to cpu/stm32
2. Allow overwriting the buffer configuration (see the sketch below)
- If the default configuration ever needs touching, this will be due to a
use case and should be done by the application rather than the board
3. Reduce default RX buffer size
- Now that handling of frames split up into multiple DMA descriptors works,
we can make use of this
Note: With the significantly smaller RX buffers, the driver will now perform
much worse when receiving data at maximum throughput. But as long as frames
are small (which is to be expected for IoT or border gateway scenarios), the
performance should not be affected.
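How an application could override the defaults, sketched with illustrative
macro names (the real names live in the stm32 periph headers and may differ):

```c
/* Default configuration in the cpu headers, overridable by the application
 * (e.g. via CFLAGS in its Makefile) instead of being hard-coded per board. */
#ifndef ETH_RX_DESCRIPTOR_COUNT
#define ETH_RX_DESCRIPTOR_COUNT (6U)    /* number of RX DMA descriptors */
#endif
#ifndef ETH_RX_BUFFER_SIZE
#define ETH_RX_BUFFER_SIZE      (256U)  /* bytes per RX DMA buffer */
#endif
```

An application that really needs full-throughput reception could then pass
something like `CFLAGS += -DETH_RX_BUFFER_SIZE=1536` without touching any
board files.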
If any incoming frame is bigger than a single DMA buffer, the Ethernet DMA will
split the content and use multiple DMA buffers instead. But only the DMA
descriptor of the last Ethernet frame segment will contain the frame length.
Previously, the frame length calculation, reassembly of the frame, and the
freeing of DMA descriptors were completely broken and only worked in case the
received frame was small enough to fit into one DMA buffer. This is now fixed,
so that smaller DMA buffers can safely be used.
Additionally, the interface was simplified: Previously, two receive flavors
were implemented, with only one of them ever being used. Neither function was
public, due to missing declarations in headers. The unused flavor was dropped
and the remaining one was streamlined to better fit the use case.
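A sketch of the reassembly logic, with placeholder names for the descriptor
fields, flags, and helpers (the usual headers such as `string.h` and `errno.h`
are assumed; this is not the literal driver code):

```c
static edma_desc_t *rx_curr;    /* first descriptor of the next frame */

static int eth_recv(void *buf, size_t max_len)
{
    size_t copied = 0;
    edma_desc_t *desc = rx_curr;
    int last;

    do {
        last = (desc->status & RX_DESC_STAT_LAST_SEGMENT);
        /* Only the last descriptor carries the total frame length; every
         * segment before it fills a whole DMA buffer. */
        size_t len = last ? (rx_frame_length(desc) - copied)
                          : ETH_RX_BUFFER_SIZE;

        if (copied + len <= max_len) {
            memcpy((uint8_t *)buf + copied, desc->buffer_addr, len);
        }
        copied += len;

        desc->status = RX_DESC_STAT_OWNED_BY_DMA;   /* hand it back to the DMA */
        desc = desc->desc_next;
    } while (!last);

    rx_curr = desc;     /* the next frame starts at the following descriptor */
    return (copied <= max_len) ? (int)copied : -ENOBUFS;
}
```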
Either nRF52810 should define SPIM_COUNT as 2, or nRF52805 should
define SPIM_COUNT as 1.
But as it is, nRF52805 defines SPIM_COUNT as 2 and nRF52810 defines SPIM_COUNT
as 1, even though both have a single SPI and a single, separate TWI peripheral.
Re-define SPIM_COUNT to 2 on nRF52810, as this is the easiest solution.
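The workaround boils down to something like the following (the exact guard
macro is an assumption):

```c
/* The vendor header for the nRF52810 defines SPIM_COUNT as 1, although the
 * SPIM and TWIM instances are separate peripherals. Redefine it to 2 so the
 * common nRF5x code can index them the same way as on the nRF52805. */
#ifdef CPU_MODEL_NRF52810XXAA   /* assumed guard, adjust to the real one */
#undef  SPIM_COUNT
#define SPIM_COUNT  (2)
#endif
```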
The interval load value was always set to 0xffff regardless of the counter
mode used, which made the 32-bit timer apparently stop after 0xffff (it
would never reach values > 0xffff).
When a GPTM is configured to one of the 32-bit modes, TAILR appears as a
32-bit register (the upper 16-bits correspond to the contents of the
GPTM Timer B Interval Load (TBILR) register). In a 16-bit mode, the
upper 16 bits of this register read as 0s and have no effect on the
state of TBILR.
This commit sets the correct value for TAILR depending on the configured
timer mode.
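A sketch of the fix, assuming a CMSIS-style register struct and that a CFG
value of 0 selects the 32-bit mode (constant and field names are placeholders):

```c
/* Load the full 32-bit interval in 32-bit mode; in the split 16-bit mode
 * only the lower half of TAILR is effective for Timer A. */
if (dev->CFG == GPTM_CFG_32_BIT_TIMER) {    /* assumed constant name */
    dev->TAILR = 0xffffffff;                /* upper half maps to TBILR */
}
else {
    dev->TAILR = 0x0000ffff;                /* 16-bit mode */
}
```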
Seems like the interrupt flag for a capture/compare channel gets set when
- the CC value is reached
- the timer resets before the CC value is reached.
We only want the first event and need to ignore the second one. Unfortunately,
I did not find a way to disable the second event type, so it is filtered in
software. That is, we need to
- ignore CC interrupts when the COUNT register was just reset
- ignore CC interrupts for CC values > the TOP value/ARR (auto-reload register)
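A sketch of such a software filter in the channel ISR, with assumed register
and callback names:

```c
static void isr_cc_channel(tim_t tim, unsigned chan)
{
    uint32_t cnt = dev(tim)->COUNT;     /* current counter value */
    uint32_t cc  = dev(tim)->CC[chan];  /* programmed compare value */
    uint32_t top = dev(tim)->ARR;       /* auto-reload (TOP) value */

    /* Drop interrupts caused by the counter reset rather than by a real
     * compare match: either the CC value can never be reached at all, or
     * the counter has wrapped and is still below the CC value. */
    if ((cc > top) || (cnt < cc)) {
        return;
    }
    isr_ctx[tim].cb(isr_ctx[tim].arg, chan);    /* forward the real event */
}
```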
> A bit-band region maps each word in a bit-band alias region to a single bit in the bit-band region.
> The bit-band regions occupy the lowest 1 MB of the SRAM and peripheral memory regions.
https://www.mouser.com/datasheet/2/405/lm4f120h5qr-124014.pdf
> Bit-banding is supported in order to reduce the execution time for
> read-modify-write (RMW) operations to memory.
> With bit-banding, certain regions in the memory map
> (SRAM and peripheral space) can use address aliases to access
> individual bits in one atomic operation.
https://www.ti.com/lit/ug/swcu117i/swcu117i.pdf
> Bit-banding is supported in order to reduce the execution time for
> read-modify-write (RMW) operations to memory.
> With bit-banding, certain regions in the memory map
> (SRAM and peripheral space) can use address aliases to access
> individual bits in one atomic operation.
https://www.ti.com/lit/ug/swcu185d/swcu185d.pdf
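For reference, a generic Cortex-M bit-band access helper following the mapping
described in those excerpts (standard alias regions at region base + 0x02000000;
the register name in the usage comment is illustrative):

```c
#include <stdint.h>

/* Map a bit inside the lowest 1 MB of the SRAM (0x20000000) or peripheral
 * (0x40000000) region to its word-sized alias address, so a single store
 * sets or clears that bit atomically, without a read-modify-write. */
#define BITBAND_ALIAS(addr, bit)                                \
    (*(volatile uint32_t *)(((uintptr_t)(addr) & 0xf0000000u)   \
        + 0x02000000u                                           \
        + (((uintptr_t)(addr) & 0x000fffffu) << 5)              \
        + ((unsigned)(bit) << 2)))

/* usage: BITBAND_ALIAS(&GPIOA->DATA, 3) = 1;  -- set bit 3 atomically */
```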
If we disable an external interrupt, GPIO events that would generate an interrupt will still set the interrupt flag.
That means that once we enable the interrupt again, a stale interrupt will be triggered.
This is surprising and probably not what the user wants; unfortunately, the API documentation is not very clear about what to expect.
There is, however, no way to drop those intermediate interrupts with the current API.
Ignoring the events that occurred while the GPIO interrupt was disabled is probably the right (and expected) thing to do.
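A sketch of the resulting behavior, clearing the stale flag right before
unmasking (register and helper names are placeholders for the real ones):

```c
void gpio_irq_enable(gpio_t pin)
{
    uint32_t mask = (1u << gpio_pin_num(pin));  /* assumed helper */

    /* Discard events latched while the interrupt was disabled, then unmask. */
    gpio_port(pin)->ICR = mask;     /* write-1-to-clear pending flag */
    gpio_port(pin)->IM |= mask;     /* enable the interrupt again */
}
```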
Both tests/pthread_tls and tests/prng_sha256prng fail without this, but
other platforms run fine with their defaults. Let's consider the higher
value a better default.
Previously, setting the alarm would overwrite the overflow callback
and vice versa.
Since we can only set one alarm in hardware, always set the alarm to the
closest event of the two.
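A sketch of how the single compare channel can serve both callbacks (helper
and macro names are assumed, not the literal driver code):

```c
static void _update_compare(void)
{
    uint32_t now = rtt_get_counter();

    if (!alarm_cb) {                    /* no user alarm set: wake on overflow */
        _set_hw_compare(0);
        return;
    }

    /* distance (in ticks, modulo the counter width) to both events */
    uint32_t to_alarm    = (alarm_val - now) & RTT_MAX_VALUE;
    uint32_t to_overflow = (0u - now) & RTT_MAX_VALUE;

    /* program the hardware compare for whichever event comes first; the ISR
     * re-arms it for the other event after firing */
    _set_hw_compare((to_alarm <= to_overflow) ? alarm_val : 0);
}
```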
Move common code into helper functions and extract the commands
that differ between normal and RWWEE page reading / writing.
This cuts down on `#ifdef` use.
The RTC and RTT share the same peripheral, so they can also
share the same code.
This is needed to integrate the Tamper Detection into common
RTC/RTT code.
Since former ESP32 toolchain versions used POSIX threads, module `pthread` was required. The built-in `cxa_ctor_guards` had to be replaced, since they used the `pthread_once` function for singleton object initialization, whose `once` parameter was of a type incompatible with the one provided by RIOT's `pthread` module. The current ESP32 toolchain version no longer uses POSIX threads, so the dependency on module `pthread` as well as the corresponding C++ hacks can be removed.
We don't need to read-modify-write the CTRLA register to disable
the UART.
The entire CTRLA register is re-written just a few lines below, so
we can just set it to 0 to disable the UART.
There is also no need to reset the UART since we re-write all config
registers in init.
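A sketch of the simplified sequence on a SAM0 SERCOM UART (exact sync handling
and the CTRLA flags depend on the family and pin routing; `rx_pad`/`tx_pad`
are placeholders):

```c
/* Disabling: no read-modify-write and no software reset needed, since all
 * configuration registers are fully rewritten below anyway. */
dev(uart)->CTRLA.reg = 0;
while (dev(uart)->SYNCBUSY.reg) {}          /* wait for the write to sync */

/* Re-write the whole CTRLA register with the desired configuration. */
dev(uart)->CTRLA.reg = SERCOM_USART_CTRLA_DORD          /* LSB first */
                     | SERCOM_USART_CTRLA_MODE(1)       /* internal clock */
                     | SERCOM_USART_CTRLA_RXPO(rx_pad)
                     | SERCOM_USART_CTRLA_TXPO(tx_pad);
```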
Previously, a stack overflow occurred if a timer triggered while the idle
thread was running. This commit increases the idle thread's stack size if
xtimer is used.
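A minimal sketch of the idea using RIOT's standard macro names (the concrete
sizes are platform dependent and only illustrative):

```c
#ifndef THREAD_STACKSIZE_IDLE
#  ifdef MODULE_XTIMER
/* on this platform, timer interrupts run on the stack of the interrupted
 * thread, which may be the idle thread's, so give it some extra room */
#    define THREAD_STACKSIZE_IDLE   (THREAD_STACKSIZE_DEFAULT)
#  else
#    define THREAD_STACKSIZE_IDLE   (THREAD_STACKSIZE_MINIMUM)
#  endif
#endif
```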
- Added missing wait for TX flush
- Grouped access to the same registers of the Ethernet PHY to reduce accesses.
(The compiler won't optimize accesses to `volatile`, as defined in the C
standard.)
- Added missing `volatile` to the DMA descriptor, as the memory is also
  accessed by the DMA without the compiler's knowledge
- Dropped `__attribute__((packed))` from the DMA descriptor
  - The DMA descriptor fields need to be aligned on word boundaries to
    function properly
  - The compiler can now access the fields more efficiently (saves ~300 B of ROM)
- Moved the DMA descriptor struct and the flags to `periph_cpu.h`
  - This allows Doxygen documentation to be built for them
  - Those types and fields are needed for a future PTP implementation
- Renamed the DMA descriptor flags
  - They now reflect which field in the DMA descriptor they refer to, so
    that confusion is avoided
- Added documentation to the DMA descriptor and the corresponding flags
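For illustration, the descriptor now roughly looks like this (field names are
indicative; see `periph_cpu.h` for the real definition and documentation):

```c
/* Naturally word-aligned, not packed; the field written by the DMA is
 * volatile-qualified, since the hardware may change it at any time. */
typedef struct eth_dma_desc {
    volatile uint32_t status;           /* ownership and status flags */
    uint32_t control;                   /* buffer sizes / control flags */
    char *buffer_addr;                  /* pointer to the data buffer */
    struct eth_dma_desc *desc_next;     /* next descriptor in the ring */
} edma_desc_t;
```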
The Atmel I2C peripheral supports arbitrary I2C frequencies.
Since the `i2c_speed_t` enum just encodes the raw frequency values,
we can just use them in the peripheral definition.
We just have to remove the switch-case block that would generate an error
for values outside of `i2c_speed_t`.
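A sketch of the resulting baud setup (SERCOM-style formula with rise-time
compensation omitted; `bus()` is an assumed helper returning the SERCOM
instance):

```c
/* i2c_config[dev].speed holds a plain frequency in Hz, so any value -- not
 * only the enum members -- can be used directly in the baud calculation. */
uint32_t f_scl = i2c_config[dev].speed;
uint32_t baud  = (CLOCK_CORECLOCK / (2 * f_scl)) - 1;
bus(dev)->BAUD.reg = SERCOM_I2CM_BAUD_BAUD(baud);
```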