
Linux AXI Ethernet driver

...
NAPI support
Known Issues and Limitations
---> The current driver assumes that the Ethernet IP is connected to the DMA at the hardware level.
---> The current driver contains the AXI DMA-related code.
Probe Failure:
---> If the AXI DMA driver is selected in the kernel menuconfig, the Ethernet driver probe will fail with the below or a similar error
ERROR:
xilinx_axienet 40c00000.ethernet: can't request region for resource [mem 0x41e00000-0x41e0ffff]
xilinx_axienet: probe of 40c00000.ethernet failed with error -16

Testing Tools
Diagnostic and Protocol Tests
...
./iperf -c <Server IP>
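A complete hedged session might look like the following (the server address 192.168.1.10 is an example; <Server IP> above remains a placeholder for your setup):
# On the device under test, start the iperf server:
./iperf -s
# On the host PC, stream to the device for 60 seconds, reporting every 5 seconds:
./iperf -c 192.168.1.10 -i 5 -t 60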
Work to be done
---> Factor out the AXI DMA code and use the AXI DMA Linux driver.
---> Add AXI FIFO support.

Related Links
http://www.wiki.xilinx.com/Linux+Drivers


Linux Drivers

...
xps_ll_temac
Emacps Driver
Axi Ethernet Driver
EmacLite Driver

Linux AXI Ethernet driver


Introduction
...
overview of the Axi Ethernet Linux driver which
Kernel Configuration Options
...
build the Axi Ethernet driver
CONFIG_ETHERNET
CONFIG_NET_VENDOR_XILINX
...
};
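As a quick sanity check, the relevant options can be confirmed in the kernel configuration. This is a hedged sketch: CONFIG_XILINX_AXI_EMAC is the Kconfig symbol for this driver in current mainline kernels, but the exact symbol and menu path may vary between kernel versions.
# Confirm the required options are enabled in the kernel .config:
grep -E 'CONFIG_(ETHERNET|NET_VENDOR_XILINX|XILINX_AXI_EMAC)=' .config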
Features supported
Support for ethtool queries
NAPI support.

Linux Drivers

...
Yes
Yes
drivers/irqchip/irq-gic.c
arch/microblaze/kernel/intc.c
L2 Cache Controller (PL310)


Zynq-7000 AP SoC Performance – Gigabit Ethernet achieving the best performance

...
This techtip describes the challenges in achieving the best Ethernet performance and the design practices that achieve it on the Zynq-7000 AP SoC. It briefly explains the various solutions available for achieving better performance with the Zynq-7000 AP SoC, with steps to re-create, compile, and run the designs wherever possible. It also covers the various ways to implement the TCP/IP protocols and discusses the advantages of each implementation, such as a TCP/IP offload engine, software stack implementations like lwIP, and the Linux Ethernet subsystem.
The Zynq-7000 AP SoC has two built-in gigabit Ethernet controllers that support 10/100/1000 Mb/s EMAC configurations compatible with the IEEE 802.3-2008 standard. The Programmable Logic (PL) subsystem of the Zynq-7000 AP SoC can also be configured with additional soft AXI EMAC controllers if the end application requires more than two gigabit Ethernet controllers. The following is an example block diagram of the Zynq-7000 AP SoC with GEMACs using the ZC706 development board.
{EthBlockDiagram1.png} Figure 1: Gigabit Ethernet Design block diagram using Zynq-7000 AP SoC
The above example shows all the possible gigabit Ethernet MAC configurations using the ZC706 board.
PS-GEM0 is connected to the Marvell PHY through the reduced gigabit media independent interface (RGMII), which is the default setup for the ZC706 board.
...
The data received by the controller is written to pre-allocated buffer descriptors in system memory. These buffer descriptor entries are listed in the receive buffer queue. The receive-buffer queue pointer register of the Ethernet DMA points to this data structure at initialization and uses it to continuously and sequentially copy Ethernet packets received into the Ethernet FIFO to the memory addresses specified in the receive buffer queue.
The RX and TX ring buffers can be located in DDR or OCM; the access latencies of these memories, and the speed at which the packet-processing instructions execute, also influence the overall performance.
{EthernetDataFlow.png} Figure 2: Ethernet Data movement in Zynq-7000 AP SoC
When an Ethernet packet is received by the MAC, the Ethernet DMA uses the address in the RX buffer descriptor to push the packet, which has been buffered in the packet buffer on the Ethernet interface, to DDR3 memory via the central interconnect.
Data Receive Path: ETH0 -> ETH0 DMA (32-bit) -> Central Interconnect -> DDR3 Memory Controller (64-bit AXI).


Zynq-7000 AP SoC Performance – Gigabit Ethernet achieving the best performance

...
Linux Networking SW TCP/IP stack implementation
The TCP/IP or UDP/IP protocol implementation also plays a major role in overall Ethernet performance. The following is the Linux SW stack implementation:
{LinuxTCPIPSWImplemenation.png} Figure 3: Linux TCP/IP SW stack implementation
Though the Linux kernel is based on a monolithic architecture and works through the syscall interface, which involves mode switches between user and kernel space, it tries to optimize the system wherever possible. The following are a few techniques used to achieve better performance.
Memory allocation is a key factor in the performance of any TCP/IP stack. Most other TCP/IP implementations have a memory-management mechanism that is independent of the OS. The Linux implementation, however, took a different approach by using the slab cache method, which is used for other internal kernel allocations and has been adapted for socket buffers. With slab allocation, memory chunks suitable to fit data objects of a certain type or size are pre-allocated.
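As a hedged illustration, the pre-allocated socket-buffer slab caches can be observed on a running Linux system:
# The skbuff caches are the slab pools used for socket buffers:
grep skbuff /proc/slabinfo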
...
Jumbo frames are used in data-intensive applications; here the packet size is 16384 bytes. The larger frame size improves performance by reducing the number of fragments for a given data size.
XAPP1082 provides details on how to use the jumbo frame support available in the AXI EMAC for improved performance.
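As a hedged example (the interface name and MTU value are illustrative; the usable maximum depends on the MAC configuration):
# Raise the interface MTU so that jumbo frames can be used:
ifconfig eth0 mtu 8000 up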
...
provided with XAPP1082 using the
...
development kit:
{EthBlockDiagram1.png} Figure 4: Block diagram of the design implemented as part of XAPP1082

The PS GEM1 and the PL AXI Ethernet share the 1000Base-X PHY, so only one of PS GEM1 or PL AXI Ethernet can be used at a given point of time.
The complete design details and design files can be obtained from XAPP1082.
...
The complete design details and design files can be obtained from XAPP1026 (http://www.xilinx.com/support/documentation/application_notes/xapp1026.pdf).
As explained in the above application note, the lwIP TCP/IP stack is available to designers as a library in the SDK. To achieve better performance, designers can choose the following options of the lwIP library in the SDK settings. lwIP TCP/IP performance settings for better performance:
{lwIPPerfSettings1.png} Figure 5: lwIP TCP/IP performance settings


Zynq-7000 AP SoC Performance – Gigabit Ethernet achieving the best performance

...
CPU affinity for the interrupt handlers/tasks ensures minimal cache operations, as the complete application/task is attached to a single core. For example, to bind IRQ 19 to CPU0:
[root@linux /]# echo 01 > /proc/irq/19/smp_affinity
To share the load between the two Cortex-A9 cores, the taskset utility can be used while launching the Ethernet-based applications.
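For example (a hedged sketch; the CPU mask and application are illustrative):
# Pin an iperf server to the second core (mask 0x2) so it does not contend with interrupt handling on CPU0:
taskset 0x2 ./iperf -s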
...
Window size is also a configurable option for better performance. In a connection between a client and a server, the client tells the server the number of bytes it is willing to receive at one time from the server; this is the client's receive window, which becomes the server's send window. Likewise, the server tells the client how many bytes of data it is willing to take from the client at one time; this is the server's receive window and the client's send window. The window size can drop to zero dynamically if the receiver cannot process the data as fast as the sender sends it. The larger the size, the better the chances of achieving better performance. In a Linux environment the window size settings can be tuned by following the steps explained in the following links:
http://www.cyberciti.biz/faq/linux-tcp-tuning/
linux-kernel/Documentation/networking/ip-sysctl.txt: http://www.cyberciti.biz/files/linux-kernel/Documentation/networking/ip-sysctl.txt
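As a hedged example of such tuning (the values are illustrative; ip-sysctl.txt documents the min/default/max fields):
# Enlarge the TCP receive and send buffer limits:
sysctl -w net.ipv4.tcp_rmem='4096 87380 4194304'
sysctl -w net.ipv4.tcp_wmem='4096 65536 4194304'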
While benchmarking with iperf, the -w option can be used to specify the window size.
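For example (the window size and address are illustrative):
iperf -c 192.168.1.10 -w 256k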
Bare metal lwIP TCP/IP stack
The lwIP TCP/IP stack can be used in RAW API (standalone) mode, or with the Netconn and socket APIs using RTOS features.
...
If the xilkernel RTOS is used in the design, then the following options are best suited to achieve better performance:
{lwIPPerfSettings2.png} Figure 6: lwIP stack settings for socket mode/when used along with an RTOS

Zynq-7000 AP SoC Benchmarking & debugging - Ethernet TechTip

...
Release 1.0
Overview
!!!!!This Page is under construction!!!!!
This TechTip explains the Ethernet debugging and benchmarking methods using the Zynq-7000 AP SoC.
The Zynq-7000 AP SoC has two built-in gigabit Ethernet controllers that support 10/100/1000 Mb/s EMAC configurations compatible with the IEEE 802.3-2008 standard. The Programmable Logic (PL) subsystem of the Zynq-7000 AP SoC can also be instantiated with additional soft AXI EMAC controllers if the end application requires more than two gigabit Ethernet controllers.
...
NetPerf benchmarking utility for Linux-based solutions
NetPerf is a network benchmarking tool that can be used to perform Ethernet benchmarking. The following are the two major types of these tests:
• TCP and UDP
...
• Sockets interface
NetPerf is split into two pieces: a client application and a server application. It streams data between the two applications across the network and communicates via an independent control connection. The following options are used along with the test.
Options:
...
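A typical hedged invocation looks like this (the address and duration are examples):
# On the target, start the netperf server:
netserver
# On the host, run a 60-second TCP stream test against the target:
netperf -H 192.168.1.10 -l 60 -t TCP_STREAM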
To measure receive throughput, connect to the receiving iperf application using the iperf client by issuing the iperf -c command with relevant options. A sample session (with zc702_GigE as reference) is as follows:
C:\>iperf -c 192.168.1.10 -i 5 -t 50 -w 64k
Client connecting to 192.168.1.10, TCP port 5001
TCP window size: 64.0 KB

Zynq-7000 AP SoC Benchmarking & debugging - Ethernet TechTip

...
On Windows 7, select Start > All Programs > Xilinx Design Tools > Vivado 2015.1 > Vivado 2015.1
On Linux, enter vivado at the command prompt.
...
IDE Launch
2. Select “Create
3. The Create a New Vivado Project window gives a summary of further steps; click Next.
4. In the Project Name dialog box, type the project name (e.g. Zynq_PS_GEM) and location. Ensure that Create project subdirectory is checked, and then click Next.
...
6. In the Default Part dialog box select Boards and choose ZYNQ-7 ZC702 Evaluation Board or ZYNQ-7 ZC706 Evaluation Board. Make sure that you have selected the proper Board Version to match your hardware because multiple versions of hardware are supported in the Vivado IDE. Click Next.
7. Review the project summary in the New Project Summary dialog box before clicking Finish to create the project. A project summary window similar to Figure 2 opens.
...
Project Summary
8. In the
9. In the design_1 diagram view select Add IP as shown in Figure 3, and select Zynq7 Processing System in the pop-up search window that follows.
...
block design
10. Select Run
...
block automation
11. In next
12. In the next window select and connect the clock inputs as shown in Figure 5.
...
clock inputs
13. Click on
14. Once generation is successful, create the HDL wrapper as shown in Figure 6 and select the default option in the next pop-up window.
...
HDL wrapper
15. Similarly select
...
to SDK
16. SDK tool
{SDK_launch123.JPG} Figure 8: SDK launch window
17. Create a new application project for the First Stage Boot Loader (FSBL) as shown in Figure 9.
{SDK_APP_create.png} Figure 9: Launching the Application project
18. Enter the
19. In the Template wizard select Zynq FSBL and click Finish. SDK then builds the Zynq FSBL if auto-build is enabled.
20. The next step is to create the boot.bin file. A u-boot.elf file is also needed along with the FSBL. Refer to the TechTip on building U-Boot, or copy u-boot.elf from the provided design files.
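As a hedged sketch, boot.bin can be generated with the bootgen utility from a .bif file that lists the FSBL and U-Boot images (the file names are examples):
# boot.bif contains:
# image : {
#   [bootloader] fsbl.elf
#   u-boot.elf
# }
bootgen -image boot.bif -o boot.bin -w on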

Zynq-7000 AP SoC Performance – Gigabit Ethernet achieving the best performance

...
Release 1.0
Overview
!!!!!This Page is under construction!!!!!
...
For the steps to create the PS EMIO Ethernet solution and the PL Ethernet solution, and to set up Embedded Linux for Zynq, refer to the following link:
http://www.wiki.xilinx.com/Zynq+PL+Ethernet
...
Following are some of the techniques which can be applied from user space. These commands can be applied once the Linux kernel/XAPP1082 image is booted on the Zynq-7000 AP SoC.
Tuning the task priorities using the ‘nice’ system call from user space:
Run the command ps -all to get the list of tasks and their PIDs, identify the network tasks, and change their priorities using the nice syscall. The example format is shown below.
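For example (a hedged sketch; the PID and priority value are illustrative, and negative nice values require root):
# Launch an application with a raised priority:
[root@linux /]# nice -n -3 ./iperf -s
# Or re-prioritize an already-running task by PID:
[root@linux /]# renice -n -3 -p 1234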
...




Zynq-7000 AP SoC Performance – Gigabit Ethernet achieving the best performance

...
TCP/IP Offload Engines (TOEs)
A hardware implementation of the TCP/IP or UDP/IP stack in the PL should also give the best performance (line rate) if the FPGA resources are available. The following are some of the HW implementations of the TCP/IP or UDP/IP stack available for Xilinx FPGAs:
iTOE from OKI IDS Co., Ltd:
http://www.xilinx.com/products/intellectual-property/1-35md2k.html
http://www.xilinx.com/products/intellectual-property/1-41z4xb.html
1G TOE & EMAC core, full TCP offload, from Intilop:
http://www.xilinx.com/products/intellectual-property/1-3zvp6v.html
16K Session - 10G TCP & UDP full offload:
http://www.xilinx.com/products/intellectual-property/1-58sbvm.html
TOE-IP Core from Design Gateway:
http://www.xilinx.com/products/intellectual-property/1-8dyf-2165.html
IEEE material on TOEs:
http://www.missinglinkelectronics.com/files/papers/A_10_GbE_TCPIP_Hardware_Stack_as_part_of_a_Protocol_Acceleration_Platform.pdf

Conclusion
This techtip explained the gigabit Ethernet solutions using the Zynq-7000 AP SoC, the application data path, Ethernet performance, the types of TCP/IP stack implementations, the solutions readily available for the Zynq-7000 AP SoC, and the techniques that can be applied to achieve the maximum possible Ethernet data performance.

