Channel: Xilinx Wiki : Xilinx Wiki - all changes
Self hosting using Smart

Using Smart, you will be able to install the tools required to build (compile) applications directly on the target.
Installing core-SDK using Smart
The package-group we are interested in is "packagegroup-core-sdk". This package-group provides some commonly used utilities like Make, GCC and G++. In this section we will install the package-group, then compile and test a few demo applications.
GCC/G++ installation
We need to install packagegroup-core-sdk to be able to compile on the target. To install it, run the command below.
smart install packagegroup-core-sdk
GCC/G++ testing
Once the SDK is installed, you should be able to use the GCC/G++ compilers and build your programs directly on the target.
To test this, there are two sample programs attached below. Go ahead and download them onto your target. You can compile and build them as shown below.
To compile use:
{compile.png} GCC/G++ commands for compiling examples.
Run the generated executable and verify the output:
{results.png} Expected results
Attachments:
{hello_world.c} {vector-test.cpp}
Related Links
http://www.wiki.xilinx.com/Install+and+run+applications+through+Smart+on+target
Xilinx Yocto

Yocto

...
Using meta-petalinux to build RPM packages
How to add additional packages
Getting started with Yocto Xilinx layer
Build OSL images using Yocto
Adding Mali Userspace binaries
Using meta-xilinx-tools layer
...
Partition and format SD card for SD boot
Install and run applications through Smart package manager
Self hosting using smart
Booting Xen on ZCU102 using SD card
SD boot with Yocto images

QEMU - Zynq UltraScale+

...
In the first terminal, start the ARM cores
$ ./aarch64-softmmu/qemu-system-aarch64 -M arm-generic-fdt -serial mon:stdio -serial /dev/null -display none \
-device loader,addr=0xfd1a0104,data=0x8000000e,data-len=4 \
...
# Un-reset the A53; only do this if not using the line below (if you are unsure, don't use this)
-global xlnx,zynqmp-boot.cpu-num=0
...
# Set up multi-boot; if doing this you don't need to un-reset the APU
-device loader,file=./pre-built/linux/images/bl31.elf,cpu-num=0 \ # ARM Trusted Firmware
-device loader,file=./pre-built/linux/images/Image,addr=0x00080000 \ # Linux kernel
...
-hw-dtb ./pre-built/linux/images/zynqmp-qemu-multiarch-pmu.dtb \
-kernel ./images/zynqmp/petalinux-v2017.1/pmu_rom_qemu_sha3.elf \ # PMU ROM
-device loader,file=./images/zynqmp/petalinux-v2017.1/pmufw.elf \ # PMU firmware
...

-machine-path ./qemu-tmp \ # A
...
each instance
-device loader,addr=0xfd1a0074,data=0x1011003,data-len=4 -device loader,addr=0xfd1a007C,data=0x1010f03,data-len=4 # Write some important data into memory

Debugging the A53s and the R5s from GDB
To debug the A53s and the R5s from GDB you will need a multi-arch build of GDB. This can either be compiled from source or is included in some Linux distros (often packaged as gdb-multiarch). Make sure that both ARMv7 and ARMv8 support are included in GDB.
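A typical session is sketched below. This assumes QEMU's gdbstub is listening on the default port 1234 (QEMU exposes it when started with -s or -gdb tcp::1234); the exact prompts and ports on your setup may differ.

```
$ gdb-multiarch
(gdb) target remote :1234        # attach to QEMU's gdbstub
(gdb) info registers             # inspect the currently scheduled core
```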

QEMU - Zynq UltraScale+

...
$ ./aarch64-softmmu/qemu-system-aarch64 -M arm-generic-fdt -serial mon:stdio -serial /dev/null -display none \
-device loader,addr=0xfd1a0104,data=0x8000000e,data-len=4 \ # Un-reset the A53
-device loader,file=./pre-built/linux/images/bl31.elf,cpu-num=0 \ # ARM Trusted Firmware
-device loader,file=./pre-built/linux/images/u-boot.elf \ # The u-boot executable
-hw-dtb ./pre-built/linux/images/zynqmp-qemu-arm.dtb # HW Device Tree that QEMU uses to generate the model

XEN Hypervisor


Overview
Xen is an open-source type-1 hypervisor maintained by the Xen Project. Xen allows users to run multiple instances of operating systems or baremetal code on a single host. For more information on what the Xen hypervisor is, have a look at the project's overview here: Xen Project Software Overview
...
{Xen3_27Mar.JPG}
{Xen4_27Mar.JPG}
...
Using the Xen Hypervisor with 2017.3 or newer tools
General information for configuring and Building Linux Dom0
General information for configuring and Building Linux DomU
Building an EL1 baremetal DomU guest with Xilinx SDK
Using the Xen Hypervisor with 2017.1/2017.2

Building the Xen Hypervisor with PetaLinux 2016.4 or newer
General information for configuring and Building Linux Dom0

Building the Xen Hypervisor with PetaLinux 2017.1


Overview
The guide below shows you how to build Xen, boot Xen and then run some example configurations on ZU+. The steps below use PetaLinux and assume you have some knowledge of using PetaLinux.
Before starting you need to create a PetaLinux project. It is assumed that a default PetaLinux reference design is used unchanged in these instructions.
The default PetaLinux configuration has pre-built images ready to boot Xen. These can be found inside a PetaLinux project under pre-built/linux/images/, prefixed with "xen-". You can either use the pre-builts (in that case, skip to the booting Xen section for your release version) or follow the next section to edit the recipes and configure and build Xen yourself.
Configuring and building XEN from source using PetaLinux 2017.1
First let's enable Xen to be built by default.
$ petalinux-config -c rootfs
Now let's enable Xen:
Filesystem Packages ---> misc ---> packagegroup-petalinux-xen ---> [*] packagegroup-petalinux-xen
Now we need to change the rootFS to be an INITRD
$ petalinux-config
And change
Image Packaging Configuration ---> Root filesystem type (INITRAMFS) ---> (X) INITRD
NOTE: This means that any images built will NOT include the rootFS in the Image built by PetaLinux, so you will need to edit any scripts or configs that expect the rootFS to be included. This includes the Xen configs mentioned later.
You can still use the prebuilt Image file which does still include the rootFS.
We also want to edit the device tree to build in the extra Xen related configs.
Edit this file
project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
and add this line: /include/ "xen-overlay.dtsi".
It should look like this for hardware:
/include/ "system-conf.dtsi"
/include/ "xen-overlay.dtsi"
/ {
};
or like this for QEMU:
/include/ "system-conf.dtsi"
/include/ "xen-overlay.dtsi"
/ {
cpus {
cpu@1 {
device_type = "none";
};
cpu@2 {
device_type = "none";
};
cpu@3 {
device_type = "none";
};
};
};
NOTE: There is a bug on QEMU where the CPUs running in SMP sometimes cause hangs. To avoid this we only tell Xen about a single CPU.
Also edit this file:
project-spec/meta-user/recipes-bsp/device-tree/device-tree-generation_%.bbappend
and add this line to it: file://xen-overlay.dtsi.
The file should look like this:
SRC_URI_append ="\
file://system-user.dtsi \
file://xen-overlay.dtsi \
"
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
Then run petalinux-build:
$ petalinux-build
TFTP Booting Xen and Dom0 2017.1
Run Xen dom0 on QEMU:
To use the prebuilt Xen run:
$ petalinux-boot --qemu --prebuilt 2 --qemu-args "-net nic -net nic -net nic -net nic -net user,tftp=pre-built/linux/images"
To use the Xen you built yourself run:
$ petalinux-boot --qemu --u-boot
Run Xen dom0 on HW:
To use the prebuilt Xen on hardware:
$ petalinux-boot --jtag --prebuilt 2
To use the Xen you built yourself run:
$ petalinux-boot --jtag --u-boot
You should eventually see something similar to the following. When you do, press any key to stop the autoboot.
Hit any key to stop autoboot:
If u-boot wasn't able to get an IP address from the DHCP server you may need to manually set the serverip (it's typically 10.0.2.2 for QEMU):
$ setenv serverip 10.0.2.2
Now to download and boot Xen, if running on QEMU, use xen-qemu.dtb otherwise use xen.dtb. Example:
TFTPing Xen from pre-built images
$ tftpb 4000000 xen-qemu.dtb; tftpb 0x80000 xen-Image; tftpb 6000000 xen.ub; tftpb 0x1000000 xen-rootfs.cpio.gz.u-boot; bootm 6000000 0x1000000 4000000
TFTPing Xen from your own images
$ tftpb 4000000 system.dtb; tftpb 0x80000 Image; tftpb 6000000 xen.ub; tftpb 0x1000000 rootfs.cpio.gz.u-boot; bootm 6000000 0x1000000 4000000
Below is an example of what you will see.
[...]
BOOTP broadcast 1
DHCP client bound to address 10.0.2.15 (2 ms)
Hit any key to stop autoboot: 0
ZynqMP>
ZynqMP> setenv serverip 10.0.2.2
ZynqMP> tftpb 4000000 system.dtb; tftpb 0x80000 Image; tftpb 6000000 xen.ub; tftpb 0x1000000 rootfs.cpio.gz.u-boot; bootm 6000000 0x1000000 4000000
Using ethernet@ff0e0000 device
TFTP from server 10.0.2.2; our IP address is 10.0.2.15
Filename 'system.dtb'.
Load address: 0x4000000
Loading: #########
5.8 MiB/s
done
Bytes transferred = 42748 (a6fc hex)
Using ethernet@ff0e0000 device
TFTP from server 10.0.2.2; our IP address is 10.0.2.15
Filename 'Image'.
Load address: 0x80000
Loading: #################################################################
[...]
##############################
8.2 MiB/s
done
Bytes transferred = 43688238 (29aa12e hex)
## Booting kernel from Legacy Image at 06000000 ...
Image Name:
Image Type: AArch64 Linux Kernel Image (uncompressed)
Data Size: 721216 Bytes = 704.3 KiB
Load Address: 05000000
Entry Point: 05000000
Verifying Checksum ... OK
## Loading init Ramdisk from Legacy Image at 01000000 ...
Image Name: petalinux-user-image-plnx_aarch6
Image Type: AArch64 Linux RAMDisk Image (gzip compressed)
Data Size: 43688174 Bytes = 41.7 MiB
Load Address: 00000000
Entry Point: 00000000
Verifying Checksum ... OK
## Flattened Device Tree blob at 04000000
Booting using the fdt blob at 0x4000000
Loading Kernel Image ... OK
Loading Ramdisk to 7b539000, end 7dee30ee ... OK
Loading Device Tree to 0000000007ff2000, end 0000000007fff6fb ... OK
Starting kernel ...
Xen 4.8.1-pre
(XEN) Xen version 4.8.1-pre (alistai@) (aarch64-xilinx-linux-gcc (Linaro GCC 6.2-2016.11) 6.2.1 20161016) debug=n Wed Mar 22 14:11:36 MDT 2017
(XEN) Latest ChangeSet: Wed Feb 22 15:46:19 2017 +0100 git:e9e1b9b-dirty
(XEN) Processor: 410fd034: "ARM Limited", variant: 0x0, part 0xd03, rev 0x4
(XEN) 64-bit Execution:
(XEN) Processor Features: 0000000000002222 0000000000000000
(XEN) Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
(XEN) Extensions: FloatingPoint AdvancedSIMD
(XEN) Debug Features: 0000000010305006 0000000000000000
(XEN) Auxiliary Features: 0000000000000000 0000000000000000
(XEN) Memory Model Features: 0000000000001122 0000000000000000
(XEN) ISA Features: 0000000000011120 0000000000000000
(XEN) 32-bit Execution:
(XEN) Processor Features: 00001231:00011011
(XEN) Instruction Sets: AArch32 A32 Thumb Thumb-2 ThumbEE Jazelle
(XEN) Extensions: GenericTimer Security
(XEN) Debug Features: 03010066
(XEN) Auxiliary Features: 00000000
(XEN) Memory Model Features: 10101105 40000000 01260000 02102211
(XEN) ISA Features: 02101110 13112111 21232042 01112131 00011142 00011121
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 50000 KHz
(XEN) GICv2 initialization:
(XEN) gic_dist_addr=00000000f9010000
(XEN) gic_cpu_addr=00000000f9020000
(XEN) gic_hyp_addr=00000000f9040000
(XEN) gic_vcpu_addr=00000000f9060000
(XEN) gic_maintenance_irq=25
(XEN) GICv2: Adjusting CPU interface base to 0xf902f000
(XEN) GICv2: 192 lines, 4 cpus (IID 00000000).
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Allocated console ring of 16 KiB.
(XEN) Bringing up CPU1
(XEN) Bringing up CPU2
(XEN) Bringing up CPU3
(XEN) Brought up 4 CPUs
(XEN) P2M: 40-bit IPA with 40-bit PA
(XEN) P2M: 3 levels with order-1 root, VTCR 0x80023558
/amba@0/smmu0@0xFD800000: Decode error: write to 6c=0
(XEN) I/O virtualisation enabled
(XEN) - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
[...]
Starting syslogd/klogd: done
Starting /usr/sbin/xenstored...
Setting domain 0 name, domid and JSON config...
Done setting up Dom0
Starting xenconsoled...
Starting QEMU as disk backend for dom0
Starting domain watchdog daemon: xenwatchdogd startup
[done]
Starting tcf-agent: OK
PetaLinux 2017.1 plnx_aarch64 /dev/hvc0
INIT: Id "PS0" respawning too fast: disabled for 5 minutes
PetaLinux 2017.1 plnx_aarch64 /dev/hvc0
plnx_aarch64 login:
Login using 'root' as the username and password
SD Booting Xen and Dom0 2017.1
To boot Xen from an SD card you need to copy the following files to the boot partition of the SD card:
BOOT.bin
Image
the compiled device tree file renamed to system.dtb (xen.dtb or xen-qemu.dtb for QEMU from the pre-built images, system.dtb from a Petalinux build)
xen.ub
rootfs.cpio.gz.u-boot (Only if using initrd instead of initramfs for the rootfs)
When using the pre-built images from the BSP, copy these files from <project-dir>/pre-built/linux/images. The prebuilt images are built to support Linux both with and without Xen, so some of the Xen-based image file names differ from those in a normal PetaLinux build. The prebuilt Linux kernel image includes an initramfs rootfs. PetaLinux builds (rather than prebuilt images) use an initrd rootfs, so an additional file for the rootfs must also be loaded as described below.
RootFS in Kernel (initramfs)
This method allows the use of a Linux kernel with an initramfs (such as the prebuilt image). Boot the SD card on hardware or QEMU and stop the u-boot autoboot. At the u-boot prompt run:
mmc dev $sdbootdev && mmcinfo; load mmc $sdbootdev:$partid 4000000 system.dtb && load mmc $sdbootdev:$partid 0x80000 Image; fdt addr 4000000; load mmc $sdbootdev:$partid 6000000 xen.ub; bootm 6000000 - 4000000
This also allows a rootfs mounted on the SD card to be used. It requires that you extract the rootFS (cpio file) to the root partition of the SD card and set up the kernel command line to expect the rootfs on the SD card.
RootFS mounted on RAM (initrd)
This method is required when using a Linux image which is initrd based and does not include a rootfs. The rootfs.cpio.gz.u-boot file will be loaded in memory from u-boot. Then boot the SD card on hardware or QEMU and stop the u-boot autoboot. At the u-boot prompt run:
mmc dev $sdbootdev && mmcinfo; load mmc $sdbootdev:$partid 4000000 system.dtb && load mmc $sdbootdev:$partid 0x80000 Image; fdt addr 4000000; load mmc $sdbootdev:$partid 6000000 xen.ub; load mmc $sdbootdev:$partid 9000000 rootfs.cpio.gz.u-boot; bootm 6000000 9000000 4000000
Starting simple additional guests (PetaLinux 2016.4 or later)
If running on QEMU, we'll need to setup a port mapping for port 22 (SSH) in our VM.
In this example, we forward the hosts port 2222 to the VM's port 22.
$ petalinux-boot --qemu --u-boot --qemu-args "-net nic -net nic -net nic -net nic -net user,tftp=images/linux,hostfwd=tcp:127.0.0.1:2222-10.0.2.15:22"
Once you hit the u-boot prompt, follow the steps in the earlier section on how to run Xen dom0.
When dom0 has finished booting, we'll need to copy a guest Image into dom0's filesystem.
We'll use the base prebuilt PetaLinux Image as our domU guest.
If running on QEMU, we use scp's -P option to connect to our hosts port 2222 where QEMU will forward the connection to the guests port 22:
To target QEMU run the following on the host:
scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -P 2222 images/linux/Image root@localhost:/boot/
If running on hardware run the following on the host:
scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no images/linux/Image root@<board-ip>:/boot/
If you would prefer to load DomU's kernel to the guest via SD card, you can follow the instructions in the "Starting Linux guests with Pass-through networking" section.
The xen-image-minimal rootFS includes some prepared configurations that you can use. These are located in '/etc/xen/'
$ cd /etc/xen
To start a simple guest run the following from the dom0 prompt
xl create -c example-simple.cfg
You'll see another instance of Linux booting up.
At any time you can leave the console of the guest and get back to dom0 by pressing ctrl+].
Once at the dom0 prompt you can list the guests from dom0:
xl list
To get back to the guests console:
xl console guest0
You can create further guests by for example running:
xl create example-simple.cfg name=\"guest1\"
xl create example-simple.cfg name=\"guest2\"
root@plnx_aarch64:/etc/xen# xl list
Name ID Mem VCPUs State Time(s)
Domain-0 0 512 1 r----- 79.8
guest0 1 256 2 ------ 93.7
guest1 2 256 2 ------ 26.6
guest2 3 256 2 ------ 1.8
To destroy a guest:
xl destroy guest0
CPU Pinning
The following will only work on QEMU with multi-core enabled or on real HW.
When running multiple guests with multiple Virtual CPUs, Xen will schedule the various vCPUs onto real physical CPUs.
The rules and considerations taken in scheduling decisions depend on the chosen scheduler and the configuration.
To avoid having multiple vCPUs share a single pCPU, it is possible to pin a vCPU onto a pCPU and to give it exclusive access.
To create a simple guest with one Virtual CPU pinned to Physical CPU #3, you can do the following:
xl create example-simple.cfg 'name="g0"' 'vcpus="1"' 'cpus="3"'
Another way to pin virtual CPUs on to Physical CPUs is to create dedicated cpu-pools.
This has the advantage of isolating the scheduling instances.
By default a single cpu-pool named Pool-0 exists. It contains all the physical cpus.
We'll now create a pool named rt using the credit scheduler.
xl cpupool-create 'name="rt"' 'sched="credit"'
xl cpupool-cpu-remove Pool-0 3
xl cpupool-cpu-add rt 3
Now we are ready to create a guest with a single vcpu pinned to physical CPU #3.
xl create /etc/xen/example-simple.cfg 'vcpus="1"' 'pool="rt"' 'cpus="3"' 'name="g0"'
Starting Linux guests with Para-Virtual networking (PetaLinux 2016.4 or later)
This time we will run QEMU slightly differently. We'll create two port mappings: one for dom0's SSH port and another for the Para-Virtual domU.
The default IP addresses assigned by QEMUs builtin DHCP server start from 10.0.2.15 and count upwards.
Dom0 will be assigned 10.0.2.15, the next guest 10.0.2.16 and so on.
So here's the command line that maps host port 2222 to dom0's port 22 and host port 2322 to the domU's port 22.
petalinux-boot --qemu --u-boot --qemu-args "-net nic -net nic -net nic -net nic -net user,tftp=./images/linux/,hostfwd=tcp:127.0.0.1:2222-10.0.2.15:22,hostfwd=tcp:127.0.0.1:2322-10.0.2.16:22"
Now, follow the instructions from section 1 on how to boot Xen dom0.
Once you are at the dom0 prompt and have copied a domU image we'll need to setup the networking.
In this example, we will configure the guests to directly join the external network by means of a bridge.
First of all, we need to de-configure the default setup.
Kill the dhcp client for eth0:
# killall -9 udhcpc
List and remove existing addresses from eth0:
# ip addr show dev eth0
In our example the address is 10.0.2.15/24:
# ip addr del 10.0.2.15/24 dev eth0
Then, create the bridge and start DHCP on it for dom0:
# brctl addbr xenbr0
# brctl addif xenbr0 eth0
# /sbin/udhcpc -i xenbr0 -b
You should see something like the following:
udhcpc (v1.24.1) started
[ 186.459495] xenbr0: port 1(eth0) entered blocking state
[ 186.461194] xenbr0: port 1(eth0) entered forwarding state
Sending discover...
Sending select for 10.0.2.15...
Lease of 10.0.2.15 obtained, lease time 86400
/etc/udhcpc.d/50default: Adding DNS 10.0.2.3
As before, we will use the pre-defined examples in '/etc/xen/':
$ cd /etc/xen
# xl create -c example-pvnet.cfg
You should see a new Linux instance boot up.
Now we'll ssh into the domU from the host running Para-Virtual networking:
ssh -p 2322 root@localhost
Starting Linux guests with Pass-through networking (PetaLinux 2017.1)
The difficulty with using pass-through networking is that the steps above use Dom0 networking to load the DomU boot image onto the guest. This won't work with pass-through networking, as Dom0 never has any networking available.
You will need another way to get the kernel and rootFS (the pre-built Image file) onto the guest. The steps below put the Image file onto an SD card image and attach it to QEMU. Similar steps can be followed for hardware, except you just copy the Image file to a formatted SD card and insert it into the board.
Create and format the file we will be using on your host:
$ dd if=/dev/zero of=qemu_sd.img bs=128M count=1
$ mkfs.vfat -F 32 qemu_sd.img
Copy the Image file onto the card.
NOTE: We are using the pre-built Image which contains a kernel and rootFS. If you use the Image you built above then no rootFS is included. You will need to copy the rootFS onto the SD card and edit the Xen config file later to specify a rootFS.
$ mcopy -i qemu_sd.img ./pre-built/linux/images/Image ::/
Now boot QEMU with this extra option appended inside the --qemu-args: -drive file=qemu_sd.img,if=sd,format=raw,index=1
The full command should look something like this for your prebuilt images:
petalinux-boot --qemu --u-boot --qemu-args "-net nic -net nic -net nic -net nic -net user,tftp=./pre-built/linux/images/,hostfwd=tcp:127.0.0.1:2222-10.0.2.15:22,hostfwd=tcp:127.0.0.1:2322-10.0.2.16:22 -drive file=qemu_sd.img,if=sd,format=raw,index=1"
The full command should look something like this for your own images:
petalinux-boot --qemu --u-boot --qemu-args "-net nic -net nic -net nic -net nic -net user,tftp=./images/linux/,hostfwd=tcp:127.0.0.1:2222-10.0.2.15:22,hostfwd=tcp:127.0.0.1:2322-10.0.2.16:22 -drive file=qemu_sd.img,if=sd,format=raw,index=1"
Then boot Dom0 following the steps above, with one difference. You will need to make sure that you tell Xen about the network passthrough. To do this you will need to edit the device tree. We are going to use u-boot to edit the device tree.
After loading the device tree to memory you will need to run this: fdt addr $fdt_addr && fdt resize 128; fdt set /amba/ethernet@ff0e0000 status "disabled" && fdt set /amba/ethernet@ff0e0000 xen,passthrough "1"
The full command for booting the prebuilt images is shown below:
$ tftpb 4000000 xen-qemu.dtb; fdt addr 4000000 && fdt resize 128; fdt set /amba/ethernet@ff0e0000 status "disabled" && fdt set /amba/ethernet@ff0e0000 xen,passthrough "1" && tftpb 0x80000 xen-Image; tftpb 6000000 xen.ub; tftpb 0x1000000 xen-rootfs.cpio.gz.u-boot; bootm 6000000 0x1000000 4000000
The full command for booting images you built is shown below:
$ tftpb 4000000 system.dtb; fdt addr 4000000 && fdt resize 128; fdt set /amba/ethernet@ff0e0000 status "disabled" && fdt set /amba/ethernet@ff0e0000 xen,passthrough "1" && tftpb 0x80000 Image; tftpb 6000000 xen.ub; tftpb 0x1000000 rootfs.cpio.gz.u-boot; bootm 6000000 0x1000000 4000000
NOTE: If running on hardware you will need to make a change to allow the DMA transactions. See here for more details: Passthrough Network Example
Once you have logged onto the system mount the SD card and copy the image.
# mount /dev/mmcblk0 /mnt/
# cp /mnt/Image /boot/
As before, we will use another of the pre-defined examples in '/etc/xen/':
$ cd /etc/xen
# xl create -c example-passnet.cfg

Building the Xen Hypervisor with PetaLinux 2016.4 and newer

mmc dev $sdbootdev && mmcinfo; load mmc $sdbootdev:$partid 4000000 system.dtb && load mmc $sdbootdev:$partid 0x80000 Image; fdt addr 4000000; mmc $sdbootdev:$partid 6000000 xen.ub; bootm 6000000 - 4000000
This would also allow a rootfs mounted on the SD card to be used. It requires that you extract the rootFS (cpio file) to the root partition on the SD card and setup the kernel to expect the rootfs on the SD card in the kernel command line.
RootFS mounted on RAM (initrd)
This method is required when using a Linux image which is initrd based and does not include a rootfs. The rootfs.cpio.gz.u-boot file will be loaded in memory from u-boot. Then boot the SD card on hardware or QEMU and stop the u-boot autoboot. At the u-boot prompt run:
mmc dev $sdbootdev && mmcinfo; load mmc $sdbootdev:$partid 4000000 system.dtb && load mmc $sdbootdev:$partid 0x80000 Image; fdt addr 4000000; load mmc $sdbootdev:$partid 6000000 xen.ub; load mmc $sdbootdev:$partid 9000000 rootfs.cpio.gz.u-boot; bootm 6000000 9000000 4000000

Starting simple additional guests (PetaLinux 2016.4 or later)
If running on QEMU, we'll need to setup a port mapping for port 22 (SSH) in our VM.

XEN Hypervisor

...
Building an EL1 baremetal DomU guest with Xilinx SDK
Using the Xen Hypervisor with 2017.1/2017.2
...
with PetaLinux 2017.1
General information for configuring and Building Linux Dom0
General information for configuring and Building Linux DomU
Building an EL1 baremetal DomU guest with Xilinx SDK
Using the Xen Hypervisor with 2016.4 tools
...
PetaLinux 2016.4 or newer
Building the Xen Hypervisor with Xilinx's Yocto Flow
Using the Xen Hypervisor with 2016.3 tools

Building the Xen Hypervisor with PetaLinux 2017.3


Overview
The guide below shows how to build Xen, boot it, and run some example configurations on ZU+. The steps below use PetaLinux and assume some familiarity with PetaLinux.
Before starting you need to create a PetaLinux project. It is assumed that a default PetaLinux reference design is used unchanged in these instructions.
The default PetaLinux configuration includes pre-built images that are ready to boot Xen. They can be found inside a PetaLinux project under pre-built/linux/images/, prefixed with "xen-". You can either use the pre-builts or manually edit recipes and build Xen yourself by following the next section. If you are using the pre-builts you can skip to the booting Xen section for your release version.
Configuring and building XEN from source using PetaLinux 2017.3
First let's enable Xen to be built by default.
$ petalinux-config -c rootfs
Now let's enable Xen:
Filesystem Packages ---> misc ---> packagegroup-petalinux-xen ---> [*] packagegroup-petalinux-xen
Now we need to change the rootFS to be an INITRD
$ petalinux-config
And change
Image Packaging Configuration ---> Root filesystem type (INITRAMFS) ---> (X) INITRD
NOTE: Images built this way will NOT include the rootFS inside the PetaLinux-built Image, so you will need to edit any scripts or configs that expect the rootFS to be included. This includes the Xen configs mentioned later.
You can still use the prebuilt Image file which does still include the rootFS.
We also want to edit the device tree to build in the extra Xen related configs.
Edit this file
project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
and add this line: /include/ "xen-overlay.dtsi".
It should look like this for hardware:
/include/ "system-conf.dtsi"
/include/ "xen-overlay.dtsi"
/ {
};
or like this for QEMU:
/include/ "system-conf.dtsi"
/include/ "xen-overlay.dtsi"
/ {
cpus {
cpu@1 {
device_type = "none";
};
cpu@2 {
device_type = "none";
};
cpu@3 {
device_type = "none";
};
};
};
NOTE: There is a bug on QEMU where the CPUs running in SMP sometimes cause hangs. To avoid this we only tell Xen about a single CPU.
Also edit this file:
project-spec/meta-user/recipes-bsp/device-tree/device-tree-generation_%.bbappend
and add this line to it: file://xen-overlay.dtsi.
The file should look like this:
SRC_URI_append ="\
file://system-user.dtsi \
file://xen-overlay.dtsi \
"
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
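Both file edits above are one-line appends, so they can be scripted. The sketch below is our own (the `add_line_once` helper is hypothetical, not part of the PetaLinux tools) and is demonstrated on a scratch file; in a real project, point it at the `project-spec/meta-user/...` paths named above.

```shell
# Hypothetical helper: append a line to a file only if it is not already there.
add_line_once() {
    line="$1"; file="$2"
    grep -qxF -- "$line" "$file" 2>/dev/null || printf '%s\n' "$line" >> "$file"
}

# Demonstrate on a scratch copy rather than a real project file.
dtsi=$(mktemp)
printf '%s\n' '/include/ "system-conf.dtsi"' > "$dtsi"
add_line_once '/include/ "xen-overlay.dtsi"' "$dtsi"
add_line_once '/include/ "xen-overlay.dtsi"' "$dtsi"   # second call is a no-op
```

The same helper can append the `file://xen-overlay.dtsi` line to the device-tree bbappend.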
Then run petalinux-build:
$ petalinux-build
TFTP Booting Xen and Dom0 2017.3
Run Xen dom0 on QEMU:
To use the prebuilt Xen run:
$ petalinux-boot --qemu --prebuilt 2 --qemu-args "-net nic -net nic -net nic -net nic -net user,tftp=pre-built/linux/images"
To use the Xen you built yourself run:
$ petalinux-boot --qemu --u-boot
Run Xen dom0 on HW:
To use the prebuilt Xen on hardware:
$ petalinux-boot --jtag --prebuilt 2
To use the Xen you built yourself run:
$ petalinux-boot --jtag --u-boot
You should eventually see something similar to this; when you do, press any key to stop the autoboot.
Hit any key to stop autoboot:
If u-boot wasn't able to get an IP address from the DHCP server you may need to manually set the serverip (it's typically 10.0.2.2 for QEMU):
$ setenv serverip 10.0.2.2
Now to download and boot Xen, if running on QEMU, use xen-qemu.dtb otherwise use xen.dtb. Example:
TFTPing Xen from pre-built images
$ tftpb 4000000 xen-qemu.dtb; tftpb 0x80000 xen-Image; tftpb 6000000 xen.ub; tftpb 0x1000000 xen-rootfs.cpio.gz.u-boot; bootm 6000000 0x1000000 4000000
TFTPing Xen from your own images
$ tftpb 4000000 system.dtb; tftpb 0x80000 Image; tftpb 6000000 xen.ub; tftpb 0x1000000 rootfs.cpio.gz.u-boot; bootm 6000000 0x1000000 4000000
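The prebuilt and self-built variants differ only in file names. As a sketch, the command string can be generated on the host (the `xen_tftp_cmd` helper is our own, not part of the tools) and pasted at the u-boot prompt:

```shell
# Hypothetical helper emitting the u-boot command line for either image set.
xen_tftp_cmd() {
    case "$1" in
        prebuilt) dtb=xen-qemu.dtb; img=xen-Image; fs=xen-rootfs.cpio.gz.u-boot ;;
        own)      dtb=system.dtb;   img=Image;     fs=rootfs.cpio.gz.u-boot ;;
    esac
    echo "tftpb 4000000 $dtb; tftpb 0x80000 $img; tftpb 6000000 xen.ub; tftpb 0x1000000 $fs; bootm 6000000 0x1000000 4000000"
}
```

`xen_tftp_cmd own` reproduces the second command above; `xen_tftp_cmd prebuilt` the first.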
Below is an example of what you will see.
[...]
BOOTP broadcast 1
DHCP client bound to address 10.0.2.15 (2 ms)
Hit any key to stop autoboot: 0
ZynqMP>
ZynqMP> setenv serverip 10.0.2.2
ZynqMP> tftpb 4000000 system.dtb; tftpb 0x80000 Image; tftpb 6000000 xen.ub; tftpb 0x1000000 rootfs.cpio.gz.u-boot; bootm 6000000 0x1000000 4000000
Using ethernet@ff0e0000 device
TFTP from server 10.0.2.2; our IP address is 10.0.2.15
Filename 'system.dtb'.
Load address: 0x4000000
Loading: #########
5.8 MiB/s
done
Bytes transferred = 42748 (a6fc hex)
Using ethernet@ff0e0000 device
TFTP from server 10.0.2.2; our IP address is 10.0.2.15
Filename 'Image'.
Load address: 0x80000
Loading: #################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
[...]
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
##############################
8.2 MiB/s
done
Bytes transferred = 43688238 (29aa12e hex)
## Booting kernel from Legacy Image at 06000000 ...
Image Name:
Image Type: AArch64 Linux Kernel Image (uncompressed)
Data Size: 721216 Bytes = 704.3 KiB
Load Address: 05000000
Entry Point: 05000000
Verifying Checksum ... OK
## Loading init Ramdisk from Legacy Image at 01000000 ...
Image Name: petalinux-user-image-plnx_aarch6
Image Type: AArch64 Linux RAMDisk Image (gzip compressed)
Data Size: 43688174 Bytes = 41.7 MiB
Load Address: 00000000
Entry Point: 00000000
Verifying Checksum ... OK
## Flattened Device Tree blob at 04000000
Booting using the fdt blob at 0x4000000
Loading Kernel Image ... OK
Loading Ramdisk to 7b539000, end 7dee30ee ... OK
Loading Device Tree to 0000000007ff2000, end 0000000007fff6fb ... OK
Starting kernel ...
Xen 4.8.1-pre
(XEN) Xen version 4.8.1-pre (alistai@) (aarch64-xilinx-linux-gcc (Linaro GCC 6.2-2016.11) 6.2.1 20161016) debug=n Wed Mar 22 14:11:36 MDT 2017
(XEN) Latest ChangeSet: Wed Feb 22 15:46:19 2017 +0100 git:e9e1b9b-dirty
(XEN) Processor: 410fd034: "ARM Limited", variant: 0x0, part 0xd03, rev 0x4
(XEN) 64-bit Execution:
(XEN) Processor Features: 0000000000002222 0000000000000000
(XEN) Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
(XEN) Extensions: FloatingPoint AdvancedSIMD
(XEN) Debug Features: 0000000010305006 0000000000000000
(XEN) Auxiliary Features: 0000000000000000 0000000000000000
(XEN) Memory Model Features: 0000000000001122 0000000000000000
(XEN) ISA Features: 0000000000011120 0000000000000000
(XEN) 32-bit Execution:
(XEN) Processor Features: 00001231:00011011
(XEN) Instruction Sets: AArch32 A32 Thumb Thumb-2 ThumbEE Jazelle
(XEN) Extensions: GenericTimer Security
(XEN) Debug Features: 03010066
(XEN) Auxiliary Features: 00000000
(XEN) Memory Model Features: 10101105 40000000 01260000 02102211
(XEN) ISA Features: 02101110 13112111 21232042 01112131 00011142 00011121
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 50000 KHz
(XEN) GICv2 initialization:
(XEN) gic_dist_addr=00000000f9010000
(XEN) gic_cpu_addr=00000000f9020000
(XEN) gic_hyp_addr=00000000f9040000
(XEN) gic_vcpu_addr=00000000f9060000
(XEN) gic_maintenance_irq=25
(XEN) GICv2: Adjusting CPU interface base to 0xf902f000
(XEN) GICv2: 192 lines, 4 cpus (IID 00000000).
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Allocated console ring of 16 KiB.
(XEN) Bringing up CPU1
(XEN) Bringing up CPU2
(XEN) Bringing up CPU3
(XEN) Brought up 4 CPUs
(XEN) P2M: 40-bit IPA with 40-bit PA
(XEN) P2M: 3 levels with order-1 root, VTCR 0x80023558
/amba@0/smmu0@0xFD800000: Decode error: write to 6c=0
(XEN) I/O virtualisation enabled
(XEN) - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
[...]
Starting syslogd/klogd: done
Starting /usr/sbin/xenstored...
Setting domain 0 name, domid and JSON config...
Done setting up Dom0
Starting xenconsoled...
Starting QEMU as disk backend for dom0
Starting domain watchdog daemon: xenwatchdogd startup
[done]
Starting tcf-agent: OK
PetaLinux 2017.1 plnx_aarch64 /dev/hvc0
INIT: Id "PS0" respawning too fast: disabled for 5 minutes
PetaLinux 2017.1 plnx_aarch64 /dev/hvc0
plnx_aarch64 login:
Login using 'root' as the username and password
SD Booting Xen and Dom0 2017.3
To boot Xen from an SD card you need to copy the following files to the boot partition of the SD card:
BOOT.bin
Image
the compiled device tree file renamed to system.dtb (xen.dtb or xen-qemu.dtb for QEMU from the pre-built images, system.dtb from a Petalinux build)
xen.ub
rootfs.cpio.gz.u-boot (Only if using initrd instead of initramfs for the rootfs)
When using the pre-built images from the BSP, copy these files from <project-dir>/pre-built/linux/images. The prebuilt images are built to support Linux both with and without Xen, so some of the Xen-based image file names differ from those of a normal PetaLinux build. The prebuilt Linux kernel image includes an initramfs rootfs. PetaLinux builds (rather than prebuilt images) require an initrd rootfs, so another file for the rootfs must also be used as described below.
RootFS in Kernel (initramfs)
This method allows the use of a Linux kernel with an initramfs (such as the prebuilt image). Boot the SD card on hardware or QEMU and stop the u-boot autoboot. At the u-boot prompt run:
mmc dev $sdbootdev && mmcinfo; load mmc $sdbootdev:$partid 4000000 system.dtb && load mmc $sdbootdev:$partid 0x80000 Image; fdt addr 4000000; load mmc $sdbootdev:$partid 6000000 xen.ub; bootm 6000000 - 4000000
This would also allow a rootfs mounted on the SD card to be used. It requires that you extract the rootFS (cpio file) to the root partition of the SD card and set up the kernel command line to expect the rootfs on the SD card.
RootFS mounted on RAM (initrd)
This method is required when using a Linux image which is initrd based and does not include a rootfs. The rootfs.cpio.gz.u-boot file will be loaded into memory by u-boot. Boot the SD card on hardware or QEMU and stop the u-boot autoboot. At the u-boot prompt run:
mmc dev $sdbootdev && mmcinfo; load mmc $sdbootdev:$partid 4000000 system.dtb && load mmc $sdbootdev:$partid 0x80000 Image; fdt addr 4000000; load mmc $sdbootdev:$partid 6000000 xen.ub; load mmc $sdbootdev:$partid 9000000 rootfs.cpio.gz.u-boot; bootm 6000000 9000000 4000000
Starting simple additional guests (PetaLinux 2016.4 or later)
If running on QEMU, we'll need to setup a port mapping for port 22 (SSH) in our VM.
In this example, we forward the hosts port 2222 to the VM's port 22.
$ petalinux-boot --qemu --u-boot --qemu-args "-net nic -net nic -net nic -net nic -net user,tftp=images/linux,hostfwd=tcp:127.0.0.1:2222-10.0.2.15:22"
Once you hit the u-boot prompt, follow the steps in the earlier section on how to run Xen dom0.
When dom0 has finished booting, we'll need to copy a guest Image into dom0's filesystem.
We'll use the base prebuilt PetaLinux Image as our domU guest.
If running on QEMU, we use scp's -P option to connect to our hosts port 2222 where QEMU will forward the connection to the guests port 22:
To target QEMU run the following on the host:
scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -P 2222 images/linux/Image root@localhost:/boot/
If running on hardware run the following on the host:
scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no images/linux/Image root@<board-ip>:/boot/
If you would prefer to load DomU's kernel to the guest via SD card, you can follow the instructions in the "Starting Linux guests with Pass-through networking" section.
The xen-image-minimal rootFS includes some prepared configurations that you can use. These are located in '/etc/xen/'
$ cd /etc/xen
To start a simple guest run the following from the dom0 prompt
xl create -c example-simple.cfg
You'll see another instance of Linux booting up.
At any time you can leave the console of the guest and get back to dom0 by pressing ctrl+].
Once at the dom0 prompt you can list the guests from dom0:
xl list
To get back to the guests console:
xl console guest0
You can create further guests by for example running:
xl create example-simple.cfg name=\"guest1\"
xl create example-simple.cfg name=\"guest2\"
root@plnx_aarch64:/etc/xen# xl list
Name ID Mem VCPUs State Time(s)
Domain-0 0 512 1 r----- 79.8
guest0 1 256 2 ------ 93.7
guest1 2 256 2 ------ 26.6
guest2 3 256 2 ------ 1.8
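Creating several guests can also be scripted. The sketch below is a dry run that only prints the `xl create` invocations (`guest_cmds` is our own helper name); drop the `echo` to actually run them on the target, where `xl` is available.

```shell
# Dry-run: print one xl create command per guest instead of executing it.
guest_cmds() {
    for n in 1 2 3; do
        echo "xl create example-simple.cfg name=\"guest$n\""
    done
}
guest_cmds
```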
To destroy a guest:
xl destroy guest0
CPU Pinning
The following will only work on QEMU with multi-core enabled or on real HW.
When running multiple guests with multiple Virtual CPUs, Xen will schedule the various vCPUs onto real physical CPUs.
The rules and considerations taken in scheduling decisions depend on the chosen scheduler and the configuration.
To avoid having multiple vCPUs share a single pCPU, it is possible to pin a vCPU onto a pCPU and to give it exclusive access.
To create a simple guest with one Virtual CPU pinned to Physical CPU #3, you can do the following:
xl create example-simple.cfg 'name="g0"' 'vcpus="1"' 'cpus="3"'
Another way to pin virtual CPUs on to Physical CPUs is to create dedicated cpu-pools.
This has the advantage of isolating the scheduling instances.
By default a single cpu-pool named Pool-0 exists. It contains all the physical cpus.
We'll now create our pool named rt using the credit scheduler.
xl cpupool-create 'name="rt"' 'sched="credit"'
xl cpupool-cpu-remove Pool-0 3
xl cpupool-cpu-add rt 3
Now we are ready to create a guest with a single vcpu pinned to physical CPU #3.
xl create /etc/xen/example-simple.cfg 'vcpus="1"' 'pool="rt"' 'cpus="3"' 'name="g0"'
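As a recap, the cpu-pool steps above can be collected into a dry-run helper that prints the commands rather than running them (`rt_pool_cmds` is our own name; `xl` exists only on the target):

```shell
# Dry-run of the cpu-pool sequence: create pool "rt" on the credit scheduler,
# then move physical CPU 3 out of Pool-0 and into rt.
rt_pool_cmds() {
    echo "xl cpupool-create 'name=\"rt\"' 'sched=\"credit\"'"
    echo "xl cpupool-cpu-remove Pool-0 3"
    echo "xl cpupool-cpu-add rt 3"
}
rt_pool_cmds
```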
Starting Linux guests with Para-Virtual networking (PetaLinux 2016.4 or later)
This time we will run QEMU slightly differently. We'll create two port mappings: one for dom0's SSH port and another for the Para-Virtual domU.
The default IP addresses assigned by QEMUs builtin DHCP server start from 10.0.2.15 and count upwards.
Dom0 will be assigned 10.0.2.15, the next guest 10.0.2.16 and so on.
So here's the command line that maps host port 2222 to dom0's port 22 and 2322 to domU's port 22.
petalinux-boot --qemu --u-boot --qemu-args "-net nic -net nic -net nic -net nic -net user,tftp=./images/linux/,hostfwd=tcp:127.0.0.1:2222-10.0.2.15:22,hostfwd=tcp:127.0.0.1:2322-10.0.2.16:22"
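The hostfwd arguments follow a simple pattern: QEMU's built-in DHCP hands out 10.0.2.15 to dom0 and counts upward for each domU, and each mapping forwards a distinct host port to that guest's port 22. A hypothetical generator (our own helper, not part of petalinux-boot):

```shell
# Build the -net arguments for any number of guests; the first host port maps
# to dom0 (10.0.2.15), the second to the first domU (10.0.2.16), and so on.
hostfwds() {
    args="-net nic -net nic -net nic -net nic -net user,tftp=./images/linux/"
    i=0
    for hport in "$@"; do
        args="$args,hostfwd=tcp:127.0.0.1:$hport-10.0.2.$((15 + i)):22"
        i=$((i + 1))
    done
    echo "$args"
}
```

`petalinux-boot --qemu --u-boot --qemu-args "$(hostfwds 2222 2322)"` reproduces the command above.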
Now, follow the instructions from section 1 on how to boot Xen dom0.
Once you are at the dom0 prompt and have copied a domU image we'll need to setup the networking.
In this example, we will configure the guests to directly join the external network by means of a bridge.
First of all, we need to de-configure the default setup.
Kill the dhcp client for eth0:
# killall -9 udhcpc
List and remove existing addresses from eth0:
# ip addr show dev eth0
In our example the address is 10.0.2.15/24:
# ip addr del 10.0.2.15/24 dev eth0
Then, create the bridge and start DHCP on it for dom0:
# brctl addbr xenbr0
# brctl addif xenbr0 eth0
# /sbin/udhcpc -i xenbr0 -b
You should see something like the following:
udhcpc (v1.24.1) started
[ 186.459495] xenbr0: port 1(eth0) entered blocking state
[ 186.461194] xenbr0: port 1(eth0) entered forwarding state
Sending discover...
Sending select for 10.0.2.15...
Lease of 10.0.2.15 obtained, lease time 86400
/etc/udhcpc.d/50default: Adding DNS 10.0.2.3
Similar to before we will use the pre-defined examples in '/etc/xen/'
$ cd /etc/xen
# xl create -c example-pvnet.cfg
You should see a new linux instance boot up.
Now we'll ssh into the domU from the host running Para-Virtual networking:
ssh -p 2322 root@localhost
Starting Linux guests with Pass-through networking (PetaLinux 2017.1 or newer)
The difficulty with using pass-through networking is that the steps above use Dom0 networking to load the DomU boot image onto the guest. This won't work with pass-through networking, as Dom0 never has any networking available.
You will need to find a way to get the kernel and rootFS (the pre-built Image file) onto the guest. The steps below get the Image file onto an SD card image and attach it to QEMU. Similar steps can be followed for hardware, except just copy the Image file to a formatted SD card and insert it into the board.
Create and format the file we will be using on your host:
$ dd if=/dev/zero of=qemu_sd.img bs=128M count=1
$ mkfs.vfat -F 32 qemu_sd.img
Copy the Image file onto the card.
NOTE: We are using the pre-built Image which contains a kernel and rootFS. If you use the Image you built above then no rootFS is included. You will need to copy the rootFS onto the SD card and edit the Xen config file later to specify a rootFS.
$ mcopy -i qemu_sd.img ./pre-built/linux/images/Image ::/
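The image-creation steps can be sketched end to end on a scratch file. mkfs.vfat and mcopy come from dosfstools/mtools and may not be installed on every host, so this sketch guards those calls; the 128 MiB size matches the dd command above.

```shell
# Create a 128 MiB scratch image; format it only if dosfstools is available.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=128 2>/dev/null
command -v mkfs.vfat >/dev/null 2>&1 && mkfs.vfat -F 32 "$img" >/dev/null
# mcopy -i "$img" ./pre-built/linux/images/Image ::/   # as in the step above
```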
Now boot QEMU with this extra option appened inside the --qemu-args: -drive file=qemu_sd.img,if=sd,format=raw,index=1
The full command should look something like this for your prebuilt images:
petalinux-boot --qemu --u-boot --qemu-args "-net nic -net nic -net nic -net nic -net user,tftp=./pre-built/linux/images/,hostfwd=tcp:127.0.0.1:2222-10.0.2.15:22,hostfwd=tcp:127.0.0.1:2322-10.0.2.16:22 -drive file=qemu_sd.img,if=sd,format=raw,index=1"
The full command should look something like this for your own images:
petalinux-boot --qemu --u-boot --qemu-args "-net nic -net nic -net nic -net nic -net user,tftp=./images/linux/,hostfwd=tcp:127.0.0.1:2222-10.0.2.15:22,hostfwd=tcp:127.0.0.1:2322-10.0.2.16:22 -drive file=qemu_sd.img,if=sd,format=raw,index=1"
Then boot Dom0 following the steps above, with one difference. You will need to make sure that you tell Xen about the network passthrough. To do this you will need to edit the device tree. We are going to use u-boot to edit the device tree.
After loading the device tree to memory you will need to run this: fdt addr $fdt_addr && fdt resize 128; fdt set /amba/ethernet@ff0e0000 status "disabled" && fdt set /amba/ethernet@ff0e0000 xen,passthrough "1"
The full command for booting prebuilt images you built is shown below:
$ tftpb 4000000 xen-qemu.dtb; fdt addr 4000000 && fdt resize 128; fdt set /amba/ethernet@ff0e0000 status "disabled" && fdt set /amba/ethernet@ff0e0000 xen,passthrough "1" && tftpb 0x80000 xen-Image; tftpb 6000000 xen.ub; tftpb 0x1000000 xen-rootfs.cpio.gz.u-boot; bootm 6000000 0x1000000 4000000
The full command for booting images you built is shown below:
$ tftpb 4000000 system.dtb; fdt addr 4000000 && fdt resize 128; fdt set /amba/ethernet@ff0e0000 status "disabled" && fdt set /amba/ethernet@ff0e0000 xen,passthrough "1" && tftpb 0x80000 Image; tftpb 6000000 xen.ub; tftpb 0x1000000 rootfs.cpio.gz.u-boot; bootm 6000000 0x1000000 4000000
NOTE: If running on hardware you will need to make a change to allow the DMA transactions. See here for more details: Passthrough Network Example
Once you have logged onto the system mount the SD card and copy the image.
# mount /dev/mmcblk0 /mnt/
# cp /mnt/Image /boot/
Similar to before, we will use another pre-defined example in '/etc/xen/'
$ cd /etc/xen
# xl create -c example-passnet.cfg

OpenAMP 2017.3 (Under Construction)

...
Setting up Remote Firmware
The user can use, for example, a similar structure as the OpenAMP RPU applications created in the Building Remote Applications sections of UG1186 .
Using these sample applications as a model, edit the rsc_table.c file and modify the RSC_VDEV entry in the resources structure, to look as follows:
{ RSC_VDEV, VIRTIO_ID_RPMSG_, 0, RPMSG_IPU_C0_FEATURES, 0, 0, 0,
...
}
and change to:
{ RSC_VDEV, VIRTIO_ID_RPMSG_, 0, RPMSG_IPU_C0_FEATURES, 0, 0, VIRTIO_CONFIG_STATUS_DRIVER_OK,
...
}
In this example, you replaced 0 with VIRTIO_CONFIG_STATUS_DRIVER_OK. Without this change, the RPU firmware will not finish remoteproc/rpmsg initialization until the status bit is set, which indicates that the RPMsg driver is ready.
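Since the edit is a single-field change, it can be applied mechanically. The sed expression below is a hypothetical sketch, demonstrated on a scratch file containing the line from the snippet above; real rsc_table.c files may format the entry differently, so check the result before building.

```shell
# Patch the status field (the third trailing 0) to VIRTIO_CONFIG_STATUS_DRIVER_OK
# on a scratch copy of the resource-table line.
src=$(mktemp)
echo '{ RSC_VDEV, VIRTIO_ID_RPMSG_, 0, RPMSG_IPU_C0_FEATURES, 0, 0, 0,' > "$src"
sed -i.bak \
  's/RPMSG_IPU_C0_FEATURES, 0, 0, 0,/RPMSG_IPU_C0_FEATURES, 0, 0, VIRTIO_CONFIG_STATUS_DRIVER_OK,/' \
  "$src"
```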
In addition to changing the resource table, the user will need to specify a different IPI in the remoteproc device tree node; otherwise it will conflict with the IPI used in the Linux userspace RPMsg application. This change is required in 2017.3.
To Boot RPU Firmware via APU with Linux
These instructions assume the user has already generated firmware for the RPU and that the user is using Petalinux to create their embedded Linux solution.
...
#power-domain-cells = <0x0>;
pd-id = <0x7>;
};
pd_r5_1: pd_r5_1 {
#power-domain-cells = <0x0>;
pd-id = <0x8>;

};
pd_tcm_0_a: pd_tcm_0_a {
...
#power-domain-cells = <0x0>;
pd-id = <0x10>;
};
pd_tcm_1_a: pd_tcm_1_a {
#power-domain-cells = <0x0>;
pd-id = <0x11>;
};
pd_tcm_1_b: pd_tcm_1_b {
#power-domain-cells = <0x0>;
pd-id = <0x12>;

};
};
...
reg = <0x0 0xFFE20000 0x0 0x10000>;
pd-handle = <&pd_tcm_0_b>;
};
r5_1_tcm_a: tcm@ffe90000 {
compatible = "mmio-sram";
reg = <0x0 0xFFE90000 0x0 0x10000>;
pd-handle = <&pd_tcm_1_a>;
};
r5_1_tcm_b: tcm@ffeb0000 {
compatible = "mmio-sram";
reg = <0x0 0xFFEB0000 0x0 0x10000>;
pd-handle = <&pd_tcm_1_b>;

};
elf_ddr_0: ddr@3ed00000 {
...
test_r50: zynqmp_r5_rproc@0 {
compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
...
<0x0 0xff9a0100 0x0 0x100>, <0x0 0xff350000 0x0 0x100>,
...
= "rpu_base","ipi","rpu_glbl_base";
dma-ranges;
core_conf = "split0";
srams = <&r5_0_tcm_a &r5_0_tcm_b &elf_ddr_0>;
pd-handle = <&pd_r5_0>;
interrupt-parent = <&gic>;
interrupts = <0 30 4>;

} ;
/* UIO device node for vring device memory */
...
3. Then build the petalinux project.
petalinux-build
...
start the firmware step-by-step:
Log into Linux, then start RPU firmware, e.g:
echo <fw_name> > /sys/class/remoteproc/remoteproc0/firmware
...
};
Configuring the Petalinux project
The OpenAMP applications use Libmetal to access shared memory. Thus the libmetal package in
...
petalinux project must be enabled.
petalinux-config -c rootfs
Filesystem Packages --->
...
ZynqMP Linux Master running on APU Linux communicate with RPU via Shared Memory; FSBL load APU and RPU
Overview
...
to the previous section here for communication.
Examples for using shared memory via Libmetal for varying platforms can be found here:
Linux

XEN Hypervisor

...
{Xen4_27Mar.JPG}
Using the Xen Hypervisor with 2017.3
Building the Xen Hypervisor with PetaLinux 2017.3
General information for configuring and Building Linux Dom0
General information for configuring and Building Linux DomU

OpenAMP 2017.3 (Under Construction)

...
The echo-test application sends packets from Linux running on the quad-core Cortex-A53 to a single Cortex-R5 running FreeRTOS, which sends them back.
Extract the BOOT.BIN, image.ub and openamp.dtb files from a pre-built PetaLinux BSP tarball to the SD card
...
tar xvf xilinx-zcu102-v2017.3-final.bsp --strip-components=4 --wildcards
host shell$ cp BOOT.BIN image.ub openamp.dtb <your sd card>
Note: Alternatively, if you already created a Petalinux project with a provided BSP for your board, pre-built images can also be found under the <your project>/pre-built/linux/images/ directory.

OpenAMP 2017.3 (Under Construction)

...
Setting up Remote Firmware
The user can, for example, use a structure similar to that of the OpenAMP RPU applications created in the Building Remote Applications sections of UG1186.
The user will also need to specify a different IPI in the remoteproc device tree node. Otherwise, it will conflict with the IPI used in the Linux userspace RPMsg application. This change is required in 2017.3
To Boot RPU Firmware via APU with Linux
These instructions assume the user has already generated firmware for the RPU and that the user is using Petalinux to create their embedded Linux solution.
...
ZynqMP Linux Master running on APU Linux communicate with RPU via Shared Memory
Overview
...
between processors. The use of IPI and Shared Memory is documented in the section titled
"ZynqMP Linux Master running on APU Linux loads and runs arbitrary RPU Firmware; APU communicate with RPU via RPMsg in Userspace".

Device Tree Settings for Linux
To make the shared memory device accessible to Linux running on APU, there must be some modifications in the device tree.

OpenAMP 2017.3 (Under Construction)

...
echo <name of firmware> > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state
ZynqMP Linux Master running on APU loads and runs arbitrary RPU Firmware; APU communicates with RPU via RPMsg in a Userspace OpenAMP Application
Overview
...
a Linux on APU + Bare-metal/RTOS on RPU. APU Linux will use remoteproc to load the
...
RPU via the OpenAMP library implementation of RPMsg.
Setting up Remote Firmware
The user can, for example, use a structure similar to that of the OpenAMP RPU applications created in the Building Remote Applications sections of UG1186.
To Boot RPU Firmware via APU with Linux
These instructions assume the user has already generated firmware for the RPU and that the user is using Petalinux to create their embedded Linux solution.
1. As directed in
...
in /lib/firmware.
If creating a
...
below (0x3ed00000).
2. Modify
the device
/include/ "system-conf.dtsi"
/{
...
echo <fw_name> > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state
ZynqMP Linux Master running on APU
...
Shared Memory without OpenAMP
Overview
...
the Linux master and RPU
...
Shared Memory. Additionally, IPI can
...
section titled
"ZynqMP Linux Master running on APU Linux loads and runs arbitrary RPU Firmware; APU communicate with RPU via RPMsg in Userspace".
Device Tree Settings for Linux
...
static inline void metal_io_write(struct metal_io_region *io, unsigned long offset, uint64_t value, memory_order order, int width);
An example showing the use of these functions in Linux userspace can be found here. At the link are some examples showing the use of reading from, and writing to shared memory as well as initialization and cleanup of Libmetal resources.
ZynqMP Linux Master running on APU Linux
...
via Shared Memory; FSBL load APU and RPU
Overview
The information below is intended to provide guidance to users who wish to set up a Linux + bare-metal/RTOS system. We assume that the Linux master and RPU will communicate via Shared Memory; please refer to the previous section for communication. Similar to the previous section, we assume that the device tree, Linux master C code and firmware C code are set up to use Libmetal on either Linux userspace, baremetal or FreeRTOS.

OpenAMP 2017.3 (Under Construction)

...
{libmetal-doc-20170418.pdf}
URLs to source code:
Xilinx Openamp and Libmetal related code:
The following locations provide access to the code:
https://github.com/Xilinx/open-amp/tree/xilinx-v2017.2
OpenAMP Library and Demonstration code
https://github.com/Xilinx/libmetal/tree/xilinx-v2017.2
Libmetal Library and Demonstration code
https://github.com/Xilinx/meta-openamp/tree/rel-v2017.2
Yocto recipe to build OpenAMP and Libmetal
https://github.com/Xilinx/linux-xlnx/tree/xilinx-v2017.2
Xilinx version of the Linux kernel
e.g. for main components:
Demo Applications:
https://github.com/Xilinx/meta-openamp/tree/rel-v2017.2/recipes-openamp/rpmsg-examples
https://github.com/Xilinx/open-amp/tree/xilinx-v2017.2/apps
https://github.com/Xilinx/libmetal/tree/xilinx-v2017.2/examples/system
https://github.com/Xilinx/embeddedsw/tree/xilinx-v2017.2
Xilinx SDK template for RPU baremetal firmware (echo test, matrix multiplication and rpc demo)
Libraries:
https://github.com/Xilinx/open-amp/tree/xilinx-v2017.2/lib
https://github.com/Xilinx/libmetal/tree/xilinx-v2017.2/lib
LKMs
https://github.com/Xilinx/meta-openamp/tree/rel-v2017.2/recipes-kernel/rpmsg-user-module/files
https://github.com/Xilinx/meta-openamp/tree/rel-v2017.2/recipes-kernel/rpmsg-proxy-module/files
https://github.com/Xilinx/linux-xlnx/tree/xilinx-v2017.2/drivers/rpmsg
https://github.com/Xilinx/linux-xlnx/tree/xilinx-v2017.2/drivers/remoteproc
OpenAMP framework OSS:
https://github.com/OpenAMP
FAQ:
Is there a way to reduce Petalinux build time with OpenAMP?
To reduce extra (re)-compilation time for the remote processor firmware built with Petalinux and to preserve the
FreeRTOS source code used in the temporary build directory:
Edit your <petalinux-project>/project-spec/meta-user/conf/petalinuxbsp.conf file and add:
RM_WORK_EXCLUDE += "openamp-fw-echo-testd openamp-fw-mat-muld openamp-fw-rpc-demo"
Remote firmware failed to boot and now I see an error saying "failed to declare rproc mem as DMA mem", why?
This happens after an invalid image is provided to remoteproc, which then exits before freeing all allocated memory, preventing further allocation on the next run.
In this situation, in order to load a new openamp image you need to reboot Linux.
The patch below will take care of fixing remoteproc so that you are not forced to reboot Linux.
{0001-remoteproc-resource_cleanup-releases-DMA-declared-me.patch}
Note this however doesn't fix the root cause of the issue, which is probably that the footprint of the ELF image provided to remoteproc doesn't match the allocated memory in the DTS.
Additional examples:
ZynqMP Linux Master running on APU with RPMsg in kernel space and 2 RPU slaves.
...
Remote processor applications (echo_test, matrix multiply, rpc demo) code is by default set to run with RPU-0 and needs to be slightly modified for RPU-1.
When RPU-1 is selected in Xilinx SDK, the generated code needs to be modified as follows:
...
IPI_IRQ_VECT_ID value from 65 to 66
Edit
...
IPI_BASE_ADDR value from 0xFF310000 to 0xFF320000
Check that the application linker script (lscript.ld) addresses match
...
memory sections.
Check that the application rsc_table.c address for the RSC_RPROC_MEM carveout is not overlapping the linker script addresses.

Example: Running two echo_test application concurrently on Linux, each communicating to one RPU
Use Petalinux to build/boot your target and then login to Linux console serial port.
...
In the above case /dev/rpmsg0 is used for RPU-0.
If however RPU-1 was started first, it would have been associated with /dev/rpmsg0 and RPU-0 would have been using /dev/rpmsg1.
...
space and only one RPU slave or RPU in lockstep.
When running with RPU in split mode and only one RPU is an OpenAMP slave, the second RPU can still run another non-openamp application.
RPU-0 slave:
...
RPU-1 slave:
Proceed as for the two RPU configuration above and edit your device tree to remove the unused 'zynqmp_r5_rproc' entry and associated nodes (tcm, pd, ...) that may no longer be needed.
RPU in lockstep:
When running with RPU in lockstep mode, the setup is almost the same as running on RPU-0; however, the device tree is slightly different, please see openamp-overlay-lockstep.dtsi
Note: Depending on your BSP, you may need to update the Vivado design to add RPU to the isolation configuration, mark it non-secure, and assign 4 TCMs.
ZynqMP Linux Master
...
Linux loads OpenAMP RPU Firmware
Overview
...
a Linux on APU + Bare-metal/RTOS on RPU. This configuration relies
To Boot RPU Firmware via APU with Linux
These instructions assume the user has already generated firmware for the RPU and that the user is using Petalinux to create their embedded Linux solution.
1. As directed in UG1186, create an
...
in /lib/firmware.
To create a template for a yocto recipe to install the firmware, do the following:
Create a yocto application inside of the Petalinux project
petalinux-create -t apps --template install -n <app_name> --enable
copy firmware (.elf file) into project-spec/meta-user/recipes-apps/<app_name>/files/ directory
Modify the project-spec/meta-user/recipes-apps/<app_name>/<app_name>.bb to install the remote processor firmware in the RootFS as follows:
SUMMARY = "Simple test application"
SECTION = "PETALINUX/apps"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
SRC_URI = "file://<myfirmware>"
S = "${WORKDIR}"
INSANE_SKIP_${PN} = "arch"
do_install() {
    install -d ${D}/lib/firmware
    install -m 0644 ${S}/<myfirmware> ${D}/lib/firmware/<myfirmware>
}
FILES_${PN} = "/lib/firmware/<myfirmware>"
Build Linux images with the "petalinux-build" command inside the PetaLinux project.
2. Modify the device tree at project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi. For example:
/ {
...
echo <name of firmware> > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state
5. Run Linux application
6. Stop firmware
echo stop > /sys/class/remoteproc/remoteproc0/state
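The device-tree edit in step 2 typically includes a reserved-memory carveout that keeps the kernel away from the DDR region the firmware is loaded into. The sketch below uses the standard reserved-memory binding; the node name and the 0x3ed00000 base/size are illustrative assumptions (not values confirmed by this page) and must match your RPU linker script:

```dts
/ {
    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        /* Illustrative carveout; base and size must match the RPU firmware's
         * linker script so the loaded ELF lands inside the reserved region. */
        rproc_0_reserved: rproc@3ed00000 {
            no-map;
            reg = <0x0 0x3ed00000 0x0 0x1000000>;
        };
    };
};
```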

ZynqMP Linux loads RPU, Linux OpenAMP Application talks to RPU OpenAMP Application
Overview
...
To Boot RPU Firmware via APU with Linux
These instructions assume the user has already generated firmware for the RPU and that the user is using Petalinux to create their embedded Linux solution.
...
directed in UG1186, create a yocto recipe inside of
...
in /lib/firmware. Refer to the previous example, ZynqMP Linux Master running on APU Linux loads OpenAMP RPU Firmware, for a guide on how to create such a yocto recipe.
Modify the device tree at project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi. For example:
/include/ "system-conf.dtsi"
...
petalinux-config -c rootfs
2. Enable the required rootfs packages for the application. If you are running the sample applications from UG1186, the packages would be enabled by the following:
Filesystem Packages --->
misc --->
packagegroup-petalinux-openamp --->
[*] packagegroup-petalinux-openamp

3. Then build the petalinux project.
petalinux-build
...
Configuring the Petalinux project
The OpenAMP applications use Libmetal to access shared memory, so the libmetal package must be enabled in your petalinux project. This package can be enabled in the rootfs configuration using the petalinux-config utility.
run:
petalinux-config -c rootfs
and then in the utility enable the following packages:
Filesystem Packages
--> libs
--> libmetal
--> [ * ] libmetal
--> openamp
--> [ * ] open-amp
--> misc
--> openamp-fw-echo-testd
--> [ * ] openamp-fw-echo-testd
--> openamp-fw-mat-muld
--> [ * ] openamp-fw-mat-muld
--> openamp-fw-rpc-demod
--> [ * ] openamp-fw-rpc-demod
--> rpmsg-echo-test
--> [ * ] rpmsg-echo-test
--> rpmsg-mat-mul
--> [ * ] rpmsg-mat-mul
--> rpmsg-proxy-app
--> [ * ] rpmsg-proxy-app
--> rpmsg-proxy-module
--> [ * ] rpmsg-proxy-module
--> rpmsg-user-module
--> [ * ] rpmsg-user-module

Communicating via Shared memory
The below information is constructed with the assumption that the shared memory node is visible in Linux userspace.
Using the Libmetal API, we can read from and write to shared memory with the following functions:
static inline uint64_t metal_io_read(struct metal_io_region *io, unsigned long offset, memory_order order, int width);
int metal_io_block_read(struct metal_io_region *io, unsigned long offset, void *restrict dst, int len);
and
static inline void metal_io_write(struct metal_io_region *io, unsigned long offset, uint64_t value, memory_order order, int width);
int metal_io_block_write(struct metal_io_region *io, unsigned long offset, const void *restrict src, int len);
An example showing the use of these functions in Linux userspace can be found here. At the link are some examples showing the use of reading from, and writing to shared memory as well as initialization and cleanup of Libmetal resources.
ZynqMP APU Linux communicate with RPU via Shared Memory

...
copy firmware (.elf file) into project-spec/meta-user/recipes-apps/<app_name>/files/ directory
Modify the project-spec/meta-user/recipes-apps/<app_name>/<app_name>.bb to install the remote processor firmware in the RootFS as follows:
...
INSANE_SKIP_${PN} = "arch"
do_install() {
    install -d ${D}/lib/firmware
    install -m 0644 ${S}/<myfirmware> ${D}/lib/firmware/<myfirmware>
}
Build Linux images with the "petalinux-build" command inside the PetaLinux project.
2. Modify the device tree at project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi. For example:
...
petalinux-config -c rootfs
2. Enable the required rootfs packages for the application. If you are running the sample applications from UG1186, the packages would be enabled by the following:
Filesystem Packages
--> libs
--> libmetal
--> [ * ] libmetal
--> openamp
--> [ * ] open-amp
--> misc
--> openamp-fw-echo-testd
--> [ * ] openamp-fw-echo-testd
--> openamp-fw-mat-muld
--> [ * ] openamp-fw-mat-muld
--> openamp-fw-rpc-demod
--> [ * ] openamp-fw-rpc-demod
--> rpmsg-echo-test
--> [ * ] rpmsg-echo-test
--> rpmsg-mat-mul
--> [ * ] rpmsg-mat-mul
--> rpmsg-proxy-app
--> [ * ] rpmsg-proxy-app
--> rpmsg-proxy-module
--> [ * ] rpmsg-proxy-module
--> rpmsg-user-module
--> [ * ] rpmsg-user-module
3. Then build the petalinux project.
petalinux-build
...
echo <fw_name> > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state
Run the Linux Application
Stop firmware
echo stop > /sys/class/remoteproc/remoteproc0/state

ZynqMP on APU Linux communicate with RPU via Shared Memory without OpenAMP
Overview
...
int metal_io_block_write(struct metal_io_region *io, unsigned long offset, const void *restrict src, int len);
An example showing the use of these functions in Linux userspace can be found here. At the link are some examples showing the use of reading from, and writing to shared memory as well as initialization and cleanup of Libmetal resources.
ZynqMP APU Linux communicate with RPU via Shared Memory
Overview
The information below is intended to provide guidance to users who wish to set up a Linux + Bare-metal/RTOS configuration. We make the assumption that the Linux master and RPU will communicate via Shared Memory. Please refer to the previous section for communication. Similar to the previous section, we make the assumption that the device tree, Linux Master C code and firmware C code are set up to use Libmetal on either Linux userspace, Baremetal or FreeRTOS.
Examples for using shared memory via Libmetal for varying platforms can be found here:
Linux
FreeRTOS
Baremetal
Generate BOOT.BIN
This section assumes that you have already run petalinux-build to build all the necessary components for your embedded Linux solution, in addition to the firmware to run on an RPU.
Use Petalinux tools to generate the BOOT.BIN.
Below is a sample bootgen.bif file that you can create or modify in the top level directory of your Petalinux project that you can use to help construct the BOOT.BIN:
the_ROM_image:

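The body of the sample bif is not reproduced in this capture. As a sketch only, a typical ZynqMP bif has the shape below; the file names are placeholders and the exact partition list (PL bitstream, RPU firmware, ATF, U-Boot) depends on your design:

```text
the_ROM_image:
{
    [bootloader, destination_cpu=a53-0] zynqmp_fsbl.elf
    [destination_cpu=pmu] pmufw.elf
    [destination_device=pl] system.bit
    [destination_cpu=r5-0] rpu_firmware.elf
    [destination_cpu=a53-0, exception_level=el3, trustzone] bl31.elf
    [destination_cpu=a53-0, exception_level=el2] u-boot.elf
}
```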
Build Device Tree Blob

...
Generate DTS/DTSI files to folder my_dts where output DTS/DTSI files will be generated
generate_target -dir my_dts
Generate Board file Device Tree Source (.dts/.dtsi) files on the command line using HSM/HSI
Source Xilinx design tools
Run HSM or HSI (Vivado 2014.4 onwards)
> hsm
# Open HDF file
open_hw_design <design_name>.hdf
# Set repository path (clone done in previous step in SDK) (On Windows use this format set_repo_path {C:\device-tree-xlnx})
set_repo_path <path to device-tree-xlnx repository>
# Create SW design and setup CPU (for ZynqMP psu_cortexa53_0, for Zynq ps7_cortexa9_0, for Microblaze microblaze_0)
create_sw_design device-tree -os device_tree -proc ps7_cortexa9_0
# Provide the Board name as below ex: board zcu102-rev1.0
set_property CONFIG.periph_type_overrides "{BOARD zcu102-rev1.0}" [get_os]
7. Generate DTS/DTSI files to folder my_dts where output DTS/DTSI files will be generated
generate_target -dir my_dts
8. In the generated my_dts folder we should see zcu102-rev1.0.dtsi file
Compiling a Device Tree Blob (.dtb) file from the DTS
A utility called device tree compiler (DTC) is used to compile the DTS file into a DTB file. DTC is part of the Linux source directory. linux-xlnx/scripts/dtc/ contains the
Once the DTC is available, the tool may be invoked to generate the DTB:
./scripts/dtc/dtc -I dts -O dtb -o <devicetree name>.dtb <devicetree name>.dts
DTC may also be used to convert a DTB back into a DTS:
./scripts/dtc/dtc -I dtb -O dts -o <devicetree name>.dts <devicetree name>.dtb
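As a quick sanity check of the dtc commands above, any minimal hand-written DTS can be compiled and then decompiled. The fragment below is illustrative only (not taken from the original page):

```dts
/dts-v1/;

/ {
    #address-cells = <1>;
    #size-cells = <1>;
    model = "Minimal dtc round-trip example";

    memory@0 {
        device_type = "memory";
        reg = <0x0 0x40000000>;
    };
};
```

Saving this as minimal.dts, "./scripts/dtc/dtc -I dts -O dtb -o minimal.dtb minimal.dts" produces the blob, and running dtc again with "-I dtb -O dts" recovers an equivalent source file.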
Alternative: For ARM only
In the
...
files from linux-xlnx/arch/arm/boot/dts/ into DTB
make ARCH=arm dtbs
...
located in linux-xlnx/arch/arm/boot/dts/.
A single linux-xlnx/arch/arm/boot/dts/<devicetree name>.dts may be compiled into linux-xlnx/arch/arm/boot/dts/<devicetree name>.dtb:
make ARCH=arm <devicetree name>.dtb
NOTE! THIS IS
...
NOT USE THIS.
Creating a Device
...
2014.1 (or earlier)
Open the hardware
...
in XPS.
Export the hardware
...
to SDK:

XPS Menu: Project > Export Hardware Design to SDK... > Export & Launch SDK
The Device Tree Generator Git repository needs to be cloned from Xilinx. See the Fetch Sources page for more information on Git. Note that there are two repos for differing SDK versions below.

...
Source Xilinx design tools
Run HSM or HSI (Vivado 2014.4 onwards)
> hsi
3. Open HDF file
open_hw_design <design_name>.hdf
4. Set repository path
set_repo_path <path to device-tree-xlnx repository>
5. Create SW design
create_sw_design device-tree -os device_tree -proc ps7_cortexa9_0
6. Provide the Board name as below ex: board zcu102-rev1.0
set_property CONFIG.periph_type_overrides "{BOARD zcu102-rev1.0}" [get_os]
7. Generate DTS/DTSI files to folder my_dts where output DTS/DTSI files will be generated
generate_target -dir my_dts
...
my_dts folder the zcu102-rev1.0.dtsi file should be present.
Compiling
a Device
...
from the DTS
A
utility called device tree compiler (DTC) is
...
source directory. linux-xlnx/scripts/dtc/ contains the
Once the DTC is available, the tool may be invoked to generate the DTB:
./scripts/dtc/dtc -I dts -O dtb -o <devicetree name>.dtb <devicetree name>.dts
DTC may also be used to convert a DTB back into a DTS:
./scripts/dtc/dtc -I dtb -O dts -o <devicetree name>.dts <devicetree name>.dtb
Alternative: For ARM only
In the
...
files from linux-xlnx/arch/arm/boot/dts/ into DTB
make ARCH=arm dtbs
...
located in linux-xlnx/arch/arm/boot/dts/.
A single linux-xlnx/arch/arm/boot/dts/<devicetree name>.dts may be compiled into linux-xlnx/arch/arm/boot/dts/<devicetree name>.dtb:
make ARCH=arm <devicetree name>.dtb
NOTE! THIS IS
...
NOT USE THIS.
Creating
a Device
...
2014.1 (or earlier)
Open
the hardware
...
in XPS.
Export
the hardware
...
to SDK:
XPS Menu: Project > Export Hardware Design to SDK... > Export & Launch SDK
The Device Tree Generator Git repository needs to be cloned from Xilinx. See the Fetch Sources page for more information on Git. Note that there are two repos for differing SDK versions below.

git clone git://github.com/Xilinx/device-tree.git bsp/device-tree_v0_00_x
Note: In order for SDK to be able to import the Device Tree Generator correctly, the file and directory hierarchy needs to look like:

...
7. Generate DTS/DTSI files to folder my_dts where output DTS/DTSI files will be generated
generate_target -dir my_dts
...
my_dts folder zcu102-rev1.0.dtsi file should
Compiling a Device Tree Blob (.dtb) file from the DTS
A utility called device tree compiler (DTC) is used to compile the DTS file into a DTB file. DTC is part of the Linux source directory. linux-xlnx/scripts/dtc/ contains the source code for DTC and needs to be compiled in order to be used. One way to compile the DTC is to build the Linux tree. The DTC might also be available through your OS's package manager.
...
The Device Tree Generator Git repository needs to be cloned from Xilinx. See the Fetch Sources page for more information on Git. Note that there are two repos for differing SDK versions below.
git clone git://github.com/Xilinx/device-tree.git bsp/device-tree_v0_00_x
Note: In order
...
look like:
<bsp repo>/bsp/device-tree_v0_00_x/data/device-tree_v2_1_0.mld
<bsp repo>/bsp/device-tree_v0_00_x/data/device-tree_v2_1_0.tcl
Add
the BSP
...
git area):

SDK Menu: Xilinx Tools > Repositories > New... (<bsp repo>) > OK
Create a Device Tree Board Support Package (BSP):