Hi All,
I have a requirement to target a Xilinx RFSoC Gen 3 device and as such need to use Vivado 2020.1 or later; according to what I’ve read, 2021.1 “should” be OK.
When preparing a new Podman container which targets Vivado 2021.1, the following error is flagged after running “RUN source cdk/opencpi-setup.sh -s && ocpiadmin install platform xsim”:
STEP 18: RUN source cdk/opencpi-setup.sh -s && ocpiadmin install platform xsim
The HDL platform “xsim” is already defined in this installation, in /opencpi/project-registry/ocpi.core/exports/hdl/platforms/xsim.
============== For library bsv:
Building the bsv library for xsim (target-xsim/ocpi_core_bsv) 0:()
Error: xsim failed(1). See target-xsim/bsv-xsim.out. 0:00.00 at 08:37:18
make[2]: *** [target-xsim/ocpi_core_bsv] Error 1
make[1]: *** [bsv] Error 2
make: *** [hdlprimitives] Error 2
Error: error building at STEP “RUN source cdk/opencpi-setup.sh -s && ocpiadmin install platform xsim”: error while running runtime: exit status 1
When I target 2020.2, the xsim platform install command completes without error.
In addition, if I launch the container prior to installing XSIM, I am able to complete the 2021.1 XSIM installation process from inside the running container.
I’ve had a look at the environment variables and couldn’t see any differences between the container “build” environment and the “running” container environment. I subsequently added the following just to make sure (a rough way of diffing the two environments is sketched after this list)…
ENV OCPI_XILINX_VERSION=2021.1
ENV OCPI_XILINX_LICENSE_FILE=/opencpi/Xilinx.lic
ENV OCPI_TOOL_PLATFORM=centos7
ENV OCPI_PREREQUISITES_DIR=/opencpi/prerequisites
ENV OCPI_TOOL_OS_VERSION=c7
ENV OCPI_CDK_DIR=/opencpi/cdk
ENV OCPI_TOOL_OS=linux
ENV OCPI_TOOL_PLATFORM_DIR=/opencpi/project-registry/ocpi.core/exports/rcc/platforms/centos7
ENV OCPI_TOOL_ARCH=x86_64
ENV OCPI_TOOL_DIR=centos7
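One rough way to diff the two environments, for what it’s worth (the image name here is just an example):
# in the Containerfile, just before the failing step:
RUN env | sort > /tmp/build-env.txt
# then, once a (partial) image has built, compare against the run-time environment:
podman run --rm opencpi_xsim_247 sh -c 'env | sort | diff /tmp/build-env.txt -'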
I saw the same error message for OpenCPI v2.4.6 and v2.4.7.
I’d be happy to stick with 2020.2, but I now have other issues relating to the ocpiadmin install platform xilinx20_2_aarch32 command (to be discussed separately!)
Has anyone seen similar?
Hi Tremayne,
I’d really need to see the contents of the build log for that bsv HDL library to get any idea as to what is going on.
I’ve definitely compiled that library for XSim 2021.1 before, on a non-virtual Arch Linux install, so it should work.
I actually might be attempting to compile OpenCPI against Vivado 2023.1 in the coming week or so. If I see anything similar, I’ll report back.
Hi.
We’re currently working on mainlining Xilinx RFSoC Gen3 support in OpenCPI.
Right now there’s a working branch that uses the Xilinx 2021.1 tools. It includes the xilinx21_1_aarch64 RCC platform. You can find it here: Files · rfdc · Geon Technologies / OpenCPI / OpenCPI · GitLab
We’ve built the xsim platform against it and it should work. The only caveat I can see is that we’re using an Ubuntu 20.04 host operating system vs CentOS 7.
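For reference, the rough flow against that branch is the usual one (the clone URL below is just a placeholder for the GitLab link above):
git clone --branch rfdc <gitlab-url> opencpi
cd opencpi
# install OpenCPI itself as usual, then:
source cdk/opencpi-setup.sh -s
ocpiadmin install platform xilinx21_1_aarch64
ocpiadmin install platform xsim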
Hi Both,
I suspect a native install will be fine (i.e. a non-containerised environment), as I was able to do the 2021.1 XSIM install on a running container outside the container “build” environment.
Ubuntu may be worth a shot.
I’ll post the next issue I have about adding xilinx21_1_aarch64 or xilinx20_4_aarch64 which both fail…
I’ll also have a look at Files · rfdc · Geon Technologies / OpenCPI / OpenCPI · GitLab
Hi Both,
The Files · rfdc · Geon Technologies / OpenCPI / OpenCPI · GitLab branch gives me the same error message as v2.4.7 when running the following build command:
RUN source cdk/opencpi-setup.sh -s && ocpiadmin install platform xsim
I’ll now give Ubuntu a try…
I don’t have a copy of 2021.1 installed, but I do have my recently installed 2023.1.
I used my ocpicontainer repository, and built an image for Xilinx 2023.1 on CentOS 7:
make build-centos7-opencpi-release-2.4.7-xilinx-2023.1-testbias
This built successfully (no errors), although the contents of test.output did not look correct (I’ll be investigating this further).
Are you able to get the file projects/core/hdl/primitives/bsv/target-xsim/bsv-xsim.out out of the container?
Without seeing what the specific error is I’d struggle to suggest what’s going on.
I’m not sure how to catch and copy this file from the container as it bombs out before caching a Podman image.
You could reduce the Dockerfile down to just the commands that work. Then run the image when it finishes building (podman container run -it <image-name> should do it), and run the failing command interactively. That way, the container will still be open when the command fails and the file can be cat’d.
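Something along these lines, roughly (the image name is illustrative, and the paths assume the /opencpi layout from your environment above):
podman container run -it opencpi_xsim_partial bash
# then, inside the container:
cd /opencpi
source cdk/opencpi-setup.sh -s
ocpiadmin install platform xsim
cat projects/core/hdl/primitives/bsv/target-xsim/bsv-xsim.out   # read the log once the build fails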
I’ve tried that already, but when running the working image in interactive mode, the XSIM installation works fine. That’s why I took a closer look at the environment variables (which seemed fine).
So to clarify, if you open an interactive CentOS 7 container, with nothing installed in it, and manually run all of the install commands, it works?
That is beyond strange.
Another option would be to edit your build command to always cat out that log file after it runs, in the event that the command fails:
RUN \
source cdk/opencpi-setup.sh -s \
&& ocpiadmin install platform xsim \
|| cat $OCPI_ROOT_DIR/projects/core/hdl/primitives/bsv/target-xsim/bsv-xsim.out
This feels like an environment setup issue, especially given that bsv is the very first HDL library that is compiled.
I think you’re right; it looks like there’s a permissions issue.
Firstly, I can confirm that the /tools/Xilinx/ folder has full 777 permissions set:
[trem@localhost /]$ ls -l /tools/Xilinx/Vivado/2021.1/.settings64-Vivado.sh
-rwxrwxrwx. 1 root root 371 Apr 25 02:24 /tools/Xilinx/Vivado/2021.1/.settings64-Vivado.sh
[trem@localhost /]$ ls -l tools
total 0
drwxrwxrwx. 9 root root 133 Apr 26 05:45 Xilinx
[trem@localhost /]$ . /tools/Xilinx/Vivado/2021.1/.settings64-Vivado.sh
But when I run your suggested script, I get the following:
STEP 18: RUN source cdk/opencpi-setup.sh -s && ocpiadmin install platform xsim || cat $OCPI_ROOT_DIR/projects/core/hdl/primitives/bsv/target-xsim/bsv-xsim.out
The HDL platform “xsim” is already defined in this installation, in /opencpi/project-registry/ocpi.core/exports/hdl/platforms/xsim.
============== For library bsv:
Building the bsv library for xsim (target-xsim/ocpi_core_bsv) 0:()
Error: xsim failed(1). See target-xsim/bsv-xsim.out. 0:00.00 at 12:34:53
make[2]: *** [target-xsim/ocpi_core_bsv] Error 1
make[1]: *** [bsv] Error 2
make: *** [hdlprimitives] Error 2
Commands to execute tool:
set -e;
set -e;
. /tools/Xilinx/Vivado/2021.1/.settings64-Vivado.sh ;
export XILINX_LOCAL_USER_DATA=no;
export LM_LICENSE_FILE=/opencpi/Xilinx.lic;
(echo verilog work /tools/Xilinx/Vivado/2021.1/data/verilog/src/glbl.v > bsv.prj && xvhdl -v 2 -work ocpi_core_bsv=ocpi_core_bsv ../bsv_pkg.vhd -prj bsv.prj && xvlog -v 2 -work ocpi_core_bsv=ocpi_core_bsv ../SyncReset0.v ../RevertReg.v ../ClockInverter.v ../FIFO2.v ../mkLCDController.v ../SyncFIFO.v ../ClockDiv.v ../FIFO1.v ../BypassWire.v ../mkCRC32.v ../BsvCounter.v ../SizedFIFO.v ../SyncRegister.v ../TriState.v ../ResetEither.v ../BRAM1Load.v ../SyncHandshake.v ../FIFO20.v ../FIFO2X.v ../ResetInverter.v ../MakeReset0.v ../SyncResetA.v ../ResetToBool.v ../SyncPulse.v ../BypassCrossingWire.v ../FIFO10.v ../MakeResetA.v ../SyncBit.v -prj bsv.prj ) || (rm -r -f bsv && exit 1)
Output from executing commands above:
bash: /tools/Xilinx/Vivado/2021.1/.settings64-Vivado.sh: Permission denied
0:00.00 at 12:34:53
d676088e4d4fea31d7b49b28147b3b5c0472d66fb7e3d421e7da5e7295bff1b4
What does your --mount flag look like to podman? Mine from ocpicontainer looks like this:
--mount type=bind,source=$(from_xilinx_dir),target=$(to_xilinx_dir),readonly \
Here’s what my permissions look like (the same for settings64.sh):
.rwxr-xr-x 365 user 4 Apr 16:24 .settings64-Vivado.sh
Does your container create a user (so that it runs as non-root)? Could it be that it isn’t happy because your settings file is owned by root?
You could chown the whole /tools/Xilinx directory to your user:
chown -R <user>:<user> /tools/Xilinx
I use the following podman build command with the volume switch (I used the same for 2019.2 and 2020.2, which both work):
podman build -v /tools/Xilinx:/tools/Xilinx:rw -t opencpi_xsim_247 -f xsim247.Containerfile .
My container only uses root, so I guess chown wouldn’t make a difference?
My bad, I copied the wrong lines from mine. My build command uses the same --volume flag you do, but I mount ro, not rw.
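i.e. something like this (your command from above, just with ro instead of rw):
podman build -v /tools/Xilinx:/tools/Xilinx:ro -t opencpi_xsim_247 -f xsim247.Containerfile .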
What does your container think the permissions are?
You could add some new RUN statements before the failing one. You could also try manually sourcing the settings file to see what happens:
RUN ls -la /tools/Xilinx/Vivado/2021.1
RUN . /tools/Xilinx/Vivado/2021.1/settings64.sh
Even though the Podman build command was running as “root” and the Xilinx install folder permissions had been set to 777, the Podman build process didn’t have access to the shared Xilinx drive.
This was fixed by adding the following option to the build command:
--security-opt label=disable
with the full build command looking like:
podman build --format docker --security-opt label=disable -v /tools/Xilinx:/tools/Xilinx:rw -t opencpi_ub_xsim_247 -f ub_xsim247.Containerfile .
Another option was to use “z” instead of “rw” for the -v option.
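i.e. something like the following, where :z asks Podman to apply a shared SELinux relabel to the mount (untested here, but it avoids the --security-opt flag):
podman build --format docker -v /tools/Xilinx:/tools/Xilinx:z -t opencpi_ub_xsim_247 -f ub_xsim247.Containerfile .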
Not sure why 2021.1 is different to 2020.2 or 2019.2.
Thanks to @waltersdom for all the help with this.