Is PCIe still a valid choice for a new platform?

We’re considering a project where the connection between an RCC container and an HDL container would be a PCIe backplane, and we’re looking into its feasibility.

Is this still valid at version 2.x and above? I can see old support in the ML605, ALST4, ZC706, and PicoEVB platforms, but I think they’re all using uNOC. Was that bus shared between the control and data planes? Is it feasible that a Xilinx PCIe-to-AXI IP block together with a PCIe hard IP block could now support both the control and data planes, or does it require two connections (or PCIe for the data plane plus Ethernet for the control plane)?

Is there a better starting point for research somewhere, or documentation on uNOC and SDP?

Sorry, I see now that SDP is used on the PicoEVB - I guess that is the best example?

Yes, the PicoEVB is the most recent example of PCIe.

I brought this platform up within the last couple of months. Some changes were required that have not yet been mainlined.

Files · picoevb-fixes · OpenCPI / OpenCPI · GitLab

There’s also a known limitation: the BufferSize for transfers between the CPU and the FPGA needs to be set to less than 2048.

Thanks Aaron, I’ll have a look at it.

Is there an issue for this known limitation?

It’s something we’ve seen - having to set the BufferSize to 1920, otherwise performance drops significantly when moving data from the CPU to the FPGA - but we weren’t aware that it was a “known issue” with the transition.
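For anyone else who hits this, the workaround in the application XML looks something like the sketch below. It's only illustrative - the instance and component names are made up, and I'm assuming the BufferSize attribute can be placed on a connection port as described in the OpenCPI Application Development Guide:

```xml
<!-- Hypothetical OAS sketch: keep BufferSize under 2048 on the
     connection that crosses the PCIe boundary between the RCC (CPU)
     and HDL (FPGA) containers. Names here are illustrative only. -->
<Application>
  <Instance Component="ocpi.core.file_read" Name="src"/>     <!-- RCC side -->
  <Instance Component="ocpi.assets.example_worker" Name="dut"/> <!-- HDL side -->
  <Connection>
    <!-- 1920 is the value that worked for us; anything >= 2048 hurt throughput -->
    <Port Instance="src" Name="out" BufferSize="1920"/>
    <Port Instance="dut" Name="in"/>
  </Connection>
</Application>
```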

No, one needs to be created.

This is the closest one I could find, which references buffer sizes on the PicoEVB: Investigate cic_dec unit test failure on PicoEVB (#2682) · Issues · OpenCPI / OpenCPI · GitLab

Created the GitLab issue: PCIe Interconnect PL to PS Unable to Set BufferSize > 2048 (#3794) · Issues · OpenCPI / OpenCPI · GitLab
