Introduction
This blog post continues where Part 1 left off. As a quick recap: we successfully brought up bootblock and romstage on the MSI PRO B850-P using coreboot, but ramstage halted waiting for CPU core initialization that depends on OpenSIL. We also mapped all USB, SATA, and PCIe ports during the hardware-topology discovery phase.
Before diving into the new content, there is one important development to mention: all Dasharo code has been rebased from coreboot 24.12 to 25.12. As this project targets a platform that is not yet upstream, working on a fork is unavoidable during active development. However, keeping the fork close to upstream is critical - a large rebase delta would make the eventual upstreaming effort much more painful. By rebasing early and often, we reduce the risk of merge conflicts accumulating and demonstrate that the code can track upstream changes without breaking. This is especially relevant for an AMD desktop board, where getting firmware into a release-ready state faster makes a real difference for users and the community. The rebase had no significant impact on functionality: the platform still boots to the same point as before, and the commit history is cleaner. The Gigabyte MZ33-AR1 also didn't experience any regression due to the rebase and will soon be ready for release in the Dasharo coreboot distribution.
In this post we will run through the following milestones:
Task 5. Port configuration in coreboot:

- Milestone a. USB & SATA devicetree descriptors: Populate the mainboard devicetree with USB and SATA MPIO entries derived from Task 4 data.
- Milestone b. PCIe devicetree descriptors: Add every root port, lane mask and GPIO reset required for full PCIe slot functionality; verify with enumeration tests.
All the code described in this post is part of the pull request on the Dasharo coreboot fork.
USB devicetree descriptors
With the USB topology map from Part 1 in hand, the next step is to translate
it into coreboot’s devicetree format so that coreboot can generate correct
ACPI tables describing the physical connector layout to the operating system.
This is done using ACPI_PLD (Physical Location of Device) descriptors.
The ACPI_PLD structure lets firmware communicate to the OS exactly where
each USB port is physically located on the board: which panel it is on
(front, rear, left, right, top, bottom, or unknown for internal), what type
of connector it uses, and its position relative to other connectors. Without
these descriptors, the OS has no standardized way to match a USB controller
port to a physical socket or header. This matters particularly for
orientation-sensitive connectors like USB-C, and for tools that report
port capabilities.
The SoC-connected USB controllers are configured with physical location descriptors derived from the mapping in Task 4. The four SoC xHCI/USB4 controllers have the following assignments:
- xhci_0 (SoC USB1, 0000:0e:00.3): rear panel USB-A 5 Gbps port and rear panel USB-C near the antenna connectors
- xhci_1 (SoC USB2, 0000:0e:00.4): internal Genesys GL3521 hub that drives the four rear panel USB 2.0 ports
- usb4_xhci_0 (SoC USB3, 0000:10:00.3): rear panel USB-A 10 Gbps port
- usb4_xhci_1 (SoC USB4, 0000:10:00.4): rear panel USB-C port adjacent to the HDMI output
Each port entry in devicetree.cb specifies the connector type via the
UPC_TYPE_* constant and its position on the rear panel via the ACPI_PLD
macro. Internal hub ports additionally receive UPC_TYPE_INTERNAL to
indicate they are not directly user-accessible. For example, xhci_0 carries
one descriptor for each of its rear USB-A and USB-C ports.
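A sketch of what such per-port entries can look like in devicetree.cb, using coreboot's drivers/usb/acpi chip driver - the port refs, descriptions, and PLD coordinates below are illustrative placeholders, not the exact values from the PR:

```
# Illustrative sketch - refs and positions are placeholders, see the PR
# for the actual entries.
device ref xhci_0 on
	chip drivers/usb/acpi
		register "desc" = ""Rear USB-A 5Gbps""
		register "type" = "UPC_TYPE_USB3_A"
		register "use_custom_pld" = "true"
		register "custom_pld" = "ACPI_PLD_TYPE_A(BACK, RIGHT, ACPI_PLD_GROUP(1, 1))"
		device ref usb3_port0 on end
	end
	chip drivers/usb/acpi
		register "desc" = ""Rear USB-C (near antenna)""
		register "type" = "UPC_TYPE_C_USB2_SS_SWITCH"
		register "use_custom_pld" = "true"
		register "custom_pld" = "ACPI_PLD_TYPE_C(BACK, RIGHT, ACPI_PLD_GROUP(1, 2))"
		device ref usb3_port1 on end
	end
end
```

The UPC_TYPE_* constant feeds the ACPI _UPC object (connector type) while the custom PLD macro feeds _PLD (physical location), which together give the OS the full picture of each connector.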
The ACPI_PLD_GROUP(group, port) argument lets the OS distinguish ports that
share the same physical panel position. The xhci_1 controller is backed by
the internal Genesys hub, so its root hub port is simply marked as internal
via UPC_TYPE_INTERNAL.
The two USB4 xHCI controllers follow the same pattern, with the panel
position matching their location on the rear I/O panel - usb4_xhci_1, for
example, describes the USB-C port next to the HDMI output, on the left side
of the rear panel.
SATA descriptors and a shift of scope
During the hardware-topology discovery phase we confirmed that Phoenix processors have no integrated SATA controller - all four SATA ports on the MSI PRO B850-P come from the Promontory B850 chipset. Unlike the Turin server platform where SATA lanes pass through the SoC’s MPIO/DXIO block and require SoC-level descriptors, the Phoenix desktop relies entirely on the chipset for SATA.
This means there are no SATA MPIO entries to add to the SoC's devicetree. Instead, the effort originally budgeted for SATA descriptors shifts to the chipset support milestone in Task 6. Port Phoenix AM5 specific code to OpenSIL:

- Milestone a. Port Promontory I/O expansion chipset support to OpenSIL: The Phoenix OpenSIL support only covers mobile Phoenix CPUs. Board designs with desktop CPUs also use the Promontory chipset to provide additional I/O expansion on the board. The goal is to add Promontory 21 initialization to Phoenix OpenSIL by adding a new IP block to OpenSIL, along with the Kconfig option needed to differentiate between mobile and desktop Phoenix CPUs and conditionally include the Promontory 21 IP block initialization.
SATA on the B850-P will therefore be enabled as a side effect of initializing the chipset, rather than through a standalone SoC descriptor change.
This, together with the USB devicetree descriptors described above, fulfills Task 5. Port configuration in coreboot - Milestone a. USB & SATA devicetree descriptors.
PCIe devicetree descriptors
Adding PCIe MPIO descriptors follows the same methodology that was used for
the Gigabyte MZ33-AR1.
The hardware topology data gathered in Part 1 maps directly to coreboot’s
MPIO_BRIDGE_* entries in devicetree.cb. Each entry describes one PCIe
root port: which GPIO drives its reset, which physical lane range it occupies,
and its maximum link width.
From the mapping results we have the following four ports to configure on the SoC itself:
| Bridge | Slot | DXIO lanes | Width |
|---|---|---|---|
| gpp_bridge_1_1 | PCIE_E1 full-length slot | 0-7 | x8 |
| gpp_bridge_1_2 | M2_1 M.2 slot | 16-19 | x4 |
| gpp_bridge_2_1 | Promontory B850 chipset uplink | 20-23 | x4 |
| gpp_bridge_2_2 | M2_2 M.2 slot | 24-27 | x4 |
PCIE0 drives the first two entries (bridges 1.1 and 1.2) and PCIE1
drives the remaining two (bridges 2.1 and 2.2). This matches what the AMD
XIO tool reported during mapping:
- PCIE_E1 appeared on bridge 0:0:1.1 with SERDES lanes 0-3 (the other half of the x8 slot occupies lanes 4-7 when a wider card is installed)
- M2_1 appeared on bridge 0:0:1.2 with SERDES lanes 8-11, which map to DXIO lanes 16-19
- The B850 chipset appeared on bridge 0:0:2.1 at the first four lanes of PCIE1, corresponding to DXIO lanes 20-23
- M2_2 appeared on bridge 0:0:2.2 at DXIO lanes 24-27
With the DXIO lane ranges from the topology discovery in Part 1 of this post
series, we could add chip drivers/amd/opensil/mpio blocks to devicetree.cb,
one per root port.
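A hedged sketch of what such blocks can look like - the register names here follow the MPIO driver pattern used for the Genoa/Turin ports and may differ slightly from the actual PR:

```
# Illustrative sketch - exact register names may differ in the PR.
chip drivers/amd/opensil/mpio # PCIE_E1 x8 slot
	register "type" = "IFTYPE_PCIE"
	register "start_lane" = "0"
	register "end_lane" = "7"
	device ref gpp_bridge_1_1 on end
end
chip drivers/amd/opensil/mpio # M2_1 M.2 slot
	register "type" = "IFTYPE_PCIE"
	register "start_lane" = "16"
	register "end_lane" = "19"
	device ref gpp_bridge_1_2 on end
end
```

Each block maps one root port to its DXIO lane range from the table above; the chipset uplink and M2_2 follow the same pattern with lanes 20-23 and 24-27.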
The remaining slots on the board and integrated peripherals (Ethernet, WiFi) are all behind the B850 chipset switch. Configuring them requires chipset-level initialization, which is deferred to Task 6 Milestone a responsible for adding Promontory I/O chipset initialization support.
This fulfills Task 5. Port configuration in coreboot - Milestone b. PCIe devicetree descriptors.
Fully proving that PCIe operates properly, however, may require further work in the following milestones:
Task 6. Port Phoenix AM5 specific code to OpenSIL:
- Milestone c. Cover Phoenix mobile and desktop silicon initialization differences in OpenSIL (NBIO, SMU, GFX)
- Milestone e. Cover Phoenix mobile and desktop silicon initialization differences in OpenSIL (MPIO, CXL)
The especially relevant parts will be NBIO and MPIO.
Integrating Phoenix OpenSIL
With the port descriptors in place the next natural step is to actually run
OpenSIL so that the CPU cores can be brought up and PCIe initialization can
proceed. AMD published the Phoenix OpenSIL Proof of Concept source on
GitHub, similarly to
how they did for Turin earlier. The integration follows the same approach used
for Turin/Genoa: the OpenSIL tree lives as a git submodule under
src/vendorcode/amd/opensil/phoenix_poc/ in the Dasharo coreboot fork.
However, integrating Phoenix OpenSIL turned out to require more work than integrating Genoa or Turin. The Phoenix PoC repository had a couple of rough edges that needed to be smoothed out before the build would even succeed.
Missing Kconfig infrastructure
The first blocker was a build error in OpenSIL's own configuration step.
OpenSIL uses Kconfig for its own internal configuration, and the phoenix_poc
branch was simply missing the genconfig.py utility and associated kconfig
scripts. This has been reported as
issue #42 in the upstream
OpenSIL repository. The fix - adding the missing utility to the tree - is
part of the 3mdeb OpenSIL
fork
used as the submodule source.
Application Processor launch fix
The second issue was a broken AP (Application Processor) launch in the
xUSL/CCX subsystem. Secondary CPU cores were not coming up during
ramstage initialization. This is the same class of problem that was
encountered on Turin and fixed in
PR #29 for that platform.
The Phoenix PoC needed an analogous fix, which has been applied in the 3mdeb
fork.
SIL_CONTEXT and interface differences
Beyond the build fixes, Phoenix OpenSIL differs from Genoa and Turin in a
number of interface details. The most notable is the requirement to pass a
SIL_CONTEXT structure to certain initialization calls. On Genoa and Turin
this context was implicit, but Phoenix makes it an explicit argument. The
coreboot-side glue code in src/vendorcode/amd/opensil/phoenix_poc/ -
covering ACPI, memory map, romstage, and ramstage hooks - had to be adapted to
provide and thread through this context wherever needed.
The romstage integration is relatively slim - it only needs to query the low
usable DRAM address from OpenSIL so coreboot can place cbmem. Because cbmem
is not available yet at this stage, the SIL_CONTEXT is stack-allocated with
a zeroed SilMemBaseAddress.
In ramstage the context is backed by a cbmem allocation that persists across
the three OpenSIL time points. The setup_opensil() function allocates the
buffer, queries the required size from OpenSIL, and then configures the
individual IP blocks before the first time point runs.
configure_usb() reads the devicetree enable/disable state of each USB
controller and mirrors it into OpenSIL's FCHUSB_INPUT_BLK, while
configure_ccx() sets up APIC mode, SVM extensions, and hands off the
microcode patch pointer (the microcode handling is reused from the Turin
integration). The three time points are then dispatched through a small
helper that handles the reset request return codes.
Finally, filter.h defines bitmasks for each OpenSIL subsystem and lets
individual modules be enabled independently via Kconfig, which was invaluable
during integration for understanding which IP blocks were running and where
things went wrong.
Proof of concept boot log
Despite the differences, the integration reached a working state where OpenSIL actually runs during ramstage. The full coreboot serial log is available here. It shows OpenSIL initializing and CCX (the CPU Complex block responsible for core topology and bring-up) progressing correctly: the CCX initialization completes and the APs are brought up successfully.
We can see that SIL:xUSL:InitializeCcxAndLaunchAps:614:Launch success.
appears exactly 11 times (12 threads minus the BSP thread), which indicates
all cores were brought up.
Other blocks such as FCH and memory initialization also run without errors.
The platform halts during PCIe initialization in OpenSIL. This is expected at this stage: Phoenix OpenSIL was originally written for mobile Phoenix CPUs, and the MSI PRO B850-P uses a desktop AM5 processor variant. The mobile and desktop parts share a large portion of the initialization flow, but there are differences in NBIO, CCX configuration, SMU interactions, and MPIO topology. Attempting to run mobile initialization code on a desktop CPU predictably causes problems once the silicon-specific paths diverge.
This is as far as we can go without adapting Phoenix OpenSIL to the desktop AM5 processor variant, but that is the job for the next milestones:
Task 6. Port Phoenix AM5 specific code to OpenSIL:

- Milestone b. Cover Phoenix mobile and desktop silicon initialization differences in OpenSIL (CCX, FCH, MEM)
- Milestone c. Cover Phoenix mobile and desktop silicon initialization differences in OpenSIL (NBIO, SMU, GFX)
- Milestone d. Cover Phoenix mobile and desktop silicon initialization differences in OpenSIL (DF, RcMgr, APOB)
- Milestone e. Cover Phoenix mobile and desktop silicon initialization differences in OpenSIL (MPIO, CXL)

All four milestones follow the same approach: most of the code is similar or identical between the desktop and mobile parts, but care must be taken with possible small differences, so the goal is to analyze and compare the desktop and mobile code paths for the blocks each milestone covers.
Completing these four milestones should bring OpenSIL to the point where it can successfully complete full CPU and PCIe initialization on the MSI PRO B850-P with a desktop Phoenix processor.
Upstreaming
All of the code developed so far lives on the Dasharo coreboot fork because the development pace and the need for a stable base with Turin support already integrated make it impractical to send every incremental patch directly to upstream coreboot Gerrit. Sending long patch relation chains also occupies the upstream Jenkins CI for extended periods.
Upstreaming is planned once the port reaches a mature, stable enough state, under the following milestones:
Task 9. Upstreaming:

- Milestone a. Initial patch series to coreboot Gerrit: Submit SoC changes and mainboard port; pass CI.
- Milestone b. OpenSIL contribution: Submit Turin-specific changes back to AMD's OpenSIL, track acceptance, and update documentation links.
Summary
This post covered three areas of progress for the MSI PRO B850-P coreboot port:
- USB physical location descriptors have been added to the devicetree for the four SoC-connected USB controllers, fulfilling Task 5 Milestone a for the USB part. SATA descriptor work has been rerouted to the Promontory chipset support milestone (Task 6 Milestone a) because all SATA on this board comes from the B850 chipset.
- PCIe MPIO descriptors for the SoC root ports have been added, fulfilling Task 5 Milestone b.
- Phoenix OpenSIL has been integrated as a vendorcode submodule with build fixes for missing Kconfig infrastructure and a broken AP launch, mirroring the kinds of fixes needed for Turin. The integration is far enough along to show OpenSIL running and reaching CCX initialization successfully, but the platform halts during PCIe initialization due to mobile-vs-desktop silicon differences - the subject of the next set of milestones.
Phoenix OpenSIL is still in the Proof of Concept stage and is not intended for production use - proceed at your own risk. Stay tuned for the next blog post as we work through the mobile-to-desktop adaptation of Phoenix OpenSIL.
Huge kudos to the NLnet Foundation for sponsoring the project.

For OEMs & ODMs
If you are an OEM or ODM and see the value in AMD OpenSIL support for your
products, our team can help make it a reality. Reach out to us via our
contact form or email us at
contact<at>3mdeb<dot>com to start the conversation.
Unlock the full potential of your hardware and secure your firmware with the
experts at 3mdeb! If you’re looking to boost your product’s performance and
protect it from potential security threats, our team is here to help. Schedule
a call with
us
or drop us an email at contact<at>3mdeb<dot>com to start unlocking the
hidden benefits of your hardware. And if you want to stay up-to-date on all
things firmware security and optimization, be sure to sign up for our
newsletter: