MSI PRO B850-P coreboot port: PCIe and USB descriptors and Phoenix OpenSIL

Introduction

This blog post continues where Part 1 left off. As a quick recap: we successfully brought up bootblock and romstage on the MSI PRO B850-P using coreboot, but ramstage halted waiting for CPU core initialization that depends on OpenSIL. We also mapped all USB, SATA, and PCIe ports during the hardware-topology discovery phase.

Before diving into the new content, there is one important development to mention: all Dasharo code has been rebased from coreboot 24.12 to 25.12. As this project targets a platform that is not yet upstream, working on a fork is unavoidable during active development. However, keeping the fork close to upstream is critical - a large rebase delta would make the eventual upstreaming effort much more painful. By rebasing early and often, we reduce the risk of merge conflicts accumulating and demonstrate that the code can track upstream changes without breaking. This is especially relevant for an AMD desktop board, where getting firmware into a release-ready state faster makes a real difference for users and the community. The rebase had no significant impact on functionality: the platform still boots to the same point as before, and the commit history is cleaner. The Gigabyte MZ33-AR1 also showed no regressions after the rebase and will soon be ready for release in the Dasharo coreboot distribution.

In this post we will run through the following milestones:

Task 5. Port configuration in coreboot:

  • Milestone a. USB & SATA devicetree descriptors:

    Populate the mainboard devicetree with USB and SATA MPIO entries derived from Task 4 data.

  • Milestone b. PCIe devicetree descriptors:

    Add every root port, lane mask and GPIO reset required for full PCIe slot functionality; verify with enumeration tests.

All the code described in this post is part of the pull request on the Dasharo coreboot fork.

USB devicetree descriptors

With the USB topology map from Part 1 in hand, the next step is to translate it into coreboot’s devicetree format so that coreboot can generate correct ACPI tables describing the physical connector layout to the operating system. This is done using ACPI_PLD (Physical Location of Device) descriptors.

The ACPI_PLD structure lets firmware communicate to the OS exactly where each USB port is physically located on the board: which panel it is on (front, rear, left, right, top, bottom, or unknown for internal), what type of connector it uses, and its position relative to other connectors. Without these descriptors, the OS has no standardized way to match a USB controller port to a physical socket or header. This matters particularly for orientation-sensitive connectors like USB-C, and for tools that report port capabilities.

The SoC-connected USB controllers are configured with physical location descriptors derived from the mapping in Task 4. The four SoC xHCI/USB4 controllers have the following assignments:

  • xhci_0 (SoC USB1, 0000:0e:00.3): rear panel USB-A 5 Gbps port and rear panel USB-C near the antenna connectors
  • xhci_1 (SoC USB2, 0000:0e:00.4): internal Genesys GL3521 hub that drives the four rear panel USB 2.0 ports
  • usb4_xhci_0 (SoC USB3, 0000:10:00.3): rear panel USB-A 10 Gbps port
  • usb4_xhci_1 (SoC USB4, 0000:10:00.4): rear panel USB-C port adjacent to the HDMI output

Each port entry in devicetree.cb specifies the connector type via the UPC_TYPE_* constant and its position on the rear panel via the ACPI_PLD macro. Internal hub ports additionally receive UPC_TYPE_INTERNAL to indicate they are not directly user-accessible. For example, xhci_0 with its rear USB-A and USB-C ports looks like this:

device ref xhci_0 on
    chip drivers/usb/acpi
        device ref xhci_0_root_hub on
            chip drivers/usb/acpi
                register "desc" = ""Rear USB3 Type-A 5G""
                register "type" = "UPC_TYPE_USB3_A"
                register "use_custom_pld" = "true"
                register "custom_pld" = "ACPI_PLD_TYPE_A(BACK, RIGHT, ACPI_PLD_GROUP(4, 1))"
                device ref usb3_port2 on end
            end
            chip drivers/usb/acpi
                register "desc" = ""Rear USB-C""
                register "type" = "UPC_TYPE_C_USB2_SS"
                register "use_custom_pld" = "true"
                register "custom_pld" = "ACPI_PLD_TYPE_C(BACK, RIGHT, ACPI_PLD_GROUP(2, 1))"
                device ref usb3_port3 on end
            end
            chip drivers/usb/acpi
                register "desc" = ""Rear USB3 Type-A 5G""
                register "type" = "UPC_TYPE_USB3_A"
                register "use_custom_pld" = "true"
                register "custom_pld" = "ACPI_PLD_TYPE_A(BACK, RIGHT, ACPI_PLD_GROUP(4, 1))"
                device ref usb2_port2 on end
            end
            chip drivers/usb/acpi
                register "desc" = ""Rear USB-C""
                register "type" = "UPC_TYPE_C_USB2_SS"
                register "use_custom_pld" = "true"
                register "custom_pld" = "ACPI_PLD_TYPE_C(BACK, RIGHT, ACPI_PLD_GROUP(2, 1))"
                device ref usb2_port3 on end
            end
        end
    end
end

The ACPI_PLD_GROUP(group, port) argument lets the OS distinguish ports that share the same physical panel position. The xhci_1 controller is backed by the internal Genesys hub, so its root hub port is simply marked as internal:

device ref xhci_1 on
    chip drivers/usb/acpi
        device ref xhci_1_root_hub on
            chip drivers/usb/acpi
                register "type" = "UPC_TYPE_INTERNAL"
                device ref usb2_port7 on end
            end
        end
    end
end

The two USB4 xHCI controllers follow the same pattern, with the panel position matching their location on the rear I/O panel. For example, usb4_xhci_1 (the USB-C port next to the HDMI output, on the left side of the rear panel):

chip drivers/usb/acpi
    device ref usb4_xhci_1_root_hub on
        chip drivers/usb/acpi
            register "desc" = ""Rear USB-C""
            register "type" = "UPC_TYPE_C_USB2_SS"
            register "use_custom_pld" = "true"
            register "custom_pld" = "ACPI_PLD_TYPE_C(BACK, LEFT, ACPI_PLD_GROUP(1, 1))"
            device ref usb3_port1 on end
        end
        chip drivers/usb/acpi
            register "desc" = ""Rear USB-C""
            register "type" = "UPC_TYPE_C_USB2_SS"
            register "use_custom_pld" = "true"
            register "custom_pld" = "ACPI_PLD_TYPE_C(BACK, LEFT, ACPI_PLD_GROUP(1, 1))"
            device ref usb2_port1 on end
        end
    end
end

SATA descriptors and a shift of scope

During the hardware-topology discovery phase we confirmed that Phoenix processors have no integrated SATA controller - all four SATA ports on the MSI PRO B850-P come from the Promontory B850 chipset. Unlike the Turin server platform where SATA lanes pass through the SoC’s MPIO/DXIO block and require SoC-level descriptors, the Phoenix desktop relies entirely on the chipset for SATA.

This means there are no SATA MPIO entries to add to the SoC’s devicetree. Instead, the effort originally budgeted for SATA descriptors shifts to the chipset support milestone in Task 6. Port Phoenix AM5 specific code to OpenSIL:

  • Milestone a. Port Promontory I/O expansion chipset support to OpenSIL

    The Phoenix OpenSIL support currently covers only mobile Phoenix CPUs. Board designs with desktop CPUs also use the Promontory chipset to provide additional I/O expansion on the board. The goal is to add Promontory 21 initialization to Phoenix OpenSIL by adding a new IP block, together with a Kconfig option that differentiates between mobile and desktop Phoenix CPUs so that the Promontory 21 IP block initialization can be included conditionally.

SATA on the B850-P will therefore be enabled as a side effect of initializing the chipset, rather than through a standalone SoC descriptor change.

This, together with the explanations in the USB devicetree descriptors section, fulfills Task 5. Port configuration in coreboot - Milestone a. USB & SATA devicetree descriptors.

PCIe devicetree descriptors

Adding PCIe MPIO descriptors follows the same methodology that was used for Gigabyte MZ33-AR1. The hardware topology data gathered in Part 1 maps directly to coreboot’s MPIO_BRIDGE_* entries in devicetree.cb. Each entry describes one PCIe root port: which GPIO drives its reset, which physical lane range it occupies, and its maximum link width.

From the mapping results we have the following four ports to configure on the SoC itself:

Bridge           Slot                             DXIO lanes   Width
gpp_bridge_1_1   PCIE_E1 full-length slot         0-7          x8
gpp_bridge_1_2   M2_1 M.2 slot                    16-19        x4
gpp_bridge_2_1   Promontory B850 chipset uplink   20-23        x4
gpp_bridge_2_2   M2_2 M.2 slot                    24-27        x4

PCIE0 drives the first two entries (bridges 1.1 and 1.2) and PCIE1 drives the remaining two (bridges 2.1 and 2.2). This matches what the AMD XIO tool reported during mapping:

  • PCIE_E1 appeared on bridge 0:0:1.1 with SERDES lanes 0-3 (and the other half of the x8 slot occupying lanes 4-7 when a wider card is installed)
  • M2_1 appeared on bridge 0:0:1.2 with SERDES lanes 8-11, which map to DXIO lanes 16-19
  • The B850 chipset appeared on bridge 0:0:2.1 at the first four lanes of PCIE1, corresponding to DXIO lanes 20-23
  • M2_2 appeared on bridge 0:0:2.2 at DXIO lanes 24-27

With the DXIO lane ranges from the topology discovery in Part 1 of this series, we can add chip drivers/amd/opensil/mpio blocks in devicetree.cb as follows:

# PCIE_E1: x8 PCIe slot, DXIO lanes 0-7 (P0 segment)
chip drivers/amd/opensil/mpio
    register "type" = "IFTYPE_PCIE"
    register "start_lane" = "0"
    register "end_lane" = "7"
    device ref gpp_bridge_1_1 on end
end

# M2_1: x4 M.2 slot, DXIO lanes 16-19 (G0 segment)
chip drivers/amd/opensil/mpio
    register "type" = "IFTYPE_PCIE"
    register "start_lane" = "16"
    register "end_lane" = "19"
    device ref gpp_bridge_1_2 on end
end

# Promontory B850 chipset: x4 uplink, DXIO lanes 20-23 (G0 segment)
chip drivers/amd/opensil/mpio
    register "type" = "IFTYPE_PCIE"
    register "start_lane" = "20"
    register "end_lane" = "23"
    device ref gpp_bridge_2_1 on end
end

# M2_2: x4 M.2 slot, DXIO lanes 24-27 (G0 segment)
chip drivers/amd/opensil/mpio
    register "type" = "IFTYPE_PCIE"
    register "start_lane" = "24"
    register "end_lane" = "27"
    device ref gpp_bridge_2_2 on end
end

The remaining slots on the board and the integrated peripherals (Ethernet, WiFi) all sit behind the B850 chipset switch. Configuring them requires chipset-level initialization, which is deferred to Task 6 Milestone a, responsible for adding Promontory I/O chipset initialization support.

This fulfills Task 5. Port configuration in coreboot - Milestone b. PCIe devicetree descriptors.

However, proving that PCIe operates properly may require further work in the following milestones:

Task 6. Port Phoenix AM5 specific code to OpenSIL:

  • Milestone c. Cover Phoenix mobile and desktop silicon initialization differences in OpenSIL (NBIO, SMU, GFX)
  • Milestone e. Cover Phoenix mobile and desktop silicon initialization differences in OpenSIL (MPIO, CXL)

The especially relevant parts will be NBIO and MPIO.

Integrating Phoenix OpenSIL

With the port descriptors in place the next natural step is to actually run OpenSIL so that the CPU cores can be brought up and PCIe initialization can proceed. AMD published the Phoenix OpenSIL Proof of Concept source on GitHub, similarly to how they did for Turin earlier. The integration follows the same approach used for Turin/Genoa: the OpenSIL tree lives as a git submodule under src/vendorcode/amd/opensil/phoenix_poc/ in the Dasharo coreboot fork.

However, integrating Phoenix OpenSIL turned out to require more work than integrating Genoa or Turin. The Phoenix PoC repository had a couple of rough edges that needed to be smoothed out before even the build would succeed.

Missing Kconfig infrastructure

The first blocker was a build error:

util/kconfig/lib/genconfig.py [Errno 2] No such file or directory

OpenSIL uses Kconfig for its own internal configuration, and the phoenix_poc branch was simply missing the genconfig.py utility and associated kconfig scripts. This has been reported as issue #42 in the upstream OpenSIL repository. The fix - adding the missing utility to the tree - is part of the 3mdeb OpenSIL fork used as the submodule source.

Application Processor launch fix

The second issue was a broken AP (Application Processor) launch in the xUSL/CCX subsystem. Secondary CPU cores were not coming up during ramstage initialization. This is the same class of problem that was encountered on Turin and fixed in PR #29 for that platform. The Phoenix PoC needed an analogous fix, which has been applied in the 3mdeb fork.

SIL_CONTEXT and interface differences

Beyond the build fixes, Phoenix OpenSIL differs from Genoa and Turin in a number of interface details. The most notable is the requirement to pass a SIL_CONTEXT structure to certain initialization calls. On Genoa and Turin this context was implicit, but Phoenix makes it an explicit argument. The coreboot-side glue code in src/vendorcode/amd/opensil/phoenix_poc/ - covering ACPI, memory map, romstage, and ramstage hooks - had to be adapted to provide and thread through this context wherever needed.

The romstage integration is relatively slim - it only needs to query the low usable DRAM address from OpenSIL so coreboot can place cbmem. Because cbmem is not available yet at this stage, the SIL_CONTEXT is stack-allocated with a zeroed SilMemBaseAddress:

uintptr_t opensil_get_low_usable_dram_address(void)
{
    SIL_CONTEXT SilContext = {
        .ApobBaseAddress = CONFIG_PSP_APOB_DRAM_ADDRESS,
        .SilMemBaseAddress = 0 /* cbmem can't be ready now */
    };

    uintptr_t low_usable_dram_addr = xPrfGetLowUsableDramAddress(&SilContext);
    printk(BIOS_DEBUG, "xPrfGetLowUsableDramAddress: 0x%lx\n",
           low_usable_dram_addr);

    return low_usable_dram_addr;
}

In ramstage the context is backed by a cbmem allocation that persists across the three OpenSIL time points. The setup_opensil() function allocates the buffer, queries the required size from OpenSIL, and then configures the individual IP blocks before the first time point runs:

void setup_opensil(void)
{
    SIL_CONTEXT SilContext;
    const size_t mem_req = xSimQueryMemoryRequirements();
    void *buf = cbmem_add(CBMEM_ID_AMD_OPENSIL, mem_req);

    if (!buf)
        die("Could not allocate OpenSIL memory in cbmem!");

    SilContext.ApobBaseAddress = CONFIG_PSP_APOB_DRAM_ADDRESS;
    SilContext.SilMemBaseAddress = (uintptr_t)buf;

    const SIL_STATUS assign_mem_ret =
        xSimAssignMemoryTp1(&SilContext, mem_req);
    SIL_STATUS_report("xSimAssignMemory", assign_mem_ret);

    setup_rc_manager_default(&SilContext);
    configure_usb(&SilContext);
    configure_ccx(&SilContext);
}

configure_usb() reads the devicetree enable/disable state of each USB controller and mirrors it into OpenSIL’s FCHUSB_INPUT_BLK, while configure_ccx() sets up APIC mode, SVM extensions, and hands off the microcode patch pointer (microcode reused from Turin integration). The three time points are then dispatched through a small helper that handles the reset request return codes:

static void opensil_entry(SIL_TIMEPOINT timepoint)
{
    SIL_STATUS ret;
    SIL_TIMEPOINT tp = (uintptr_t)timepoint;
    SIL_CONTEXT SilContext;
    void *buf = cbmem_find(CBMEM_ID_AMD_OPENSIL);

    if (!buf)
        die("OpenSIL cbmem memory not found!\n");

    SilContext.ApobBaseAddress = CONFIG_PSP_APOB_DRAM_ADDRESS;
    SilContext.SilMemBaseAddress = (uintptr_t)buf;

    switch (tp) {
    case SIL_TP1:
        ret = InitializeAMDSiTp1(&SilContext);
        break;
    case SIL_TP2:
        ret = InitializeAMDSiTp2(&SilContext);
        break;
    case SIL_TP3:
        ret = InitializeAMDSiTp3(&SilContext);
        break;
    default:
        printk(BIOS_ERR, "Unknown openSIL timepoint\n");
        return;
    }
    char opensil_function[32]; /* large enough for "InitializeAMDSiTpN" */
    snprintf(opensil_function, sizeof(opensil_function), "InitializeAMDSiTp%d", tp + 1);
    SIL_STATUS_report(opensil_function, ret);
    if (ret == SilResetRequestColdImm || ret == SilResetRequestColdDef) {
        printk(BIOS_INFO, "openSIL requested a cold reset\n");
        do_cold_reset();
    } else if (ret == SilResetRequestWarmImm || ret == SilResetRequestWarmDef) {
        printk(BIOS_INFO, "openSIL requested a warm reset\n");
        do_warm_reset();
    }
}

Finally, filter.h defines bitmasks for each OpenSIL subsystem and lets individual modules be enabled independently via Kconfig, which was invaluable during integration to understand which IP blocks were running and where things went wrong:

#define DEBUG_FILTER_APOB   0x00000001UL
#define DEBUG_FILTER_NBIO   0x00000002UL
#define DEBUG_FILTER_CCX    0x00000004UL
#define DEBUG_FILTER_SMU    0x00000008UL
#define DEBUG_FILTER_DF     0x00000010UL
#define DEBUG_FILTER_MPIO   0x00000020UL
#define DEBUG_FILTER_MEM    0x00000040UL
#define DEBUG_FILTER_FCH    0x00000080UL
#define DEBUG_FILTER_RAS    0x00000100UL
#define DEBUG_FILTER_CXL    0x00000200UL
#define DEBUG_FILTER_RCMGR  0x00000800UL
#define DEBUG_FILTER_GFX    0x00001000UL

#define SIL_DEBUG(topic) \
    (CONFIG(OPENSIL_DEBUG_##topic) ? DEBUG_FILTER_##topic : 0)

#define SIL_DEBUG_MODULE_FILTER ( \
    SIL_DEBUG(APOB) | \
    SIL_DEBUG(NBIO) | \
    SIL_DEBUG(CCX) | \
    SIL_DEBUG(SMU) | \
    SIL_DEBUG(DF) | \
    SIL_DEBUG(MPIO) | \
    SIL_DEBUG(MEM) | \
    SIL_DEBUG(FCH) | \
    SIL_DEBUG(RAS) | \
    SIL_DEBUG(CXL) | \
    SIL_DEBUG(RCMGR) | \
    SIL_DEBUG(GFX))

Proof of concept boot log

Despite the differences, the integration reached a working state where OpenSIL actually runs during ramstage. The full coreboot serial log is available here. It shows OpenSIL initializing and CCX (the CPU Complex block responsible for core topology and bring-up) progressing correctly - for example the CCX initialization completes and APs are brought up successfully:

[DEBUG]  SIL:xUSL:InitializeCcxAndLaunchAps:534:Launching APs
[DEBUG]  SIL:xUSL:InitializeCcxAndLaunchAps:536:    System has 1 Socket(s)
[DEBUG]  SIL:xUSL:InitializeCcxAndLaunchAps:542:    Socket 0 has 1 Die(s)
...
[DEBUG]  SIL:xUSL:InitializeCcxAndLaunchAps:549:    Socket 0 Die 0 has 1 CCD(s)
[DEBUG]  SIL:xUSL:InitializeCcxAndLaunchAps:555:    Socket 0 Die 0 CCD 0 has 1 CCX(s)
[DEBUG]  SIL:xUSL:InitializeCcxAndLaunchAps:567:    Socket 0 Die 0 CCD 0 CCX 0 has 6 Core(s)
[DEBUG]  SIL:xUSL:InitializeCcxAndLaunchAps:588:    Socket 0 Die 0 CCD 0 CCX 0 Core 0 has 2 Thread(s)
[DEBUG]  SIL:xUSL:InitializeCcxAndLaunchAps:599:Launch socket 0 die 0 ccd 0 complex 0 core 0 thread 1
...
SIL:xUSL:InitializeCcxAndLaunchAps:614:Launch success.  Launching next AP.

We can see that SIL:xUSL:InitializeCcxAndLaunchAps:614:Launch success. appears exactly 11 times (12 threads minus the BSP thread), which indicates all cores were brought up.

Other blocks such as FCH and memory initialization also run without errors.

The platform halts during PCIe initialization in OpenSIL. This is expected at this stage: Phoenix OpenSIL was originally written for mobile Phoenix CPUs, and the MSI PRO B850-P uses a desktop AM5 processor variant. The mobile and desktop parts share a large portion of the initialization flow, but there are differences in NBIO, CCX configuration, SMU interactions, and MPIO topology. Attempting to run mobile initialization code on a desktop CPU predictably causes problems once the silicon-specific paths diverge.

This is as far as we can go without adapting Phoenix OpenSIL to the desktop AM5 processor variant, but that is the job for the next milestones:

Task 6. Port Phoenix AM5 specific code to OpenSIL:

  • Milestone b. Cover Phoenix mobile and desktop silicon initialization differences in OpenSIL (CCX, FCH, MEM)

    Most of the code is similar or identical between desktop and mobile parts. However, care must be taken for possible small differences. The goal is to analyze and compare the desktop and mobile differences. This milestone covers CCX, FCH and MEM blocks.

  • Milestone c. Cover Phoenix mobile and desktop silicon initialization differences in OpenSIL (NBIO, SMU, GFX)

    Most of the code is similar or identical between desktop and mobile parts. However, care must be taken for possible small differences. The goal is to analyze and compare the desktop and mobile differences. This milestone covers NBIO, SMU and GFX blocks.

  • Milestone d. Cover Phoenix mobile and desktop silicon initialization differences in OpenSIL (DF, RcMgr, APOB)

    Most of the code is similar or identical between desktop and mobile parts. However, care must be taken for possible small differences. The goal is to analyze and compare the desktop and mobile differences. This milestone covers DF, RcMgr and APOB blocks.

  • Milestone e. Cover Phoenix mobile and desktop silicon initialization differences in OpenSIL (MPIO, CXL)

    Most of the code is similar or identical between desktop and mobile parts. However, care must be taken for possible small differences. The goal is to analyze and compare the desktop and mobile differences. This milestone covers MPIO and CXL blocks.

Completing these four milestones should bring OpenSIL to the point where it can successfully complete full CPU and PCIe initialization on the MSI PRO B850-P with a desktop Phoenix processor.

Upstreaming

All of the code developed so far lives on the Dasharo coreboot fork because the development pace and the need for a stable base with Turin support already integrated make it impractical to send every incremental patch directly to upstream coreboot Gerrit. Sending long patch relation chains also occupies the upstream Jenkins CI for extended periods.

Upstreaming is planned once the port reaches a mature, stable enough state, under the following milestones:

Task 9. Upstreaming:

  • Milestone a. Initial patch series to coreboot Gerrit

    Submit SoC changes and mainboard port; pass CI.

  • Milestone b. OpenSIL contribution

    Submit Phoenix-specific changes back to AMD’s OpenSIL, track acceptance, and update documentation links.

Summary

This post covered three areas of progress for the MSI PRO B850-P coreboot port:

  1. USB physical location descriptors have been added to the devicetree for the four SoC-connected USB controllers, fulfilling Task 5 Milestone a for the USB part. SATA descriptor work has been rerouted to the Promontory chipset support milestone (Task 6a) because all SATA on this board comes from the B850 chipset.

  2. PCIe MPIO descriptors for the SoC root ports have been added, fulfilling Task 5 Milestone b.

  3. Phoenix OpenSIL has been integrated as a vendorcode submodule with build fixes for missing Kconfig infrastructure and a broken AP launch, mirroring the kinds of fixes needed for Turin. The integration is far enough along to show OpenSIL running and reaching CCX initialization successfully, but the platform halts during PCIe initialization due to mobile-vs-desktop silicon differences - the subject of the next set of milestones.

Phoenix OpenSIL is still in the Proof of Concept stage and is not intended for production use - proceed at your own risk. Stay tuned for the next blog post as we work through the mobile-to-desktop adaptation of Phoenix OpenSIL.

Huge kudos to the NLnet Foundation for sponsoring the project.


For OEMs & ODMs

If you are an OEM or ODM and see the value in AMD OpenSIL support for your products, our team can help make it a reality. Reach out to us via our contact form or email us at contact<at>3mdeb<dot>com to start the conversation.

Unlock the full potential of your hardware and secure your firmware with the experts at 3mdeb! If you’re looking to boost your product’s performance and protect it from potential security threats, our team is here to help. Schedule a call with us or drop us an email at contact<at>3mdeb<dot>com to start unlocking the hidden benefits of your hardware. And if you want to stay up-to-date on all things firmware security and optimization, be sure to sign up for our newsletter:


Michał Żygowski
Firmware Engineer with networking background. Feels comfortable with low-level development using C/C++ and assembly. Interested in advanced hardware features, security and coreboot. Core developer of coreboot. Maintainer of Braswell SoC, PC Engines, Protectli and Libretrend platforms.