Guide: use Intel Optane Memory H10 with Rock 5B / PCIe splitting

The Optane Memory H10 is actually a QLC SSD and an Optane memory module placed on a single board, each occupying its own PCIe 3.0 x2 bus. There is no PCIe switch, mux, or cache controller on the board, just two components glued together.

By default the M.2 M-key slot is configured in PCIe 3.0 x4 mode, so only one component will be visible. In my case it was the Optane memory.

To switch the slot to PCIe 3.0 x2+x2 mode, refer to Rockchip_Developer_Guide_PCIe, Chapter 2.3.2, and change the DTS accordingly. Please note that the reset_gpio property for the pcie3x2 node should be set to GPIO1_B7, following the board design.
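For reference, the DTS change looks roughly like the sketch below. This is an illustration based on the Rockchip guide, not the exact diff: node names follow the rk3588/rock-5b DTS, and the correct bifurcation constant for `rockchip,pcie30-phymode` must be confirmed against the guide and `dt-bindings/phy/phy-snps-pcie3.h` in your kernel tree.

```dts
/* Sketch only: split the M.2 slot from one x4 port into two x2 ports.
 * Verify property names and the phy mode constant against
 * Rockchip_Developer_Guide_PCIe chapter 2.3.2 for your kernel. */
&pcie30phy {
    /* Default is PHY_MODE_PCIE_AGGREGATION (x4); pick the x2+x2
     * bifurcation mode constant per the guide. */
    rockchip,pcie30-phymode = <PHY_MODE_PCIE_NANBNB>;
    status = "okay";
};

&pcie3x4 {
    /* First x2 port: one half of the H10 enumerates here. */
    num-lanes = <2>;
    status = "okay";
};

&pcie3x2 {
    /* Second x2 port: per the Rock 5B design, reset is on GPIO1_B7. */
    reset-gpios = <&gpio1 RK_PB7 GPIO_ACTIVE_HIGH>;
    status = "okay";
};
```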

Alternatively, you can use this DTB I modified:

rk3588-rock-5b.dtb.zip (44.8 KB)


NOTE: The Optane H10 is power-hungry. If you encounter problems like power loss, please use a DC power supply instead of a PD supply!


Can you please share a diff of the pcie3x4 node in rk3588.dtsi, or is there more to adjust?

Sorry, but I have lost that file :upside_down_face:

I’m looking at using a 512GB Samsung 970 EVO Plus SSD.
I can get a 512GB Optane for a similar price.
How does the performance compare?
Some benchmarks show the Optane a fair bit slower!

Because the 970 EVO uses 4 lanes, while the Optane H10 is 2 devices with 2 lanes each. Unless you understand why you need Optane, use the Samsung.
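To put rough numbers on the lane difference, here is the theoretical one-direction ceiling of a PCIe 3.0 link (a back-of-the-envelope sketch that ignores protocol overhead beyond line encoding, not a benchmark):

```shell
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding.
# Approximate one-direction bandwidth in GB/s for a given lane count:
pcie3_gbps() {
  awk -v lanes="$1" 'BEGIN { printf "%.2f\n", 8e9 * 128/130 * lanes / 8 / 1e9 }'
}

pcie3_gbps 4   # prints 3.94 -- what an x4 drive like the 970 EVO Plus can use
pcie3_gbps 2   # prints 1.97 -- what each half of the H10 gets in x2+x2 mode
```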

what I thought,
thanks

I was just wondering about this the other day! Awesome, glad to see it works :smiley:

I’m trying to use MEMPEK1W016GA with Rock 5B (https://www.intel.com/content/www/us/en/products/sku/97544/intel-optane-memory-series-16gb-m-2-80mm-pcie-3-0-20nm-3d-xpoint/specifications.html).

I don’t think it has any NAND flash, but it only appears in lspci; nothing is listed in lsblk.

Any ideas how to fix this? I am using Armbian Jammy.

Is there anyone who could help?

Bumping this again. Please help me make the 16 GB optane module work. It would be ideal for keeping a Linux rootfs on.

For god’s sake, just buy a 256GB SSD, like a pm4a1.

Not listening, not listening, la la la. Well, I suppose that’s the charm of open source.

Of course I considered this, but Optane has advantages that normal SSDs don’t, like incredibly low latency. Anyway, I don’t think I need to explain why I want to use something that should work out of the box.

Strange, because I’m using a very similar drive (a 16GB Optane memory) and it does work out of the box. :neutral_face:

You can see the neofetch in this post: Arch Linux ARM bootable using latest BSP

I literally booted from that drive with an OS that is not even officially supported, and that was 3 months ago.

I really don’t know what the problem is with yours that’s causing the nvme driver not to recognize it as a drive.

Mine has a model number of MEMPEK1J016GA, with a label that suggests it’s a retail drive (it’s the M10 Optane). Yours appears to be an OEM drive. But there should not be much difference unless the OEM firmware behaves oddly.


Thank you!! Can you share what lsblk and lspci shows for you?

I’m not with my board currently, but the lspci output is included in the screenshot in that post.

lsblk should not show anything special; you should just see an nvme drive listed.

Could you check dmesg and see what happened?

Is your SSD broken/bricked?
Try `ls /dev/nvme*`
If it shows nvme0 but no nvme0n1, it is probably broken.
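To make that distinction concrete, here is a small sketch of the check as a shell function. `nvme_status` is a hypothetical helper; it takes a directory argument so it can be exercised against a scratch directory instead of the real /dev:

```shell
# Hypothetical helper: given a directory (normally /dev), report whether an
# NVMe controller node and a namespace node are present.
nvme_status() {
  dir="$1"
  if [ -e "$dir/nvme0n1" ]; then
    echo "namespace present"    # driver bound, drive exposes storage
  elif [ -e "$dir/nvme0" ]; then
    echo "controller only"      # enumerated but no namespace: likely drive/firmware fault
  else
    echo "nothing"              # not even enumerated: check the PCIe link / DTS
  fi
}

# Simulate the three cases with a scratch directory instead of /dev:
d=$(mktemp -d)
nvme_status "$d"                # prints: nothing
touch "$d/nvme0"
nvme_status "$d"                # prints: controller only
touch "$d/nvme0n1"
nvme_status "$d"                # prints: namespace present
rm -rf "$d"
```

On the real board you would simply run `nvme_status /dev`.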


Also, try checking with the `nvme` command.

I will try this at home, but from what I remember there isn’t anything starting with nvme in /dev. It would show up in lsblk too, I suppose. But there is nothing there at all.

@gnattu have you ever tried your Optane in a USB NVMe enclosure? Does any system detect the drive when it’s in one?

In the meantime, I found this: https://community.intel.com/t5/Intel-Optane-Memory/problem-with-intel-optane-memory/m-p/606138/highlight/true#M2974 - can it be related?

Yes, I did. I even flashed an image that way. If it is missing even with an enclosure, then your drive probably has some problem.