I had success importing a skybox after making the DDS file using the NVIDIA Texture Tools. There we can select the format to be a cube texture, and the exported DDS was imported correctly in Flax. Maybe you can give it a try?
The imported CubeTexture does not seem to have “Radiance Convolution” mip filtered textures in it.
For people unfamiliar with this term: in Unity and Unreal, the imported CubeTexture contains 6 progressively blurred images, which are stored inside its mip levels. These textures are then sampled according to the object’s roughness (a smoother material shows a clearer, mirror-like reflection, while a rougher object has a more blurred reflection).
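To make the roughness-to-mip relationship concrete, here is a minimal sketch of how a renderer might pick a mip level of such a pre-convolved cube texture from a material's roughness. This is illustrative only, not the Flax/Unity/Unreal API, and real engines often use a non-linear mapping:

```python
def reflection_mip_level(roughness: float, num_mips: int) -> float:
    """Linearly map roughness in [0, 1] to a (fractional) mip index.

    Mip 0 holds the sharp, mirror-like image; the last mip holds the
    most blurred one. (Hypothetical helper, for illustration.)"""
    roughness = min(max(roughness, 0.0), 1.0)
    return roughness * (num_mips - 1)

# A mirror-like material samples the sharpest mip...
print(reflection_mip_level(0.0, 7))  # 0.0
# ...a fully rough material samples the blurriest...
print(reflection_mip_level(1.0, 7))  # 6.0
# ...and in-between roughness lands between mips (trilinear filtering
# then blends the two nearest levels).
print(reflection_mip_level(0.5, 7))  # 3.0
```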
I will create a different post regarding this issue, because the topic is not relevant to this one.
Yes, I agree that cube textures should be handled at import directly by the engine. I also noticed that we have almost no control over mipmap generation (unlike Unity): for example, some textures do not look good when mipmapped all the way down and should only have 2 or 3 mip levels (especially when you have alpha). I also saw that for anisotropic filtering we have to use a texture group, and the function is set at the shader level. Maybe the “Radiance Convolution” mipmap feature you are referring to is also part of the shader, or could be done in the shader?
Radiance convolution is actually an algorithm to “blur a cube map correctly according to its roughness” based on the BRDF function, so it’s not just applying a Gaussian blur, for example.
In practice, however, the algorithm is too slow to be suitable for realtime rendering.
If you’re interested, here is a paper about the algorithm written by Brian Karis of Unreal.
So, to compensate for the high cost of computing this convolution in realtime, the most common technique is to pre-calculate the convolved (blurred) textures and store them in the mip levels.
Then, using that texture, we can sample it manually (or let the engine do it automatically) to control how blurred the cube map appears on our object.
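The pre-calculation step can be sketched as building a mip chain where each level is blurred more than the last. This is a simplified illustration on one face of a cube map treated as a plain 2D grid; a real implementation importance-samples the GGX BRDF per mip (as in Karis’s paper), but here a simple box blur stands in so the structure stays readable. All names are illustrative:

```python
def box_blur(img, radius):
    """Average each texel with its neighbors within `radius` (clamped at edges)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

def build_prefiltered_mips(face, num_mips):
    """Mip 0 is the original face; each further mip is blurred more,
    standing in for the radiance convolution at increasing roughness."""
    mips = [face]
    for level in range(1, num_mips):
        mips.append(box_blur(face, radius=level))
    return mips

# 4x4 test face with a single bright texel (a small "light source").
face = [[0.0] * 4 for _ in range(4)]
face[1][1] = 16.0
mips = build_prefiltered_mips(face, 3)
# The bright spot spreads out at higher mips, like a rough reflection.
print(mips[0][1][1], mips[1][1][1], mips[2][1][1])
```

Doing this once at import (or bake) time means the shader only pays for a single textured lookup per pixel instead of the full convolution.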
So it is not ideal for us to compute the convolution inside the shader (although we could), because it would cost too much performance (unless you want to render a still image).
In Unity, this convolution texture is usually computed automatically when the texture is imported.