
I run an X server on a Windows 7 machine with OpenGL 4.4. From there I ssh -Y to a remote machine where I start an OpenGL application. (For what it's worth, the network connection is very fast; I have turned off compression and use the arcfour and blowfish-cbc ciphers for speed.)

glxgears runs, but not very smoothly, even though it reports 6000+ FPS.

However, MATLAB fails to use hardware OpenGL rendering. I read the docs, and they mention it requires OpenGL version 2.1. When I run glxinfo in the ssh terminal, it tells me:

```
GLX version: 1.4
OpenGL version string: 1.4 (4.4.0 - Build 10.18.15.4279)
```

I don't know the technical details of GLX, but does this mean that the OpenGL version supported over SSH is limited to 1.4? I understand that the latest version of GLX is quite old, compared to the progress of OpenGL.

Paul
  • Stack Overflow isn't the best place to look for system-setup answers, but there is already a good answer to your question: http://unix.stackexchange.com/a/60822/143474 – Pauli Nieminen Oct 26 '16 at 11:08
  • Thanks. That was indeed very helpful and sorry for posting to the wrong exchange. I've posted my follow-up question to [the unixes](http://unix.stackexchange.com/questions/319052/opengl-on-a-remote-machine). – Paul Oct 26 '16 at 11:49

1 Answer


I run an X server on a Windows 7 machine with OpenGL 4.4

The first problem starts with this. An X11 server on Windows is just another program running there, and ultimately it turns X11 commands into Win32 GDI calls. X11 itself does not "know" OpenGL; that's why the GLX extension exists. And GLX is an interesting beast: the X11 servers for Windows all implement only a very basic baseline of OpenGL commands, enough to support the essentials.
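
You can ask the forwarded display directly what it advertises. Here is a minimal sketch, assuming an X11/GLX development environment on the remote machine (the file name `glxcheck.c` is just a placeholder; compile with `cc glxcheck.c -lX11 -lGL`):

```c
/* Ask the X server (via $DISPLAY, i.e. the forwarded one) whether it
 * supports the GLX extension at all, and which GLX version it offers. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    int error_base, event_base;
    if (!glXQueryExtension(dpy, &error_base, &event_base)) {
        fprintf(stderr, "X server does not support GLX\n");
        return 1;
    }

    int major, minor;
    glXQueryVersion(dpy, &major, &minor);
    printf("GLX version: %d.%d\n", major, minor);

    XCloseDisplay(dpy);
    return 0;
}
```

This reports the same "GLX version: 1.4" line that glxinfo printed for you.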

But that's only half of your problem…

From there I ssh -Y to a remote machine where I start an OpenGL application.

Doing this kind of thing always invokes indirect rendering, where all commands have to be sent as a GLX opcode command stream. And unfortunately (for you) GLX opcodes have been specified only up to OpenGL-2.1, and full GLX support is mandatory only up to OpenGL-1.4. OpenGL-1.5 introduced vertex buffer objects, which add quite a lot of complications for indirect rendering contexts, so GLX implementations may opt not to support them for indirect rendering.
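
You can verify this from code: create a context over the forwarded display and ask GLX whether it is direct. A minimal sketch follows (error handling elided; the window is never mapped, it exists only so the context can be made current; compile with `cc glxdirect.c -lX11 -lGL`):

```c
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int attribs[] = { GLX_RGBA, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    /* Ask for a direct context; over SSH X forwarding the server
     * will hand back an indirect one regardless. */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);

    /* Throwaway window matching the chosen visual, so the context
     * can be made current. */
    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                   vi->visual, AllocNone);
    swa.border_pixel = 0;
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen),
                               0, 0, 16, 16, 0, vi->depth, InputOutput,
                               vi->visual, CWColormap | CWBorderPixel, &swa);

    glXMakeCurrent(dpy, win, ctx);
    printf("direct rendering: %s\n",
           glXIsDirect(dpy, ctx) ? "yes" : "no (GLX opcode stream)");
    printf("OpenGL version:   %s\n",
           (const char *)glGetString(GL_VERSION));

    glXMakeCurrent(dpy, None, NULL);
    glXDestroyContext(dpy, ctx);
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}
```

Run over the SSH-forwarded display, this should print "direct rendering: no" and an OpenGL version capped by what the Windows X server implements.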

On Linux, at least the proprietary NVidia drivers and client libraries have full indirect OpenGL-2.1 support. But the X11 server you're running on Windows, and likely its client library, don't.

datenwolf
  • That is very interesting. So GLX itself is actually not limited to 1.4, but nobody is interested in implementing the rest for Windows. Just thinking a little bit further here: NVidia might as well implement this for Windows without having to go through Win32 GDI? But I guess there is near-zero demand for this. – Paul Oct 27 '16 at 13:47
  • @Paul: Well, if NVidia had any intention of implementing this for Win32, they'd have to patch the whole X11 server. The X11 servers normally used on Windows don't allow loading of DDX modules, so there goes that port of entry. And when it comes to GLX, well, somebody would first have to bother submitting a standardization proposal for something like GLX-3 to Khronos, before it could be properly implemented. – datenwolf Oct 27 '16 at 13:58
  • @Paul: Truth be told, most of the time when people are using GL-3 or later, the usage pattern goes beyond what can be implemented with high performance through indirect GLX. Things like persistently mapped buffers, fences, sparse textures and so on would create a lot of protocol overhead. The whole idea behind OpenGL-3 onward was to reduce the total number of drawing calls and state switches by bringing the GPU and CPU closer together. Minimizing the number of draw calls would actually cater toward indirect rendering. The rest, not so much. – datenwolf Oct 27 '16 at 14:07
  • @Paul: My recommendation: use windowless off-screen rendering and only transfer the final picture. Depending on the amount of data you upload to the GPU per frame, this might even require less bandwidth than indirect GLX calls. Here's a nice article by NVidia on how to do off-screen GPU rendering without an X server, a feature that was long missing from their drivers (a minimal sketch of that approach follows below): https://devblogs.nvidia.com/parallelforall/egl-eye-opengl-visualization-without-x-server/ – datenwolf Oct 27 '16 at 14:09
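
As a rough illustration of the approach from that article (a sketch, not a drop-in solution): the code below uses the EGL_EXT_platform_device extension to pick a GPU without any X server, creates an off-screen pbuffer context, clears it and reads the frame back. The device index 0 and the 640×480 RGBA buffer are arbitrary placeholder choices; compile with `cc egl_offscreen.c -lEGL -lGL`:

```c
#include <stdio.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GL/gl.h>

int main(void)
{
    /* Enumerate GPU devices instead of connecting to an X display.
     * These entry points come from EGL_EXT_platform_device. */
    PFNEGLQUERYDEVICESEXTPROC eglQueryDevicesEXT =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC eglGetPlatformDisplayEXT =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");

    EGLDeviceEXT devices[8];
    EGLint num_devices = 0;
    eglQueryDevicesEXT(8, devices, &num_devices);

    EGLDisplay dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT,
                                              devices[0], NULL);
    eglInitialize(dpy, NULL, NULL);

    static const EGLint cfg_attribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    static const EGLint pbuf_attribs[] = {
        EGL_WIDTH, 640, EGL_HEIGHT, 480, EGL_NONE
    };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbuf_attribs);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(dpy, surf, surf, ctx);

    /* ... real drawing would go here; then read the finished frame
     * back and ship it to the client yourself. */
    static unsigned char pixels[640 * 480 * 4];
    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glReadPixels(0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    printf("%d device(s) found; first pixel channel after clear: %u\n",
           num_devices, pixels[0]);

    eglTerminate(dpy);
    return 0;
}
```

Because the context is created against a GPU device rather than a display connection, this runs on the remote machine's hardware at full speed; only the read-back image needs to cross the network.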