User’s Guide for VirtualGL 3.1.1
Intended audience: System Administrators, Graphics Programmers, Researchers, and others with knowledge of Linux or Unix operating systems, OpenGL, GLX or EGL, and the X Window System.
This document and all associated illustrations are licensed under the Creative Commons Attribution 2.5 License. Any works that contain material derived from this document must cite The VirtualGL Project as the source of the material and list the current URL for the VirtualGL web site.
The official VirtualGL binaries contain libjpeg-turbo, which is based in part on the work of the Independent JPEG Group.
The VirtualGL server components include software developed by the FLTK Project and distributed under the terms of the FLTK License.
VirtualGL is licensed under the wxWindows Library License, v3.1, a derivative of the GNU Lesser General Public License (LGPL), v2.1.
This document assumes that VirtualGL will be installed in the default directory (/opt/VirtualGL). If your installation of VirtualGL resides in a different directory, then adjust the instructions accordingly.
The EGL back end uses the EGL_EXT_platform_device extension, so it accesses a GPU through an associated EGL device without the need for a 3D X server. The EGL back end emulates a subset of the GLX API using the EGL API. Since the EGL API does not support multi-buffering (double buffering or quad-buffered stereo) with off-screen surfaces, the EGL back end emulates multi-buffering using OpenGL renderbuffer objects (RBOs.) As of this writing, the EGL back end does not yet support all of the GLX extensions and esoteric OpenGL features that the GLX back end supports.
If the 3D application is using the GLX API, then VirtualGL reads back and transports a rendered frame if the application calls glXSwapBuffers() while rendering to the back buffer or if the application calls glFinish(), glXWaitGL(), or (optionally) glFlush() while rendering to the front buffer. If the 3D application is using the EGL/X11 API, then VirtualGL reads back and transports a rendered frame if the application calls eglSwapBuffers().
VirtualGL is an open source toolkit that gives any Linux or Unix remote display software the ability to run OpenGL applications with full hardware acceleration. Some remote display software cannot be used with OpenGL applications at all. Other remote display software forces OpenGL applications to use a slow, software-only renderer, to the detriment of performance as well as compatibility. The traditional method of displaying OpenGL applications to an X server on a different machine (indirect rendering) supports hardware acceleration, but this approach requires that all of the OpenGL commands and 3D data be sent over the network to be rendered. That is not a tenable proposition unless the 3D data is relatively small and static, unless the network is very fast, and unless the OpenGL application is specifically tuned for a remote X Window System (“X”) environment.
With VirtualGL, the OpenGL commands and 3D data are instead redirected to a GPU in the application server, and only the rendered frames are sent over the network. VirtualGL thus virtualizes GPU hardware, allowing it to be co-located in the “cold room” with compute and storage resources. VirtualGL also allows GPUs to be shared among multiple users, and it provides “workstation-like” levels of performance on 100-megabit and faster networks. This makes it possible for large, noisy, hot 3D workstations to be replaced with laptops or even thinner clients. More importantly, however, VirtualGL eliminates the workstation and the network as barriers to data size. Users can now visualize huge amounts of data in real time without needing to copy any of the data over the network or sit in front of the machine that is rendering the data.
Normally, a Un*x OpenGL application sends all of its graphics rendering commands and data, both 2D and 3D, to an X server, which may be located across the network from the application server. VirtualGL employs a technique called “split rendering” to redirect the 3D commands and data from the OpenGL application to a GPU in the application server. VGL accomplishes this by pre-loading a dynamic shared object (DSO), the VirtualGL Faker, into the OpenGL application at run time. The VirtualGL Faker intercepts and modifies certain GLX, EGL, OpenGL, X11, and XCB function calls in order to divert OpenGL rendering from the 3D application’s windows into corresponding off-screen buffers, which VGL creates in GPU memory on the application server. When the 3D application swaps the OpenGL drawing buffers or flushes the OpenGL command buffer to indicate that it has finished rendering a frame, VirtualGL reads back the rendered frame from the off-screen buffer and transports it (which normally involves delivering the frame to the 2D X server and compositing it into the 3D application’s window.)
The beauty of this approach is its non-intrusiveness. VirtualGL monitors a few X11 commands and events to determine when windows have been resized, etc., but it does not interfere in any way with the delivery of X11 2D drawing commands to the X server. For the most part, VGL does not interfere with the delivery of OpenGL commands to the GPU, either. VGL merely forces the OpenGL commands to be delivered to a GPU in the application server (through the 3D X server or EGL device attached to the GPU) rather than to the X server to which the 2D drawing commands are delivered (the 2D X server.) Once the OpenGL rendering has been redirected to an off-screen buffer, everything (including esoteric OpenGL extensions, fragment/vertex shaders, etc.) should “just work.” If an OpenGL application runs correctly when accessing a 3D server/workstation locally, then the same application should run correctly with VirtualGL when accessing the same machine remotely.
VirtualGL has two built-in “image transports” that can be used to deliver rendered frames to the 2D X server:

1. The VGL Transport encodes and/or compresses rendered frames on the VirtualGL server and sends them over a dedicated TCP socket to the VirtualGL Client application (vglclient) running on the client machine, which decodes and/or decompresses the frames and draws them into the appropriate X window.
2. The X11 Transport draws rendered frames into the 3D application’s windows using XPutImage() or similar X11 commands. This is most useful in conjunction with an X proxy, which can be one of any number of Un*x remote display applications, such as VNC. When using the X11 Transport, VirtualGL does not normally perform any image compression or encoding itself. It instead relies on an X proxy to encode the frames and deliver them to the client(s). Since the use of an X proxy eliminates the need to send X11 commands over the network, this is the recommended method for using VirtualGL over high-latency or low-bandwidth networks.
The XV Transport, described in Chapter 10, is a variant of the X11 Transport.
VirtualGL also provides an API that can be used to develop custom image transport plugins.
 | Server (x86) | Server (x86-64, AArch64) | Client (if using the VGL Transport)
---|---|---|---
Recommended CPU | For optimal performance, the CPU should support SSE2 extensions; dual processors or dual cores recommended | Dual processors or dual cores recommended | For optimal performance, the CPU should support SSE2 extensions.
Graphics | AMD or nVidia GPU | AMD or nVidia GPU | Any graphics adapter with decent 2D performance
O/S | VirtualGL should work with a variety of Linux distributions, FreeBSD, and Solaris, but currently-supported versions of Red Hat Enterprise Linux and its derivatives, Ubuntu LTS, and SuSE Linux Enterprise tend to receive the most attention from the VirtualGL community. | |
Other Software | X server configured to export True Color (24-bit or 32-bit) visuals | |
 | Client (if using the VGL Transport)
---|---
CPU | 64-bit Intel or Apple silicon required
O/S | OS X/macOS 10.9 “Mavericks” or later (Intel); macOS 11 “Big Sur” or later (Apple silicon)
Other Software | XQuartz 2.8.0 or later
 | Client (if using the VGL Transport)
---|---
Recommended CPU | For optimal performance, the CPU should support SSE2 extensions.
Graphics | Any graphics adapter with decent 2D performance
Other Software |
The client requirements do not apply to anaglyphic stereo. See Chapter 16 for more details.
 | Server (GLX back end) | Server (EGL back end) | Client (VGL Transport required)
---|---|---|---
Linux/Unix | GPU that supports stereo | No additional requirements | GPU that supports stereo
Mac/x86 | N/A | N/A | GPU that supports stereo (example: nVidia Quadro)
Windows | N/A | N/A | This version of VirtualGL does not support quad-buffered stereo with Windows clients.
VirtualGL must be installed on any machine that will act as a VirtualGL server or as a client for the VGL Transport. It is not necessary to install VirtualGL on the client if using VNC or another type of X proxy.
If you are installing VirtualGL onto a fresh server, and you also intend to install the nVidia proprietary drivers, install VirtualGL prior to the nVidia drivers. Otherwise, installing VirtualGL may trigger an installation of Mesa, which can modify the libGL symlinks that the nVidia drivers created.
If you want to run both 32-bit and 64-bit OpenGL applications with VirtualGL on 64-bit x86 Linux systems, then you will need to install both VirtualGL-3.1.1.x86_64.rpm and VirtualGL-3.1.1.i386.rpm, or both virtualgl_3.1.1_amd64.deb and virtualgl32_3.1.1_amd64.deb. (virtualgl32_3.1.1_amd64.deb is a supplementary package that contains only the 32-bit server components.)
rpm -e VirtualGL --allmatches
yum install VirtualGL*.rpm

rpm -e VirtualGL --allmatches
dnf install VirtualGL*.rpm

rpm -e VirtualGL --allmatches
yast2 --install VirtualGL*.rpm

rpm -e VirtualGL --allmatches
rpm -i VirtualGL*.rpm

dpkg -i virtualgl*.deb
apt install -f
Use Cygwin Setup to install the VirtualGL package.
If you are using a platform for which there is not a pre-built VirtualGL binary package available, then download the VirtualGL source tarball (VirtualGL-3.1.1.tar.gz) from the Releases area of the VirtualGL GitHub project page, uncompress it, cd VirtualGL-3.1.1, and read the contents of BUILDING.md for further instructions on how to build and install VirtualGL from source.
As root, issue one of the following commands:
rpm -e VirtualGL
You may need to add --allmatches to the RPM command line if you have installed both the 32-bit and 64-bit VirtualGL RPMs.
dpkg -r virtualgl

If you have also installed the 32-bit supplementary package:
dpkg -r virtualgl32
Open the Uninstall VirtualGL application, located in the VirtualGL Applications folder. You can also open a terminal and execute:
sudo /opt/VirtualGL/bin/uninstall
Use Cygwin Setup to uninstall the VirtualGL package.
Before configuring VirtualGL, you should first ensure that:
1. The appropriate GPU drivers have been installed on the machine. With few exceptions, you should install the drivers supplied by your GPU vendor rather than the drivers supplied by your O/S distribution. See Section 4.1.
If you intend to configure the machine for use with the GLX back end, then you should also ensure that:
1. The 3D X server has been configured to use the GPU drivers you installed above.
2. The machine has an appropriate display manager (such as GDM, KDM, or LightDM) installed and has been configured to start the display manager and 3D X server at boot time. This is the default with most modern Linux and Unix distributions.
On Wayland-enabled Linux machines running GDM, configuring the machine for use with the GLX back end will disable the ability to log in locally with a Wayland session. In general, logging in locally once the machine has been configured for use with the GLX back end is discouraged, as this could disrupt VirtualGL’s connection to the 3D X server and thus cause OpenGL applications running with VirtualGL to abort or freeze.
3. Accelerated OpenGL is working properly in the 3D X server. (You can verify this by running glxinfo against the 3D X server.)
More specific instructions are unfortunately outside of the scope of this guide, since they will vary from system to system.
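A minimal sketch of such a check (this assumes the 3D X server is display :0 and that the current user is authorized to access it):

/opt/VirtualGL/bin/glxinfo -display :0 | grep -E "direct rendering|OpenGL vendor|OpenGL renderer"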
VirtualGL requires access to a GPU in the application server so that it can create off-screen buffers and redirect the 3D rendering from X windows into these buffers. When using the GLX back end, accessing a GPU requires going through an X server attached to that GPU (the 3D X server), so the only way to share the application server’s GPU(s) among multiple users is to grant those users access to the 3D X server.
It is important to understand the security risks associated with this. Once a user has access to the 3D X server, there is nothing that would prevent the user from logging keystrokes or reading back images from that X server. Using xauth, one can obtain “untrusted” X authentication keys that prevent such exploits, but unfortunately, those untrusted keys also disallow access to the 3D hardware. Thus, it is necessary to grant full, trusted access to the 3D X server for any users that will need to use the GLX back end. Unless you fully trust the users to whom you are granting this access, you should avoid logging in locally to the 3D X server (particularly as root) unless absolutely necessary (logging in locally to the 3D X server is discouraged anyhow, for reasons explained in the previous section.)
This section will explain how to configure a VirtualGL server such that selected users can use the GLX back end, even if the server is sitting at the login prompt.
Shut down the display manager. Depending on the O/S, this is accomplished with one of the following commands:

init 3
service lightdm stop
/usr/local/etc/rc.d/gdm stop
svcadm disable gdm
Next, run the vglserver_config script as root:

/opt/VirtualGL/bin/vglserver_config

Select Option 1 (Configure server for use with VirtualGL (GLX + EGL back ends).)
Restrict 3D X server access to vglusers group (recommended)? [Y/n]

If you respond “Yes”, then only users in the vglusers group can use the GLX back end (the configuration script will create the vglusers group if it doesn’t already exist.) This is the most secure option, since it prevents any users outside of the vglusers group from accessing (and thus exploiting) the 3D X server.

If you respond “No”, then all users of this machine can use the GLX back end, and the only remaining safeguard for the 3D X server is to disable the XTEST extension (see below.)
Restrict framebuffer device access to vglusers group (recommended)? [Y/n]

If you respond “Yes”, then only users in the vglusers group can run OpenGL applications on the VirtualGL server (the configuration script will create the vglusers group if it doesn’t already exist.) This limits the possibility that an unauthorized user could snoop the 3D framebuffer device(s) and thus see (or alter) the output of a 3D application that is being used with VirtualGL.
If you are using a recent release of GDM, then the gdm account must be added to the vglusers group.

If you do not want to allow users outside of the vglusers group to log in locally to this server and run OpenGL applications, then this option must be selected.
Disable XTEST extension (recommended)? [Y/n]

Disabling XTEST will not prevent a user from logging keystrokes or reading images from the 3D X server, but if a user has access to the 3D X server, disabling XTEST will prevent them from inserting keystrokes or mouse events and thus hijacking local X sessions on that X server.
If you are using GDM 2.14 through 2.20, it will be necessary to run gdmsetup and manually add an argument of -tst to the X server command line to disable XTEST for the first time. After this, vglserver_config should be able to disable and enable XTEST properly.

GDM 2.22 and later no longer provide a means of editing the X server command line, so disabling XTEST will not work. The only known alternative as of this writing is to use a different display manager.
x11vnc and x0vncserver both require XTEST, so if you need to attach a VNC server to the 3D X server, then it is necessary to answer “No” (and thus leave XTEST enabled.)
If you chose to restrict 3D X server or framebuffer device access to the vglusers group, then edit /etc/group and add root to the vglusers group. If you choose, you can also add additional users to the group at this time. Note that any user you add to vglusers must log out and back in again before their new group permissions will take effect.
Restart the display manager:

init 5
service lightdm start
/usr/local/etc/rc.d/gdm start
svcadm enable gdm
To verify that the application server is ready to be used with the GLX back end, log out of the server, log back into the server using SSH, and execute the following commands in the SSH session:
If you restricted 3D X server access to the vglusers group:

xauth merge /etc/opt/VirtualGL/vgl_xauth_key
xdpyinfo -display :0
/opt/VirtualGL/bin/glxinfo -display :0 -c

Otherwise:

xdpyinfo -display :0
/opt/VirtualGL/bin/glxinfo -display :0 -c
Both commands should output a list of visuals and should complete with no errors. If you chose to disable the XTEST extension, then check the output of the xdpyinfo command to verify that XTEST does not show up in the list of extensions. You should also examine the output of the glxinfo command to ensure that at least one of the visuals is 24-bit or 32-bit TrueColor and has Pbuffer support (the latter is indicated by a P in the last column.) Example:
   visual  x   bf lv rg d st  colorbuffer  ax dp st accumbuffer  ms  cav drw
 id dep cl sp  sz l  ci b ro  r  g  b  a F bf th cl  r  g  b  a ns b eat typ
------------------------------------------------------------------------------
0x151 24 tc  0 32  0 r  y .   8  8  8  0 .  4 24  8 16 16 16 16  0 0 None PXW
If none of the visuals has Pbuffer support, then this is most likely because there is no 3D acceleration, which is most likely because the correct GPU drivers are not installed (or are misconfigured.) Lack of 3D acceleration is also typically indicated by the word Mesa in the client GLX vendor string and/or the OpenGL vendor string, and the words Software Rasterizer in the OpenGL renderer string.
When using the EGL back end, the only way to share the application server’s GPU(s) among multiple users is to grant those users access to the device(s) associated with the GPU(s).
This section will explain how to configure a VirtualGL server such that selected users can use the EGL back end.
Run the vglserver_config script as root:

/opt/VirtualGL/bin/vglserver_config

Select Option 3 (Configure server for use with VirtualGL (EGL back end only).)
Restrict framebuffer device access to vglusers group (recommended)? [Y/n]

If you respond “Yes”, then only users in the vglusers group can run OpenGL applications on the VirtualGL server (the configuration script will create the vglusers group if it doesn’t already exist.) This limits the possibility that an unauthorized user could snoop the 3D framebuffer device(s) and thus see (or alter) the output of a 3D application that is being used with VirtualGL.
If you do not want to allow users outside of the vglusers group to log in locally to this server and run OpenGL applications, then this option must be selected.
If you chose to restrict framebuffer device access to the vglusers group, then edit /etc/group and add root to the vglusers group. If you choose, you can also add additional users to the group at this time. Note that any user you add to vglusers must log out and back in again before their new group permissions will take effect.
To verify that the application server is ready to be used with the EGL back end, log out of the server, log back into the server using SSH, and execute the following command in the SSH session:
/opt/VirtualGL/bin/eglinfo egl0
This command should output a list of EGL configs and should complete with no errors.
VirtualGL can redirect the OpenGL commands from a 3D application to any GPU in the VirtualGL server. In order for this to work with the GLX back end, however, all of the GPUs must be attached to different screens on the same X server or to different X servers. Attaching them to different screens is the easiest and most common approach, and this allows a specific GPU to be selected by setting VGL_DISPLAY to (or invoking vglrun -d with) :0.0, :0.1, :0.2, etc. If the GPUs are attached to different X servers, then a specific GPU can be selected by setting VGL_DISPLAY to (or invoking vglrun -d with) :0.0, :1.0, :2.0, etc.
Setting VGL_DISPLAY to (or invoking vglrun -d with) a DRI device path (/dev/dri/card0, /dev/dri/card1, /dev/dri/card2, etc.) or an EGL device ID (egl0, egl1, egl2, etc.) enables the EGL back end and selects the specified GPU. See Section 19.1 for more details.
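For example (a sketch that assumes a server with at least two GPUs):

# GLX back end: select the GPU attached to screen 1 of the 3D X server
vglrun -d :0.1 3D-application-executable-or-script
# EGL back end: select the second GPU by DRI device path or by EGL device ID
VGL_DISPLAY=/dev/dri/card1 vglrun 3D-application-executable-or-script
vglrun -d egl1 3D-application-executable-or-script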
If you intend to use the VGL Transport, then the application server’s SSH daemon should have the X11Forwarding option enabled and the UseLogin option disabled. This is configured in sshd_config, which is usually located under /etc/ssh.
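For reference, a minimal sketch of the relevant sshd_config settings (X11Forwarding is a standard OpenSSH option; UseLogin exists only in older OpenSSH releases and is already disabled by default where it does exist):

X11Forwarding yes
UseLogin no

Restart or reload the SSH daemon after changing these settings.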
You can use the vglserver_config script to restore the security settings that were in place before VirtualGL was installed. Option 2 (Unconfigure server for use with VirtualGL (GLX + EGL back ends)) will remove any shared access to the 3D X server and thus prevent VirtualGL from accessing a GPU in that manner. Additionally, this option will re-enable the XTEST extension on the 3D X server and will restore the framebuffer device permissions to their default (by default, only root or the user that is currently logged into the application server locally can access the framebuffer devices.)
After selecting Option 2, you must restart the display manager before the changes will take effect.
Option 4 (Unconfigure server for use with VirtualGL (EGL back end only)) will restore the framebuffer device permissions to their default.
Unconfiguring the server does not remove the vglusers group.
The VirtualGL Client can take advantage of the MIT-SHM
extension in Cygwin/X to accelerate the compositing of rendered frames
into the 3D application’s windows. This can significantly improve
the end-to-end performance of VirtualGL when using the VGL Transport
over a local-area network.
To enable MIT-SHM in Cygwin/X:

1. Run cygserver-config.
2. Answer “yes” when asked “Do you want to install cygserver as service?”
3. Run net start cygserver to start the cygserver service.
4. Run xdpyinfo and verify that MIT-SHM appears in the list of X extensions.
This mode is recommended for use only on secure local-area networks. The X11 traffic is encrypted, but the VGL Transport is left unencrypted.
/opt/VirtualGL/bin/vglconnect user@server

Replace user with your username on the VirtualGL server and server with the hostname or IP address of that server.
/opt/VirtualGL/bin/vglrun [vglrun options] 3D-application-executable-or-script [arguments]

Consult Chapter 19 for more information on vglrun command-line options.
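As a concrete sketch (hypothetical username and hostname; glxgears stands in for any OpenGL application):

/opt/VirtualGL/bin/vglconnect jdoe@vglserver.example.com
/opt/VirtualGL/bin/vglrun glxgears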
Both the VGL Transport and the X11 traffic are tunneled through SSH when using this mode, and thus it provides a completely secure solution. It is also useful when either the client or the VirtualGL server is behind a restrictive firewall and only SSH connections are allowed through.
The procedure for this mode is identical to the procedure for the VGL Transport with X11 forwarding, except that you should pass a -s argument to vglconnect when connecting to the server:
/opt/VirtualGL/bin/vglconnect -s user@server
vglconnect
will make two SSH connections into the server,
the first to find an open port on the server and the second to create
the SSH tunnel for the VGL Transport and open the secure shell. Because
of Cygwin limitations, when connecting from a Windows client, it will be
necessary to enter your SSH password twice unless you are using an SSH
agent to enable password-less logins.
vglconnect -s
can be used to create multi-layered SSH
tunnels. For instance, if the VirtualGL server is not directly
accessible from the Internet, then you can run
vglconnect -s
on the client to connect to an SSH
gateway server, then you can run vglconnect -s
again
on the gateway server to connect to the VirtualGL server (application
server.) Both the X11 traffic and the VGL Transport will be forwarded
from the VirtualGL server through the gateway and to the client.
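For example, a minimal sketch with hypothetical hostnames gateway and appserver:

# On the client: connect to the SSH gateway server
/opt/VirtualGL/bin/vglconnect -s user@gateway
# In the shell that opens on the gateway: connect to the application server
/opt/VirtualGL/bin/vglconnect -s user@appserver
# In the shell that opens on the application server: launch the 3D application
/opt/VirtualGL/bin/vglrun 3D-application-executable-or-script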
The VirtualGL Client application (vglclient) receives encoded and/or compressed frames on a dedicated TCP socket, decodes and/or decompresses the frames, and draws the frames into the appropriate X window. The vglconnect script wraps both vglclient and SSH to greatly simplify the process of creating VGL Transport connections.
vglconnect invokes vglclient with an argument of -detach, which causes the VirtualGL Client to completely detach from the console and run as a background daemon. It will remain running silently in the background, accepting VGL Transport connections for the X server on which it was started, until that X server is reset or until the VirtualGL Client process is explicitly killed. Logging out of the X server will reset the X server and thus kill all VirtualGL Client instances that are attached to it. You can also explicitly kill all instances of the VirtualGL Client running under your user account by invoking vglclient -kill (vglclient is installed in /opt/VirtualGL/bin by default.)
vglconnect instructs the VirtualGL Client to redirect all of its console output to a log file named ~/.vgl/vglconnect-hostname-display.log, where hostname is the name of the computer on which vglconnect was invoked and display is the display name of the X server on which the VirtualGL Client was started (read from the DISPLAY environment variable or passed to vglconnect using the -display argument.) In the event that something goes wrong, this log file is the first place to check.
When the VirtualGL Client successfully starts on a given X server, it
stores its listener port number in a root window property on the X
server. If other VirtualGL Client instances attempt to start on the
same X server, they read the X window property, determine that another
VirtualGL Client instance is already running, and exit to allow the
first instance to retain control. The VirtualGL Client will clean up
the X property under most circumstances, even if it is explicitly
killed. However, under rare circumstances (if sent a SIGKILL signal,
for instance), a VirtualGL Client instance may exit uncleanly and leave
the X property set. In these cases, it may be necessary to add an
argument of -force
to vglconnect
the next time
you use it. This tells vglconnect
to start a new VirtualGL
Client instance, regardless of whether the VirtualGL Client thinks that
there is already an instance running on this X server. Alternately, you
can simply reset the X server to clear the orphaned X window property.
To retain compatibility with previous versions of VirtualGL, the first VirtualGL Client instance on a given machine will attempt to listen on port 4242. If it fails to obtain that port, because another application or another VirtualGL Client instance is already using it, then the VirtualGL Client will try to obtain a free port in the range of 4200-4299. Failing that, it will request a free port from the operating system.
In a nutshell: if you only ever plan to run one X server at a time on the client, which means that you’ll only ever need one instance of the VirtualGL Client at a time, then it is sufficient to open inbound port 4242 in the client’s firewall. If you plan to run multiple X servers on the client, which means that you will need to run multiple VirtualGL Client instances, then you may wish to open ports 4200-4299. Similarly, if you are running the VirtualGL Client on a multi-user X proxy server that has a firewall, then you may wish to open ports 4200-4299 in the server’s firewall. Opening ports 4200-4299 will accommodate up to 100 separate VirtualGL Client instances. More instances than that cannot be accommodated on a firewalled machine, unless the firewall is able to create rules based on application executables instead of listening ports.
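As an illustrative sketch only (this assumes a client or X proxy host that uses firewalld; adapt to whatever firewall tool the machine actually uses):

sudo firewall-cmd --permanent --add-port=4200-4299/tcp
sudo firewall-cmd --reload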
Note that it is not necessary to open any inbound ports in the firewall to use the VGL Transport with SSH Tunneling.
The VGL Transport is a good solution for using VirtualGL over a fast network. However, the VGL Transport is not generally suitable for high-latency or low-bandwidth networks, due to its reliance on the X11 protocol to send the non-OpenGL elements of the 3D application’s GUI. The VGL Transport also requires an X server to be running on the client, which makes it generally more difficult to deploy and use on Windows clients. VirtualGL can be used with an X proxy to overcome these limitations. An X proxy acts as a virtual X server, receiving X11 commands from the 3D application (and from VirtualGL), rendering the X11 commands into images, compressing the resulting images, and sending the compressed images over the network to a client or clients. X proxies perform well on all types of networks, including high-latency and low-bandwidth networks. They often provide rudimentary collaboration capabilities, allowing multiple clients to simultaneously view the same X session and pass around control of the keyboard and mouse. X proxies are also stateless, meaning that the client can disconnect and reconnect at will from any machine on the network, and the 3D application will remain running on the server.
Since VirtualGL is sending rendered frames to the X proxy at a very fast rate, the proxy must be able to compress the frames very quickly in order to keep up. Unfortunately, however, most X proxies can’t. They simply aren’t designed to compress, with any degree of performance, the large and complex images generated by 3D applications. Therefore, The VirtualGL Project provides an optimized X proxy called TurboVNC, a high-speed VNC (Virtual Network Computing) variant that is designed specifically to achieve high levels of performance with VirtualGL. More information about TurboVNC, including instructions for using it with VirtualGL, can be found in the TurboVNC User’s Guide.
Many other X proxy solutions work well with VirtualGL, and some of these solutions provide compelling features (seamless windows, for instance), but none of these X proxies matches the performance of TurboVNC, as of this writing.
The most common (and optimal) way to use VirtualGL with an X proxy is to set up both on the same server. This allows VirtualGL to send rendered frames to the X proxy through shared memory rather than over a network.
With this configuration, you can usually invoke
/opt/VirtualGL/bin/vglrun [vglrun options] 3D-application-executable-or-script [arguments]
from a terminal inside of the X proxy session, and it will “just work.” VirtualGL reads the value of the DISPLAY environment variable to determine whether to enable the X11 Transport by default. If DISPLAY begins with a colon (:) or with unix:, then VirtualGL will assume that the 2D X server is on the same machine and will enable the X11 Transport as the default. In some cases, however, the DISPLAY environment variable in the X proxy session may not begin with a colon or unix:. In these cases, it is necessary to manually enable the X11 Transport by setting the VGL_COMPRESS environment variable to proxy or by passing an argument of -c proxy to vglrun.
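For instance, run either of the following inside the X proxy session to force the X11 Transport:

export VGL_COMPRESS=proxy
vglrun 3D-application-executable-or-script

or, equivalently:

vglrun -c proxy 3D-application-executable-or-script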
If the X proxy and VirtualGL are running on different servers, then it is desirable to use the VGL Transport to send rendered frames from the VirtualGL server to the X proxy. It is also desirable to disable image compression in the VGL Transport. Otherwise, the frames would have to be compressed by the VirtualGL server, decompressed by the VirtualGL Client, then recompressed by the X proxy, which is a waste of CPU resources. However, sending images uncompressed over a network requires a fast network (generally, Gigabit Ethernet or faster), so there needs to be a fast link between the VirtualGL server and the X proxy server for this procedure to perform well.
The procedure for using the VGL Transport to display 3D applications from a VirtualGL server to an X proxy on a different machine is the same as the procedure for using the VGL Transport to display 3D applications from a VirtualGL server to a client-side 2D X server, with the following exceptions:
It is necessary to set the VGL_COMPRESS environment variable to rgb or to pass an argument of -c rgb to vglrun when launching VirtualGL. Otherwise, VirtualGL will detect that the 2D X server is on a different machine, and it will automatically try to enable JPEG compression.
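For example, on the VirtualGL server:

vglrun -c rgb 3D-application-executable-or-script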
The X Video extension allows applications to pre-encode or pre-compress images and send them through the X server to the graphics adapter, which presumably has on-board video decoding capabilities. This approach greatly reduces the CPU resources used by the X server, which can be beneficial if the X server is running on a different machine than the application.
In the case of VirtualGL, what this means is that the client no longer has to decode or decompress rendered frames from the 3D application. It can simply pass the frames along to the graphics adapter for decoding.
VirtualGL supports the X Video extension in two ways:
Setting the VGL_COMPRESS environment variable to yuv or passing an argument of -c yuv to vglrun enables the VGL Transport with YUV encoding. When this mode is enabled, VirtualGL encodes rendered frames as YUV420P (a form of YUV encoding that uses 4X chrominance subsampling and separates Y, U, and V components into separate image planes) instead of RGB or JPEG. The YUV420P-encoded frames are sent to the VirtualGL Client, which draws them to the 2D X server using the X Video extension.
On a per-frame basis, YUV encoding uses about half the server CPU time as JPEG compression and only slightly more server CPU time than RGB encoding. On a per-frame basis, YUV encoding uses about 1/3 the client CPU time as JPEG compression and about half the client CPU time as RGB encoding. YUV encoding also uses about half the network bandwidth (per frame) as RGB.
However, since YUV encoding uses 4X chrominance subsampling, the encoded frames may contain some visible artifacts. In particular, narrow, aliased lines and other sharp features may appear “soft”.
Setting the VGL_COMPRESS environment variable to xv or passing an argument of -c xv to vglrun enables the XV Transport. The XV Transport is a special flavor of the X11 Transport that encodes rendered frames as YUV420P and draws them directly to the 2D X server using the X Video extension. This is mainly useful in conjunction with X proxies that support the X Video extension. The idea is that, if the X proxy is going to have to transcode the frame into YUV anyhow, VirtualGL may be faster at doing this, since it has a SIMD-accelerated YUV encoder.
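To summarize the two X Video paths:

# VGL Transport with YUV encoding (frames drawn by the VirtualGL Client via X Video)
vglrun -c yuv 3D-application-executable-or-script
# XV Transport (frames drawn directly to the 2D X server or X proxy via X Video)
vglrun -c xv 3D-application-executable-or-script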
VirtualGL 2.2 (and later) includes an API that allows you to write your own image transports. Thus, you can use VirtualGL for doing split rendering and framebuffer readback but then use your own library for delivering the rendered frames to the client.
When the VGL_TRANSPORT environment variable (or the -trans option to vglrun) is set to {t}, then VirtualGL will look for a DSO (dynamic shared object) with the name libvgltrans_{t}.so in the dynamic linker path and will attempt to access a set of API functions from this library. The functions that the plugin library must export are defined in /opt/VirtualGL/include/rrtransport.h, and an example of their usage can be found in server/testplugin.cpp and server/testplugin2.cpp in the VirtualGL source distribution. The former wraps the VGL Transport as an image transport plugin, and the latter does the same for the X11 Transport.
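For example, if a hypothetical plugin library named libvgltrans_mytrans.so is somewhere in the dynamic linker path, it can be selected like this:

vglrun -trans mytrans 3D-application-executable-or-script

or, equivalently:

VGL_TRANSPORT=mytrans vglrun 3D-application-executable-or-script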
vglrun can be used to launch either binary executables or shell scripts, but there are a few things to keep in mind when launching a shell script with vglrun. When you launch a shell script with vglrun, the VirtualGL Faker (libvglfaker.so) and dlopen() interposer (libdlfaker.so) will be preloaded into every executable that the script launches. Normally this is innocuous, but if the script calls any executables that have the setuid and/or setgid permission bits set, then the dynamic linker will refuse to preload the faker libraries into those executables. One of the following warnings will be printed for each setuid/setgid executable that the script tries to launch:
ERROR: ld.so: object 'libvglfaker.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libdlfaker.so' from LD_PRELOAD cannot be preloaded: ignored.
ld.so.1: warning: libvglfaker.so: open failed: No such file in secure directories
ld.so.1: warning: libdlfaker.so: open failed: No such file in secure directories
These are just warnings, and the setuid/setgid executables will continue
to run (without VirtualGL preloaded into them.) However, if you want to
get rid of the warnings, an easy way to do so is to simply edit the
application script and make it store the value of the
LD_PRELOAD
environment variable until right before the 3D
application executable is launched. For instance, consider the
following 3D application script:
#!/bin/sh
setuid-executable
3D-application-executable
You could modify the script as follows:
#!/bin/sh
LD_PRELOAD_SAVE=$LD_PRELOAD
LD_PRELOAD=
export LD_PRELOAD

setuid-executable

LD_PRELOAD=$LD_PRELOAD_SAVE
export LD_PRELOAD

3D-application-executable
This procedure may be necessary to work around certain other interaction issues between VirtualGL and the launch scripts of specific 3D applications. See Application Recipes for more details.
If the 3D application that you are intending to run with VirtualGL is itself a setuid/setgid executable, then further steps are required. Otherwise, the 3D application will launch without VirtualGL preloaded into it. Forcing VirtualGL to be preloaded into setuid/setgid executables has security ramifications, so please be aware of these before you do it. By applying one of the following workarounds, you are essentially telling the operating system that you trust the security and stability of VirtualGL as much as you trust the security and stability of the operating system. While we’re flattered, we’re not sure that we’re necessarily deserving of that accolade, so if you are in a security-critical environment, apply the appropriate level of paranoia here.
To force VirtualGL to be preloaded into setuid/setgid executables on Linux, you have to first make sure that the faker libraries are installed in the system library path (usually /usr/lib, /usr/lib64, /usr/lib32, or /usr/lib/i386-linux-gnu). Next, make the faker libraries setuid executables. To do this, run the following commands as root:
chmod u+s /usr/lib/libvglfaker.so
chmod u+s /usr/lib/libdlfaker.so
where lib is lib, lib64, lib32, or lib/i386-linux-gnu, depending on your system.
On Solaris, you can force VirtualGL to be preloaded into setuid/setgid executables by adding the VirtualGL library directories to the Solaris “secure path.” Solaris keeps a tight lid on what goes into /usr/lib and /lib, and by default, it will only allow libraries in those paths to be preloaded into an executable that is setuid and/or setgid. Generally, 3rd party packages are forbidden from installing anything into /usr/lib or /lib, but you can use the crle utility to add other directories to the operating system’s list of secure paths. In the case of VirtualGL, you would execute one of the following commands (as root):
crle -u -s /opt/VirtualGL/lib32
crle -64 -u -s /opt/VirtualGL/lib64
VirtualBox is an enterprise-class, open source virtualization solution that supports hardware-accelerated OpenGL in Windows and Linux guests running on Windows, Mac, Linux, and Solaris/x86 hosts. 3D acceleration in VirtualBox is accomplished by installing a special driver in the guest that transmits OpenGL calls through a local connection to the VirtualBox process running on the host. When used in conjunction with VirtualGL on a Linux or Solaris/x86 host, this solution provides a means of displaying Windows 3D applications remotely.
To use VirtualGL with VirtualBox, perform the following procedures:
Launch VirtualBox using vglrun.

The OpenGL tracing feature (VGL_TRACE or vglrun +tr) will not work with VirtualBox unless the virtual machine is started from the command line, using

vglrun VirtualBoxVM -startvm VM-name-or-UUID

Certain other VirtualGL features, such as VGL_LOGO, do not work with VirtualBox.
VirtualGL can also be used with VMware Workstation, and the concept is basically the same as that of VirtualBox. As with VirtualBox, VMware uses a special driver in the guest O/S to intercept the OpenGL commands and marshal them to the host O/S, where VirtualGL picks them up.
To use VirtualGL with VMware Workstation, perform the following procedures:
Launch VMware Workstation using vglrun.

Enable VirtualGL’s profiling output (by passing an argument of +pr to vglrun above) or set the VGL_LOGO environment variable to 1 in order to verify that VirtualGL is loaded and working.
3D Application | Versions Known to Require Recipe | Platform | Recipe | Notes
---|---|---|---|---
Abaqus | v6 | Linux | It is necessary to add import os and os.environ['ABAQUS_EMULATE_OVERLAYS'] = "1" to abaqus-install-dir/abaqus-version/site/abaqus_v6.env to make Abaqus v6 work properly with VirtualGL. If this is not done, then the application may fail to launch, it may fail to display any OpenGL-rendered pixels, or those pixels may become corrupted whenever other windows obscure them. | VirtualGL does not support transparent overlays, since those cannot be rendered in an off-screen buffer. Setting ABAQUS_EMULATE_OVERLAYS to 1 causes the application to emulate overlay rendering instead of using actual transparent overlays. This workaround is known to be necessary when running Abaqus 6.9 and 6.10.
Abaqus | v6 | Linux | vglrun -nodl abaqus-path/abaqus | User reports indicate that Abaqus 6.9 will not work properly if VirtualGL’s dlopen() interposer (libdlfaker.so) is preloaded into it. This may be true for other versions of Abaqus as well.
Cadence Allegro | v16.5 | Linux | vglrun +sync allegro | Allegro relies on mixed X11/OpenGL rendering, and thus certain features (specifically the pcb_cursor_infinite cursor style) do not work properly unless VGL_SYNC is enabled. If VGL_SYNC is not enabled, then the crosshairs may remain on the screen. Since VGL_SYNC automatically enables the X11 transport and disables frame spoiling, it is highly recommended that you use an X proxy when VGL_SYNC is enabled. See Section 19.1 for further information.
Animator | v4 | Linux | Comment out the line that reads unsetenv LD_PRELOAD in the a4 script, then launch Animator 4 using vglrun -ge a4 | When the a4 script unsets LD_PRELOAD, this prevents VirtualGL from being loaded into the application. Animator 4 additionally checks the value of LD_PRELOAD and attempts to unset it from inside the application. Using vglrun -ge to launch the application fools Animator 4 into thinking that LD_PRELOAD is unset.
ANSA | v12.1.0 | Linux | Add LD_PRELOAD_SAVE=$LD_PRELOAD and export LD_PRELOAD= to the top of the ansa.sh script, then add export LD_PRELOAD=$LD_PRELOAD_SAVE just prior to the ${ANSA_EXEC_DIR}bin/ansa_linux${ext2} line. | The ANSA startup script directly invokes /lib/libc.so.6 to query the glibc version. Since the VirtualGL faker libraries depend on libc, preloading VirtualGL when directly invoking libc.so.6 creates an infinite loop. Thus, it is necessary to disable the preloading of VirtualGL in the application script and then re-enable it prior to launching the actual application.
AutoForm | v4.0x | All | vglrun +sync xaf_version | AutoForm relies on mixed X11/OpenGL rendering, and thus certain features (particularly the “Dynamic Section” dialog and “Export Image” feature) do not work properly unless VGL_SYNC is enabled. Since VGL_SYNC automatically enables the X11 transport and disables frame spoiling, it is highly recommended that you use an X proxy when VGL_SYNC is enabled. See Section 19.1 for further information.
Cedega | v6.0.x | Linux | Add export LD_PRELOAD=libvglfaker.so to the top of ~/.cedega/.winex_ver/winex-version/bin/winex3, then run Cedega as you would normally (without vglrun.) Since vglrun is not being used, it is necessary to use environment variables or the VirtualGL Configuration dialog to modify VirtualGL’s configuration. | The actual binary (WineX) that uses OpenGL is buried beneath several layers of Python and shell scripts. The LD_PRELOAD variable does not get propagated down from the initial shell that invoked vglrun.
Google Chrome/Chromium | v85 and later | Linux | vglrun google-chrome --in-process-gpu --use-gl=egl or vglrun chromium --in-process-gpu --use-gl=egl | The --in-process-gpu option causes Chrome/Chromium to use a thread rather than a separate process for 3D rendering, which prevents it from complaining about the X11 function calls that the VirtualGL Faker makes. The --use-gl=egl option forces Chrome/Chromium to use desktop OpenGL rather than ANGLE, which works around an issue whereby, when using ANGLE, Chrome/Chromium assumes that every X visual has an EGL framebuffer configuration associated with it.
Compiz | All | Linux | Set the VGL_WM environment variable to 1 prior to launching the window manager with vglrun, or pass an argument of +wm to vglrun. | See Section 19.1 for further information.
Mozilla Firefox | v93 and earlier | Linux | Set the MOZ_DISABLE_CONTENT_SANDBOX environment variable to 1 prior to launching the application with vglrun | The content sandbox in Firefox v93 and earlier prevents VirtualGL from opening an X display connection to the 3D X server, which causes WebGL tabs to crash when using the GLX back end. Some users have reported that disabling the content sandbox is also necessary when using the EGL back end with certain GPUs.
ANSYS Fluent (when launched from ANSYS Workbench) | v16 and later | Linux | Set the FLUENT_WB_OPTIONAL_ARGS environment variable to -driver opengl and the CORTEX_PRE environment variable to /opt/VirtualGL/bin/vglrun. | If these environment variables are not set, Fluent will use software OpenGL when launched from ANSYS Workbench.
Heretic II | All | Linux | vglrun heretic2 +set vid_ref glx |
ANSYS HFSS, ANSYS ICEM CFD, Roxar RMS | All | Linux | Set the VGL_SPOILLAST environment variable to 0 prior to launching the application with vglrun | These applications draw node highlighting and/or rubber banding directly to the front buffer. In order for these front buffer operations to be displayed properly, it is necessary to use the “spoil first” frame spoiling algorithm whenever the application calls glFlush(). See Section 19.1 for further information.
Intel OpenCL ICD | All | Linux | vglrun -ld path-to-Intel-OpenCL-libs application | The Intel OpenCL installable client driver (ICD) is linked with a run-time library search path (rpath) of $ORIGIN, which would normally have the same effect as adding the directory in which the ICD is installed (default: /opt/intel/opencl/lib64 on 64-bit systems) to LD_LIBRARY_PATH. However, when VirtualGL is interposing the dlopen() function (which it does by default), this causes the actual dlopen() system calls to come from libdlfaker.so, so $ORIGIN will resolve to the directory in which the VirtualGL faker libraries are installed. This causes the dlopen() calls within the Intel ICD to fail, and because the ICD apparently does not check the return value of those calls, a segfault occurs. The workaround is simply to add the Intel ICD library path to LD_LIBRARY_PATH, which is most easily accomplished with vglrun -ld.
Mathematica | v7 | Linux | Set the VGL_ALLOWINDIRECT environment variable to 1 prior to launching the application with vglrun. Note that VGL_ALLOWINDIRECT requires the GLX back end. | Mathematica 7 will not draw the axis numbers on 3D charts correctly unless it is allowed to create an indirect OpenGL context. See Section 19.1 for further information.
MATLAB | All | Linux | vglrun /usr/local/MATLAB/version/bin/matlab -nosoftwareopengl | MATLAB will automatically use its built-in (unaccelerated) OpenGL implementation if it detects that it is running in a remote display environment. More specifically, it will always enable software OpenGL if the X server has an X extension called VNC-EXTENSION, which is the case with TurboVNC, TigerVNC, and RealVNC.
PyTorch | All | Linux | vglrun -ld path-to-PyTorch-libs application or vglrun -nodl application | The PyTorch module and its dependency libraries are linked with a run-time library search path (rpath) of $ORIGIN, which would normally have the same effect as adding the directory in which the module is installed (for instance, /usr/local/lib64/python3.6/site-packages/torch/lib) to LD_LIBRARY_PATH. However, when VirtualGL is interposing the dlopen() function (which it does by default), this causes the actual dlopen() system calls to come from libdlfaker.so, so $ORIGIN will resolve to the directory in which the VirtualGL faker libraries are installed. This causes the dlopen() calls within the PyTorch module to fail. The workaround is to add the PyTorch module path to LD_LIBRARY_PATH, which is most easily accomplished with vglrun -ld, or to disable VirtualGL’s dlopen() interposer.
Tecplot 360 | 2011 and earlier | Linux | Set the VGL_GLFLUSHTRIGGER environment variable to 0 prior to launching the application with vglrun | When running Tecplot 360 with VirtualGL in a high-performance X proxy, flashing artifacts will be produced when the user zooms/pans/rotates the scene, unless VirtualGL is instructed not to use glFlush() as a frame trigger. This has been fixed in Tecplot 2012 and later. See Section 19.1 for further information.
Unity Hub / Unity Editor | v3.7.0 / v2021.X and later | Linux | vglrun unityhub --use-gl=egl | The --use-gl=egl option forces Unity to use desktop OpenGL rather than ANGLE, which works around an issue whereby, when using ANGLE, Unity assumes that every X visual has an EGL framebuffer configuration associated with it.
Stereographic rendering is a feature of OpenGL that creates separate rendering buffers for the left and right eyes and allows a 3D application to render a different image into each buffer. How the rendered stereo frames are subsequently displayed depends on the particulars of the GPU and the user’s environment. VirtualGL can support stereographic applications in one of two ways: (1) by sending the stereo image pairs to the VirtualGL Client to be displayed in stereo by the client’s GPU, or (2) by combining each stereo image pair into a single image that can be viewed with traditional anaglyphic 3D glasses or a passive stereo system, such as a 3D TV.
The name “quad-buffered stereo” refers to the fact that OpenGL uses four buffers (left front, right front, left back, and right back) to support stereographic rendering with double buffering. GPUs with quad-buffered stereo capabilities generally provide some sort of synchronization signal that can be used to control various types of active stereo 3D glasses. Some also support “passive stereo”, which requires displaying the left and right eye buffers to different monitor outputs. VirtualGL supports quad-buffered stereo by rendering the stereo images on the server and sending the image pairs across the network to be displayed on the client.
In most cases, VirtualGL does not require that a GPU be present in the client. However, a GPU is required to display stereo image pairs, so one must be present in any client that will use VirtualGL’s quad-buffered stereo feature. Since the GPU is only being used to draw images, it need not necessarily be a high-end GPU. Generally, the least expensive GPU that has stereo capabilities will work fine in the client. If using the GLX back end, the VirtualGL server must also have a GPU that supports stereo, since this is the only way that VirtualGL can create a stereo off-screen buffer.
When a 3D application tries to render something in stereo, VirtualGL will default to using quad-buffered stereo rendering if the 2D X server supports OpenGL and has stereo visuals available (not currently supported in Cygwin/X.) Otherwise, VirtualGL will fall back to using anaglyphic stereo (see below.) It is usually necessary to explicitly enable stereo in the graphics driver configuration for both the client and, if using the GLX back end, the VirtualGL server. The Troubleshooting section below lists a way to verify that both the 3D X server and the 2D X server have stereo visuals available.
In quad-buffered mode, VirtualGL reads back both the left and right eye
buffers on the server and sends the contents as a pair of compressed
images to the VirtualGL Client. The VirtualGL Client then decompresses
both images and draws them as a single stereo frame to the 2D X server
using glDrawPixels()
. It should thus be no surprise that
enabling quad-buffered stereo in VirtualGL decreases performance by 50%
or more and uses twice the network bandwidth to maintain the same frame
rate as mono.
Quad-buffered stereo requires the VGL Transport. Attempting to enable it with any other image transport will cause VGL to fall back to anaglyphic stereo mode.
Anaglyphic stereo is the type of stereographic display used by old 3D movies. It typically relies on a set of 3D glasses consisting of red transparency film over the left eye and cyan transparency film over the right eye, although green/magenta and blue/yellow schemes can be used as well. To generate a 3D anaglyph, one color channel from the left eye buffer is combined with the other two color channels from the right eye buffer, thus allowing a monographic frame to contain stereo data. For instance, in the case of red/cyan, the red channel is taken from the left eye buffer, and the green and blue channels are taken from the right eye buffer. From the point of view of VirtualGL, an anaglyphic rendered frame is the same as a monographic rendered frame, so anaglyphic frames can be sent using any image transport to any type of client, regardless of the client’s capabilities.
VirtualGL uses anaglyphic stereo if it detects that a 3D application has rendered something in stereo but quad-buffered stereo is not available, either because the client doesn’t support it or because a transport other than the VGL Transport is being used. Anaglyphic stereo provides a cheap and easy way to view stereographic applications in X proxies and on clients that do not support quad-buffered stereo. Additionally, anaglyphic stereo performs much faster than quad-buffered stereo, since it does not require sending twice the data to the client.
As with quad-buffered stereo, anaglyphic stereo requires that the VirtualGL server have stereo rendering capabilities if using the GLX back end. However, anaglyphic stereo does not require any 3D rendering capabilities (stereo or otherwise) on the client.
As with anaglyphic stereo, passive stereo combines a stereographic image pair into a single image (a “stereogram”), and thus it can be used with any image transport. However, unlike anaglyphic stereo, passive stereo must be used with specific display hardware, such as a 3D TV or monitor, that decodes the left and right eye images from the stereogram and sends them separately to a pair of 3D glasses (typically, this is accomplished by way of polarization.)
VirtualGL supports three methods of encoding stereograms: interleaved, top/bottom, and side-by-side.
Most 3D TVs/monitors can be configured to decode at least one of these types of stereograms. In order for this to work, however, the 3D drawing area must be full-screen.
A particular stereo mode can be selected by setting the VGL_STEREO environment variable or by using the -st argument to vglrun. See Section 19.1 for more details.
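For example, assuming that rc is the VGL_STEREO value for red/cyan anaglyphic stereo (see Section 19.1 for the authoritative list of values), anaglyphic stereo could be forced like this:

vglrun -st rc 3D-application-executable-or-script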
VirtualGL includes a modified version of glxinfo that can be used to determine whether or not the 2D and 3D X servers have stereo visuals enabled.
Run the following command sequence on the VirtualGL server to determine whether the 3D X server has a suitable visual for stereographic rendering:
xauth merge /etc/opt/VirtualGL/vgl_xauth_key
/opt/VirtualGL/bin/glxinfo -display :n -c -v
(where n is the display number of the 3D X server.) One or more of the visuals should say stereo=1 and should list Pbuffer as one of the Drawable Types.
Run the following command sequence on the VirtualGL server to determine whether the 2D X server has a suitable visual for stereographic rendering.
/opt/VirtualGL/bin/glxinfo -v
In order to use stereo, one or more of the visuals should say stereo=1.
The easiest way to uncover bottlenecks in VirtualGL’s image pipeline is to set the VGL_PROFILE environment variable to 1 on both server and client (passing an argument of +pr to vglrun on the server has the same effect.) This will cause VirtualGL to measure and report the throughput of various stages in the pipeline. For example, here are some measurements from a dual Pentium 4 server communicating with a Pentium III client on a 100-megabit LAN:
Readback   - 43.27 Mpixels/sec - 34.60 fps
Compress 0 - 33.56 Mpixels/sec - 26.84 fps
Total      -  8.02 Mpixels/sec -  6.41 fps - 10.19 Mbits/sec (18.9:1)
Decompress - 10.35 Mpixels/sec -  8.28 fps
Blit       - 35.75 Mpixels/sec - 28.59 fps
Total      -  8.00 Mpixels/sec -  6.40 fps - 10.18 Mbits/sec (18.9:1)
The total throughput of the pipeline is 8.0 Megapixels/sec, or 6.4 frames/sec, indicating that our frame is 8.0 / 6.4 = 1.25 Megapixels in size (a little less than 1280 x 1024 pixels.) The readback and compress stages, which occur in parallel on the server, are obviously not slowing things down, and we’re only using 1/10 of our available network bandwidth. Looking at the client, however, we discover that its slow decompression speed (10.35 Megapixels/second) is the primary bottleneck. Decompression and blitting on the client cannot be done in parallel, so the aggregate performance is the combined serial throughput of those two stages: 1 / (1/10.35 + 1/35.75) = 8.0 Mpixels/sec. In this case, we could improve the performance of the whole system by simply using a client with a faster CPU.
This example is meant to demonstrate how the client can sometimes be the primary impediment to VirtualGL’s end-to-end performance. Using “modern” hardware in both the server and client, VirtualGL can easily stream 50+ Megapixels/sec across a LAN, as of this writing.
By default, VirtualGL will only transport a frame if the image transport is ready to receive it. If VirtualGL detects that the 3D application has finished rendering a new frame but there are already frames waiting in the queue to be transported, then those untransported frames are dropped (“spoiled”), and the new frame is promoted to the head of the queue. This prevents a backlog of frames on the server, which would cause a perceptible delay in the responsiveness of interactive 3D applications. However, when running non-interactive 3D applications (particularly benchmarks), frame spoiling should always be disabled. With frame spoiling disabled, the server will render frames only as quickly as VirtualGL can transport those frames, which will conserve server resources as well as allow OpenGL benchmarks to accurately measure the end-to-end performance of VirtualGL. With frame spoiling enabled, OpenGL benchmarks will report meaningless data, since the rate at which the server can render frames is decoupled from the rate at which VirtualGL can transport those frames.
In most X proxies (including VNC), there is effectively another layer of frame spoiling, since the rate at which the X proxy can send frames to the client is decoupled from the rate at which VirtualGL can draw rendered frames into the X proxy. Thus, even if frame spoiling is disabled in VirtualGL, OpenGL benchmarks will still report inaccurate data if they are run in such X proxies. TCBench, described below, provides a limited solution to this problem.
To disable frame spoiling, set the VGL_SPOIL environment variable to 0 on the VirtualGL server or pass an argument of -sp to vglrun. See Section 19.1 for further information.
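For instance, to measure the end-to-end performance of VirtualGL with frame spoiling disabled, using the GLXSpheres benchmark described later in this chapter:

vglrun -sp /opt/VirtualGL/bin/glxspheres64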
VirtualGL includes several tools that can be useful for diagnosing performance problems with the system.
NetTest is a network benchmark that uses the same network I/O classes as VirtualGL. It can be used to test the latency and throughput of any TCP/IP connection. nettest is installed in /opt/VirtualGL/bin by default. For Windows users, a native Windows version of NetTest is included in the VirtualGL-Utils package, which is distributed alongside VirtualGL.
To use NetTest, first start up the NetTest server on one end of the connection:
nettest -server
Next, start the NetTest client on the other end of the connection:
nettest -client server
Replace server with the hostname or IP address of the machine on which the NetTest server is running.
The NetTest client will produce output similar to the following:
TCP transfer performance between localhost and server:

Transfer size  1/2 Round-Trip  Throughput    Throughput
(bytes)        (msec)          (MBytes/sec)  (Mbits/sec)
1              0.093402        0.010210      0.085651
2              0.087308        0.021846      0.183259
4              0.087504        0.043594      0.365697
8              0.088105        0.086595      0.726409
16             0.090090        0.169373      1.420804
32             0.093893        0.325026      2.726514
64             0.102289        0.596693      5.005424
128            0.118493        1.030190      8.641863
256            0.146603        1.665318      13.969704
512            0.205092        2.380790      19.971514
1024           0.325896        2.996542      25.136815
2048           0.476611        4.097946      34.376065
4096           0.639502        6.108265      51.239840
8192           1.033596        7.558565      63.405839
16384          1.706110        9.158259      76.825049
32768          3.089896        10.113608     84.839091
65536          5.909509        10.576174     88.719379
131072         11.453894       10.913319     91.547558
262144         22.616389       11.053931     92.727094
524288         44.882406       11.140223     93.450962
1048576        89.440702       11.180592     93.789603
2097152        178.536997      11.202160     93.970529
4194304        356.754396      11.212195     94.054712
We can see that the throughput peaks at about 94 megabits/sec, which is pretty good for a 100-megabit connection. We can also see that, for small transfer sizes, the round-trip time is dominated by latency. The “latency” is the same thing as the one-way (1/2 round-trip) transit time for a zero-byte packet, which is about 93 microseconds in this case.
CPUstat is available only for Linux and is installed in the same place
as NetTest (/opt/VirtualGL/bin by
default.) It measures the average, minimum, and peak usage for all CPU
cores combined and for each CPU core individually. On Windows, this
same functionality is provided in the Windows Performance Monitor, which
is part of the operating system. On Solaris, the same data can be
obtained using the vmstat
program.
CPUstat measures the CPU usage over a given sample period (a few seconds) and continuously reports how much each CPU core was utilized since the last sample period. Output for a particular sample looks something like this:
ALL : 51.0 (Usr= 47.5 Nice= 0.0 Sys= 3.5) / Min= 47.4 Max= 52.8 Avg= 50.8
cpu0: 20.5 (Usr= 19.5 Nice= 0.0 Sys= 1.0) / Min= 19.4 Max= 88.6 Avg= 45.7
cpu1: 81.5 (Usr= 75.5 Nice= 0.0 Sys= 6.0) / Min= 16.6 Max= 83.5 Avg= 56.3
The first column indicates what percentage of time the CPU core was active since the last sample period. This is then broken down into what percentage of time the CPU core spent running user, nice, and system/kernel code. ALL indicates the average utilization across all CPU cores since the last sample period. Min, Max, and Avg indicate a running minimum, maximum, and average of all samples since CPUstat was started. Generally, if a 3D application’s CPU usage is fairly steady, then you can run CPUstat for a bit and wait for the Max and Avg values in the ALL category to stabilize, and that will tell you the application’s peak and average CPU utilization.
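A minimal usage sketch (assuming that the executable is named cpustat, following the lowercase naming of the other tools in /opt/VirtualGL/bin): start the 3D application under vglrun in one shell on the VirtualGL server, then run CPUstat in another shell and watch the ALL line until the Max and Avg values stabilize.

/opt/VirtualGL/bin/cpustat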
TCBench was born out of the need to compare VirtualGL’s performance to that of other thin client software, some of which had frame spoiling features that could not be disabled. TCBench measures the frame rate of a thin client system as seen from the client’s point of view. It does this by attaching to one of the windows on the client and continuously reading back a small area at the center of the window. While this may seem to be a somewhat non-rigorous test, experiments have shown that, if care is taken to ensure that the 3D application is updating the center of the window with every frame (such as in a spin animation), TCBench can produce quite accurate results. It has been sanity checked with VirtualGL’s internal profiling mechanism and with a variety of system-specific techniques, such as monitoring redraw events on the client’s windowing system.
TCBench is installed in /opt/VirtualGL/bin by default. For Windows users, a native Windows version of TCBench is included in the VirtualGL-Utils package, which is distributed alongside VirtualGL. Run tcbench from the command line, and it will prompt you to click in the window you want to benchmark. That window should already have an automated animation of some sort running before you launch TCBench. Note that GLXSpheres (see below) is an ideal benchmark to use with TCBench, since GLXSpheres draws a new sphere to the center of its window every time it renders a frame.
tcbench -? lists the relevant command-line arguments, which can be used to adjust the benchmark time, the sampling rate, and the x and y offset of the sampling area within the window.
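As an illustrative sketch (paths assume the default installation directory, and the remote display session is assumed to be already established), the client-side frame rate of a VirtualGL session could be measured like this:

# In the remote session, start an animation that continuously updates the center of its window:
vglrun -sp /opt/VirtualGL/bin/glxspheres64 &
# On the machine whose display you are physically viewing, attach TCBench and click in the GLXSpheres window when prompted:
tcbench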
GLXSpheres is a benchmark that produces very similar images to nVidia’s (long-discontinued) SphereMark benchmark. In the early days of VirtualGL’s existence, it was discovered (quite by accident) that SphereMark was a pretty good test of VirtualGL’s end-to-end performance, because that benchmark generated images with about the same proportion of solid color, and similar frequency components, to the images generated by volume visualization applications.
Thus, the goal of GLXSpheres was to create an open source Un*x version of SphereMark (SphereMark was for Windows only) completely from scratch. GLXSpheres does not use any code from the original benchmark, but it does attempt to mimic the visual output of the original as closely as possible. GLXSpheres lacks some of the advanced rendering features of the original, such as the ability to use vertex arrays, but since GLXSpheres was primarily designed as a benchmark for VirtualGL, display lists are more than fast enough for that purpose.
GLXSpheres has some additional modes that its predecessor lacked, modes that are designed specifically to test the performance of various VirtualGL features:
Stereographic rendering mode (glxspheres -s)

Immediate mode (glxspheres -m): To see the difference that VirtualGL makes, run glxspheres -m over a remote X connection, then run vglrun -sp glxspheres -m over the same connection and compare. Immediate mode does not use display lists, so when immediate-mode OpenGL is rendered indirectly (over a remote X connection), this causes every OpenGL command to be sent as a separate network request to the X server … with every frame. Many 3D applications do not use display lists, because the geometry they are rendering is dynamic or for other reasons, so this test models how such applications might perform when displayed remotely without VirtualGL.

Interactive mode (glxspheres -i): Comparing the interactive frame rate (vglrun glxspheres -i) with the non-interactive frame rate (vglrun -sp glxspheres) allows you to quantify the effect of network latency on the performance of interactive applications in a VirtualGL environment.
GLXSpheres is installed in /opt/VirtualGL/bin by default. 64-bit VirtualGL builds name this program glxspheres64 so as to allow both a 64-bit and a 32-bit version of GLXSpheres to be installed on the same system.
This version of VirtualGL also provides an EGL/X11 equivalent of GLXSpheres (EGLXSpheres), which works identically to GLXSpheres except for the absence of modes (including stereographic rendering) that the EGL API does not support.
Several of VirtualGL’s operational parameters can be changed on the fly once a 3D application has been launched. This is accomplished by using the VirtualGL Configuration dialog, which can be popped up by holding down the Ctrl and Shift keys and pressing the F9 key while any one of the 3D application’s windows is active.
You can use this dialog to adjust various image compression and display parameters in VirtualGL. Changes are communicated immediately to VirtualGL.
This menu selects the image transport/compression type; each entry is equivalent to the corresponding VGL_COMPRESS setting:
VGL_COMPRESS=proxy : This option can be activated at any time, regardless of which transport was active when VirtualGL started.
VGL_COMPRESS=jpeg : This option is only available if the VGL Transport was active when VirtualGL started.
VGL_COMPRESS=rgb : This option is only available if the VGL Transport was active when VirtualGL started.
VGL_COMPRESS=xv : This option is only available if the 2D X server has the X Video extension and the X Video implementation supports the YUV420P (AKA “I420”) image format.
VGL_COMPRESS=yuv : This option is only available if the 2D X server has the X Video extension, the X Video implementation supports the YUV420P (AKA “I420”) image format, and the VGL Transport was active when VirtualGL started.
See Section 19.1 for more information about the VGL_COMPRESS configuration option.
If an image transport plugin is loaded, then this menu’s name changes to “Image Compression”, and it has options “0” through “10”.
This menu selects the level of chrominance subsampling; each entry is equivalent to the corresponding VGL_SUBSAMP setting:
VGL_SUBSAMP=gray
VGL_SUBSAMP=1x
VGL_SUBSAMP=2x
VGL_SUBSAMP=4x
See Section 19.1 for more information about the VGL_SUBSAMP configuration option.
If an image transport plugin is loaded, then this menu has two additional options, “8X” and “16X”.
Corresponds to VGL_QUAL. See Section 19.1 for more information about the VGL_QUAL configuration option.
If an image transport plugin is loaded, then this gadget’s name changes to “Image Quality”.
Corresponds to VGL_GAMMA. This enables VirtualGL’s internal gamma correction system with the specified gamma correction factor. See Section 19.1 for more information about the VGL_GAMMA configuration option.
Corresponds to VGL_SPOIL. See Sections 17.2 and 19.1 for more information about the VGL_SPOIL configuration option.
Corresponds to VGL_INTERFRAME. See Section 19.1 for more information about the VGL_INTERFRAME configuration option.
Corresponds to VGL_STEREO, with options VGL_STEREO=left, VGL_STEREO=right, VGL_STEREO=quad, VGL_STEREO=rc, VGL_STEREO=gm, VGL_STEREO=by, VGL_STEREO=i, VGL_STEREO=tb, and VGL_STEREO=ss. See Section 19.1 for more information about the VGL_STEREO configuration option.
Corresponds to VGL_FPS. See Section 19.1 for more information about the VGL_FPS configuration option.
You can set the VGL_GUI environment variable to change the key sequence used to pop up the VirtualGL Configuration dialog. If the default of ctrl-shift-f9 is not suitable, then set VGL_GUI to any combination of ctrl, shift, alt, and one of f1, f2, ..., f12 (these are not case sensitive.)
For example:
export VGL_GUI=ctrl-f9
will cause the dialog box to pop up whenever Ctrl and F9 are pressed.
To disable the VirtualGL dialog altogether, set VGL_GUI to none.
VirtualGL monitors the 3D application’s X event loop to determine whenever a particular key sequence has been pressed. If a 3D application is not monitoring key press events in its X event loop, then the VirtualGL Configuration dialog might not pop up at all. There is unfortunately no workaround for this, but it should be a rare occurrence.
You can control the operation of the VirtualGL faker libraries in four different ways. Each method of configuration takes precedence over the previous method:
Setting a configuration environment variable (export VGL_XXX=whatever)
Passing a command-line argument to vglrun. This effectively overrides any previous environment variable setting corresponding to that configuration option.
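As an illustration of the precedence rules (my3dapp is a hypothetical application name), a vglrun argument overrides an environment variable set in the same shell:

export VGL_COMPRESS=rgb
vglrun -c jpeg my3dapp    # the -c argument wins, so JPEG compression is used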
Image transport plugins are free to handle or ignore any configuration option as they see fit.
Environment Variable | VGL_ALLOWINDIRECT = 0 | 1 |
Summary | When using the GLX back end, allow 3D applications to request an indirect OpenGL context |
Image Transports | All |
Default Value | 0 (all OpenGL contexts use direct rendering) |
By default, VirtualGL returns a direct OpenGL context even if the 3D application requests an indirect one, since glReadPixels() can perform very slowly if an indirect OpenGL context is used. Setting VGL_ALLOWINDIRECT to 1 will cause VirtualGL to honor the application’s request for an indirect OpenGL context.
EGL does not support indirect OpenGL contexts, so this option requires the GLX back end.
Environment Variable | VGL_CLIENT = {c} |
vglrun argument |
-cl {c} |
Summary | {c} = the hostname or IP address of the client |
Image Transports | VGL, Custom (if supported) |
Default Value | Automatically set by vglconnect or vglrun |
VGL_CLIENT should be set to the hostname or IP address of the machine on which the VirtualGL Client is running. Normally, VGL_CLIENT is set automatically when executing vglconnect or vglrun, so don’t override it unless you know what you’re doing.
Environment Variable | VGL_COMPRESS = proxy | jpeg | rgb | xv | yuv |
vglrun argument |
-c proxy | jpeg | rgb | xv | yuv |
Summary | Set image transport and image compression type |
Image Transports | All |
Default Value | (See description) |
proxy = Send rendered frames in uncompressed form using the X11 Transport. This is useful when displaying to a 2D X server or X proxy on the VirtualGL server (see Section 9.1.)
jpeg = Compress rendered frames using JPEG and send them using the VGL Transport. This is useful when displaying to a 2D X server on a machine other than the VirtualGL server (see Chapter 8.)
rgb = Encode rendered frames as uncompressed RGB and send them using the VGL Transport. This is useful when displaying to a 2D X server or X proxy on a machine that is connected to the VirtualGL server by a very fast network (see Section 9.2.)
xv = Encode rendered frames as YUV420P (planar YUV with 4X chrominance subsampling) and display them to the 2D X server using the XV Transport. This transport is designed for use with X proxies that support the X Video extension (see Chapter 10.)
yuv = Encode rendered frames as YUV420P, send them using the VGL Transport, and display them to the 2D X server using the X Video extension. This greatly reduces the CPU usage on both server and client and uses only about half the network bandwidth of RGB, but the use of 4X chrominance subsampling does produce some visible artifacts (see Chapter 10.)
If VGL_COMPRESS is not specified, then the default is set as follows: if the DISPLAY environment variable begins with : or unix:, then VirtualGL assumes that the 2D X server is on the same machine and uses proxy compression by default. Otherwise, VirtualGL uses jpeg compression by default.
If an image transport plugin is being used, then you can set VGL_COMPRESS to any numeric value >= 0 (default value = 0.) The plugin can choose to respond to this value as it sees fit.
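To make the default behavior concrete, here is a brief sketch (display names and the application name are placeholders):

# DISPLAY points to an X server or X proxy on the same machine, so proxy compression is the default:
DISPLAY=:1 vglrun my3dapp
# DISPLAY points to a remote 2D X server, so jpeg compression is the default; -c overrides it:
DISPLAY=myclient:0 vglrun -c rgb my3dapp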
Environment Variable | VGL_DISPLAY = {d} |
vglrun argument |
-d {d} |
Summary | {d} = the X display/screen or EGL device to use for 3D rendering |
Image Transports | All |
Default Value | :0 |
Setting VGL_DISPLAY to (or invoking vglrun -d with) :0.1 would cause VirtualGL to use the GLX back end and redirect all of the OpenGL rendering from the 3D application to a GPU attached to Screen 1 on X display :0. Setting VGL_DISPLAY to (or invoking vglrun -d with) a DRI device path (such as /dev/dri/card0) or an EGL device ID (such as egl0) would cause VirtualGL to use the EGL back end and redirect all of the OpenGL rendering from the 3D application to the specified EGL device. /opt/VirtualGL/bin/eglinfo -e lists all valid EGL device IDs and their associated DRI device paths.
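For instance, a minimal sketch of selecting the EGL back end (device paths vary from system to system, and the application name is a placeholder):

/opt/VirtualGL/bin/eglinfo -e       # list valid EGL device IDs and DRI device paths
vglrun -d /dev/dri/card0 my3dapp    # use the EGL back end with the specified DRI device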
Environment Variable | VGL_EGLLIB = {l} |
Summary | {l} = the location of an alternate EGL library |
Image Transports | All |
VirtualGL normally obtains a pointer to the eglGetProcAddress() function from the EGL library against which it or the 3D application was linked (usually libEGL.so.1, in the system library path), and VGL will use that function to load any other “real” EGL functions that it needs to call (“real” as opposed to the “fake”, or “interposed”, versions of those functions that VirtualGL provides, which often modify the arguments or perform other operations before calling the “real” functions.) You can use the VGL_EGLLIB environment variable to specify the path of a dynamic library from which VirtualGL should load “real” EGL functions.
Environment Variable | VGL_EXCLUDE = {d1}[,{d2},{d3},...] |
Summary | {d1}[,{d2},{d3},...] = A comma-separated list of X displays/screens for which the VirtualGL Faker should be bypassed |
Image Transports | All |
Default Value | None |
The VGL_EXCLUDE environment variable specifies a list of X display names (for instance, :0.1) for which VirtualGL should not interpose any X11, GLX, EGL, OpenGL, XCB, or OpenCL calls. In other words, VirtualGL treats these displays as 3D X servers instead of 2D X servers and does not attempt to redirect 3D rendering away from them. When an X display connection is opened using XOpenDisplay(), VirtualGL checks whether the display name appears in the exclude list, and if so, all subsequent X11, GLX, EGL, OpenGL, and XCB calls intended for that display are allowed to pass through unimpeded. This variable is re-checked every time XOpenDisplay() is called, so it can be set dynamically from within a 3D application.
Environment Variable | VGL_EXITFUNCTION = exit | _exit | abort |
Summary | Specify the function that the VirtualGL Faker should call when a non-recoverable error occurs |
Image Transports | All |
Default Value | exit |
By default, VirtualGL calls exit() when a non-recoverable error occurs. However, that may not be appropriate for multithreaded applications that statically instantiate objects at the global scope, because calling exit() can cause global objects to be cleaned up before the threads that use them are terminated. Calling _exit() instead of exit() causes the application to exit immediately without cleaning up global objects, and calling abort() instead of exit() allows a core dump to be obtained.
Environment Variable | VGL_FAKEOPENCL = 0 | 1 |
vglrun argument |
-ocl / +ocl |
Summary | Disable/enable OpenCL interposer |
Image Transports | All |
Default Value | Disabled |
When enabled, VirtualGL’s OpenCL interposer interposes the clCreateContext() function and modifies its arguments before passing them to the “real” clCreateContext() function in libOpenCL. Since libOpenCL is not available on all platforms that VirtualGL supports, the OpenCL interposer is disabled by default.
Environment Variable | VGL_FAKEXCB = 0 | 1 |
vglrun argument |
-xcb / +xcb |
Summary | Disable/enable XCB interposer |
Image Transports | All |
Default Value | Enabled |
Environment Variable | VGL_FORCEALPHA = 0 | 1 |
Summary | Force the off-screen buffers used for 3D rendering to have an alpha channel |
Image Transports | All |
Default Value | 0 (honor the 3D application’s choice of visual attributes) |
Setting VGL_FORCEALPHA to 1 causes VirtualGL to always create off-screen buffers with alpha channels. This means that a 32-bit-per-pixel (BGRA) off-screen buffer will be created if the application requests a 24-bit-per-pixel visual. Setting VGL_FORCEALPHA might be necessary in order to use PBO readback mode with certain GPUs (as of this writing, nVidia GeForce adapters are known to require this.) See the VGL_READBACK option for further information.
Environment Variable | VGL_FPS = {f} |
vglrun argument |
-fps {f} |
Summary | Limit the end-to-end frame rate to {f} frames/second, where {f} is a floating point number > 0.0 |
Image Transports | VGL, X11, XV, Custom (if supported) |
Default Value | 0.0 (No limit) |
Limiting the end-to-end frame rate with VGL_FPS effectively limits the server’s 3D rendering frame rate as well.
Environment Variable | VGL_GAMMA = {g} |
vglrun argument |
-gamma {g} |
Summary | {g} = gamma correction factor |
Image Transports | All |
Default Value | 1.00 (no gamma correction) |
If VGL_GAMMA is set to an arbitrary floating point value, then VirtualGL will perform gamma correction on all of the rendered frames from the 3D application, using the specified value as the gamma correction factor. You can also specify a negative value to apply a “de-gamma” function. Specifying a gamma correction factor of G (where G < 0) is equivalent to specifying a gamma correction factor of -1/G.
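As an illustration (the application name is a placeholder), gamma correction can be specified on the vglrun command line, and the rule above means that a factor of -2.2 behaves like a de-gamma factor of 1/2.2 (about 0.45):

vglrun -gamma 2.2 my3dapp     # apply gamma correction with a factor of 2.2
vglrun -gamma -2.2 my3dapp    # equivalent to a gamma correction factor of 1/2.2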
Environment Variable | VGL_GLFLUSHTRIGGER = 0 | 1 |
Summary | Disable/enable using glFlush() as a frame trigger function |
Default Value | Enabled |
glFlush() is a sort of “asynchronous synchronization” command. It flushes the OpenGL command buffers, which generally has the effect of ensuring that the commands have been delivered to the GPU. However, unlike glFinish(), glFlush() does not wait until the commands have been rendered before it returns. The usage of glFlush() can vary widely from application to application.

When doing front buffer rendering, some 3D applications call glFlush() after each object is rendered. Some call it only at the end of the frame. Others call glFlush() much more often, even as frequently as every time a few primitives are rendered. This creates problems for VirtualGL, since it has to guess the application’s intent. Not all 3D applications that use front buffer rendering call glFinish() to signal the end of a frame, so VirtualGL cannot usually get away with ignoring glFlush(). However, some 3D applications call glFlush() so often that VirtualGL cannot get away with reading back/transporting a frame every time glFlush() is called, either (see VGL_SPOILLAST for more information on how VirtualGL tries to handle this, under normal circumstances.)

Other 3D applications call glFlush() very liberally and intend for it to be an intermediate rather than a final synchronization command. Such applications will call glFinish() after a sequence of glFlush() calls, so for those applications, using glFlush() as a frame trigger is a waste of resources and can sometimes create visual artifacts (for instance, if the application clears the front buffer with a particular color, calls glFlush(), then clears it again with another color. We wouldn’t mention it if it hadn’t happened before.) For such applications, setting VGL_GLFLUSHTRIGGER to 0 should make them display properly with VirtualGL. See Application Recipes for a list of 3D applications that are known to require this.
Environment Variable | VGL_GLLIB = {l} |
Summary | {l} = the location of an alternate OpenGL library |
Image Transports | All |
VirtualGL normally obtains a pointer to the glXGetProcAddress() or glXGetProcAddressARB() function from the OpenGL library against which it or the 3D application was linked (usually libGL.so.1, in the system library path), and VGL will use that function to load any other “real” OpenGL or GLX functions that it needs to call (“real” as opposed to the “fake”, or “interposed”, versions of those functions that VirtualGL provides, which often modify the arguments or perform other operations before calling the “real” functions.) You can use the VGL_GLLIB environment variable to specify the path of a dynamic library from which VirtualGL should load “real” GLX and OpenGL functions.
Environment Variable | VGL_GUI = {k} |
Summary | {k} = the key sequence used to pop up the VirtualGL Configuration dialog, or none to disable the dialog |
Image Transports | All |
Default Value | ctrl-shift-f9 |
Set VGL_GUI to some combination of shift, ctrl, alt, and one of f1, f2, ..., f12. You can also set VGL_GUI to none to disable the configuration dialog altogether. See Chapter 18 for more details.
Environment Variable | VGL_INTERFRAME = 0 | 1 |
Summary | Disable or enable interframe comparison |
Image Transports | VGL (JPEG, RGB), Custom (if supported) |
Default Value | Enabled |
Setting VGL_INTERFRAME to 0 disables interframe comparison.
When using the VGL Transport, interframe comparison is affected by the VGL_TILESIZE option.
Environment Variable | VGL_LOG = {l} |
Summary | Redirect all messages from VirtualGL to a log file specified by {l} |
Image Transports | All |
Default Value | Print all messages to stderr |
Environment Variable | VGL_LOGO = 0 | 1 |
Summary | Disable or enable the display of a VGL logo in the 3D window(s) |
Image Transports | All |
Default Value | Disabled |
Setting VGL_LOGO to 1 will cause VirtualGL to add a small logo to the bottom right-hand corner of all of the rendered frames from the 3D application. This is meant as a debugging tool to allow users to determine whether or not VirtualGL is active.
Environment Variable | VGL_NPROCS = {n} |
vglrun argument |
-np {n} |
Summary | {n} = the number of threads to use for compression/encoding |
Image Transports | VGL (JPEG, RGB), Custom (if supported) |
Default Value | 1 |
When using the VGL Transport, multithreaded compression is affected by the VGL_TILESIZE option.
Environment Variable | VGL_OCLLIB = {l} |
Summary | {l} = the location of an alternate OpenCL library |
Image Transports | All |
Default Value | libOpenCL.so.1 in the system library path |
Environment Variable | VGL_PORT = {p} |
vglrun argument |
-p {p} |
Summary | {p} = the TCP port to use when connecting to the VirtualGL Client |
Image Transports | VGL, Custom (if supported) |
Default Value | Read from X property stored by VirtualGL Client |
Environment Variable | VGL_PROFILE = 0 | 1 |
vglrun argument |
-pr / +pr |
Summary | Disable/enable profiling output |
Image Transports | VGL, X11, XV, Custom (if supported) |
Default Value | Disabled |
Environment Variable | VGL_QUAL = {q} |
vglrun argument |
-q {q} |
Summary | {q} = the JPEG compression quality, 1 <= {q} <= 100 |
Image Transports | VGL (JPEG), Custom (if supported) |
Default Value | 95 |
If using an image transport plugin, then this setting need not necessarily correspond to JPEG image quality. The plugin can choose to respond to the VGL_QUAL
option as it sees fit.
Environment Variable | VGL_READBACK = none | pbo | sync |
Summary | Specify the method used by VirtualGL to read back the rendered frames from the GPU |
Image Transports | All |
Default Value | pbo |
none = Do not read back the rendered frames at all. On rare occasions, it might be desirable to have VirtualGL redirect OpenGL rendering from an application’s window into an off-screen buffer but not automatically read back and transport the rendered frames. Some 3D applications have their own mechanisms for reading back the rendered frames, so setting VGL_READBACK=none disables VirtualGL’s readback mechanism and prevents duplication of effort. By passing appropriate values of VGL_DISPLAY and VGL_READBACK to each MPI process, it is possible to make all of the ParaView server processes render to off-screen buffers on different GPUs while preventing VirtualGL from displaying any pixels except those generated by Process 0.

pbo = PBO readback mode. Attempt to use pixel buffer objects (PBOs) to read back the rendered frames from the GPU. A PBO is an opaque memory buffer managed by OpenGL, so it can be locked down for direct DMA transfers. This improves readback performance as well as makes the readback operation non-blocking. Because PBOs are managed buffers, VirtualGL has to perform an additional memory copy to transfer a rendered frame out of the PBO and into the image transport’s buffer. However, on high-end GPUs, PBO readback mode will still generally perform better than synchronous readback mode, even with this additional memory copy. Furthermore, since the non-blocking nature of PBO readback reduces the load on the GPU, PBOs can improve performance dramatically when multiple simultaneous users are sharing a professional-grade GPU. On certain GPUs, setting the VGL_FORCEALPHA option to 1 could alleviate issues with PBO readback.

sync = Synchronous readback mode. This disables the use of PBOs altogether, which causes VirtualGL to always use blocking readbacks.

Setting VGL_VERBOSE=1 will cause VirtualGL to print the current readback mode being used, as well as the pixel format requested by the readback operation and the pixel format of the off-screen buffer. Additionally, a notification will be printed if VirtualGL falls back from PBO readback mode to synchronous readback mode.
Environment Variable | VGL_REFRESHRATE = {r} |
Summary | {r} = the “virtual” refresh rate, in Hz, for the GLX_EXT_swap_control and GLX_SGI_swap_control extensions and the eglSwapInterval() function |
Image Transports | All |
Default Value | 60.0 |
The GLX_EXT_swap_control and GLX_SGI_swap_control extensions and the eglSwapInterval() function allow applications to specify that buffer swaps should be synchronized with the refresh rate of the monitor. When one of the aforementioned extensions or the aforementioned function is used, glXSwapBuffers() or eglSwapBuffers() will not return until a specified number of refreshes (the “swap interval”) has occurred. Although refresh rate has no meaning when rendering into an off-screen buffer, VirtualGL uses an internal timer to emulate the refresh rate so that 3D applications can control their own frame rate. (This is often used by games, for instance, in which maintaining a constant frame rate is important.) Setting VGL_REFRESHRATE changes the interval of VirtualGL’s internal timer.
Environment Variable | VGL_SAMPLES = {s} |
vglrun argument |
-ms {s} |
Summary | Force OpenGL multisampling to be enabled with {s} samples ({s} = 0 to force OpenGL multisampling to be disabled) |
Image Transports | All |
Default Value | Allow the 3D application to determine the level of multisampling |
Driver-level mechanisms for overriding the multisampling level (such as the __GL_FSAA_MODE environment variable) do not work with off-screen buffers and, consequently, do not work with VirtualGL. If VGL_SAMPLES is > 0, then VirtualGL will attempt to create off-screen buffers with the specified number (or a greater number) of samples. This effectively forces the 3D application to render with the specified multisampling level, as if the application had explicitly passed attributes of GLX_SAMPLES,{s} to glXChooseVisual() or EGL_SAMPLES,{s} to eglChooseConfig(). If VGL_SAMPLES is 0, then VirtualGL forces multisampling to be disabled, even if the 3D application explicitly tries to enable it.
Multisampling cannot be used with Pixmap rendering. Any application that uses Pixmap rendering will fail if VGL_SAMPLES
is set to a value other than 0.
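A brief sketch of forcing a multisampling level from the command line (the application name is a placeholder):

vglrun -ms 4 my3dapp    # request off-screen buffers with at least 4 samples
vglrun -ms 0 my3dapp    # force multisampling off, even if the application tries to enable it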
Environment Variable | VGL_SPOIL = 0 | 1 |
vglrun argument |
-sp / +sp |
Summary | Disable/enable frame spoiling |
Image Transports | VGL, X11, XV, Custom (if supported) |
Default Value | Enabled |
Environment Variable | VGL_SPOILLAST = 0 | 1 |
Summary | Disable/enable “spoil last” frame spoiling algorithm for frames triggered by glFlush() |
Image Transports | VGL, X11, XV, Custom (if supported) |
Default Value | Enabled |
When frame spoiling is enabled and the image transport is busy transporting a frame, a newly-rendered frame triggered by glXSwapBuffers() is normally promoted to the head of the queue, and the rest of the frames in the queue are “spoiled” (discarded.) This algorithm, called “spoil first”, ensures that when a frame is actually transported (rather than spoiled), the transported frame will be the most recently rendered frame. However, this algorithm requires that VirtualGL read back every frame that the application renders, even if the frame is ultimately discarded.

Some 3D applications call glFlush() many thousands of times per frame while rendering to the front buffer. Thus, VirtualGL’s default behavior is to use a different spoiling algorithm, “spoil last”, to process frames triggered by glFlush() calls. “Spoil last” discards the most recently rendered frame if the image transport is busy. Thus, the only frames that are read back are the frames that are actually transported. However, there is no guarantee in this case that the transported frame will be the most recently rendered frame, so applications that perform front buffer rendering and call glFlush() in response to an interactive operation may not display properly. For such applications, setting the VGL_SPOILLAST environment variable to 0 prior to launching the application with vglrun will cause the “spoil first” algorithm to be used for all frame triggers, including glFlush(). This should fix the display problem, at the expense of increased load on the GPU (because VirtualGL is now reading back the rendered frame every time glFlush() is called.) See Application Recipes for a list of 3D applications that are known to require this.
Environment Variable | VGL_STEREO = left | right | quad | rc | gm | by | i | tb | ss |
vglrun argument |
-st left | right | quad | rc | gm | by | i | tb | ss |
Summary | Specify the delivery method for stereo frames |
Image Transports | All |
Default Value | quad |
left = When a 3D application renders a stereo frame, read back and transport only the left eye buffer
right = When a 3D application renders a stereo frame, read back and transport only the right eye buffer
quad = Attempt to use quad-buffered stereo, which will result in a pair of images being transported for every rendered frame. Using quad-buffered stereo requires the VGL Transport (or a transport plugin that can handle stereo image pairs.) Using quad-buffered stereo with the VGL Transport also requires that the 2D X server support OpenGL and be connected to a GPU that supports stereo rendering. The 2D X server should additionally be configured to export stereo visuals. Quad-buffered stereo is not supported when using the VGL Transport with YUV encoding. If quad-buffered stereo is requested but the transport or the client does not support it, then VirtualGL will fall back to using Red/Cyan (anaglyphic) stereo.
rc = Use Red/Cyan (anaglyphic) stereo, even if quad-buffered stereo is available
gm = Use Green/Magenta (anaglyphic) stereo, even if quad-buffered stereo is available
by = Use Blue/Yellow (anaglyphic) stereo, even if quad-buffered stereo is available
i = Use Interleaved (passive) stereo, even if quad-buffered stereo is available
tb = Use Top/Bottom (passive) stereo, even if quad-buffered stereo is available
ss = Use Side-by-Side (passive) stereo, even if quad-buffered stereo is available
Environment Variable | VGL_SUBSAMP = gray | 1x | 2x | 4x | 8x | 16x |
vglrun argument |
-samp gray | 1x | 2x | 4x | 8x | 16x |
Summary | Specify the level of chrominance subsampling in the JPEG compressor |
Image Transports | VGL (JPEG), Custom (if supported) |
Default Value | 1x |
In the digital world, the terms “YCbCr” and “YUV” are often used interchangeably. Per the convention of the image processing and digital video communities, we use “YCbCr” when discussing JPEG compression and “YUV” when discussing video formats, but they are really the same thing.
1x = no chrominance subsampling
2x = discard the chrominance components for every other pixel along the image’s X direction (this is also known as “4:2:2” or “2:1” subsampling.) All else being equal, 2x subsampling generally reduces the image size by about 20-25% when compared to no subsampling.
4x = discard the chrominance components for every other pixel along both the X and Y directions of the image (this is also known as “4:2:0” or “2:2” subsampling.) All else being equal, 4x subsampling generally reduces the image size by about 35-40% when compared to no subsampling.
8x = discard the chrominance components for 3 out of every 4 pixels along the image’s X direction and half the pixels along the image’s Y direction (this is also known as “4:1:0” or “4:2” subsampling.) This option is available only when using an image transport plugin that supports it.
16x = discard the chrominance components for 3 out of every 4 pixels along both the X and Y directions of the image (this is also known as “4:4” subsampling.) This option is available only when using an image transport plugin that supports it.
gray = discard all chrominance components. This is useful when running 3D applications (such as medical visualization applications) that are already generating grayscale images.
If using an image transport plugin, then this setting need not necessarily correspond to JPEG chrominance subsampling. How the plugin responds to the VGL_SUBSAMP
option is implementation-specific.
Environment Variable | VGL_SYNC = 0 | 1 |
vglrun argument |
-sync / +sync |
Summary | Disable/enable strict 2D/3D synchronization |
Image Transports | VGL, X11, XV, Custom (if supported) |
Default Value | Disabled |
Some 3D applications use XGetImage() or other X11 functions to obtain a bitmap of the pixels that were rendered by OpenGL. Enabling VGL_SYNC is a somewhat extreme measure that may be needed to make such applications display properly with VirtualGL. It was developed initially as a way to pass the GLX conformance suite (conformx, specifically), but at least one commercial application is known to require it as well (see Application Recipes.) When VGL_SYNC is enabled, every call to a frame trigger function will cause VirtualGL to read back the rendered frame and synchronously draw it into the 3D application’s window using the X11 Transport with no frame spoiling. The frame trigger function will not return control to the 3D application until VirtualGL has verified that the rendered frame has been composited into the application’s window. Therefore, this mode can have potentially dire effects on performance when used with a 2D X server on a machine other than the VirtualGL server. It is strongly recommended that VGL_SYNC be used only in conjunction with an X proxy running on the VirtualGL server.
If an image transport plugin is being used, then VirtualGL does not automatically enable the X11 Transport or disable frame spoiling when VGL_SYNC
is set. This allows the plugin to handle synchronous image delivery as it sees fit (or to simply ignore this option.)
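Given the performance caveat above, a minimal sketch (the application name is a placeholder) would enable strict synchronization only within an X proxy session running on the VirtualGL server:

vglrun +sync my3dapp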
Environment Variable | VGL_TILESIZE = {t} |
Summary | {t} = the image tile size ({t} x {t} pixels) to use for multithreaded compression and interframe comparison (8 <= {t} <= 1024) |
Image Transports | VGL (JPEG, RGB), Custom (if supported) |
Default Value | 256 |
Rendered frames are divided into tiles of this size for interframe comparison and multithreaded compression, and the tiles are distributed among the compression threads (see VGL_NPROCS.)
Environment Variable | VGL_TRACE = 0 | 1 |
vglrun argument |
-tr / +tr |
Summary | Disable/enable tracing |
Image Transports | All |
Default Value | Disabled |
Environment Variable | VGL_TRANSPORT = {t} |
vglrun argument |
-trans {t} |
Summary | Use an image transport plugin |
Default Value | None |
Environment Variable | VGL_TRAPX11 = 0 | 1 |
Summary | Disable/enable VirtualGL’s X11 error handler |
Image Transports | All |
Default Value | Disabled |
Enabling the VGL_TRAPX11 option causes VirtualGL to install its own X11 error handler, which prints a warning message but allows the application to continue running.
Environment Variable | VGL_VERBOSE = 0 | 1 |
vglrun argument |
-v / +v |
Summary | Disable/enable verbose VirtualGL messages |
Image Transports | All |
Default Value | Disabled |
Environment Variable | VGL_WM = 0 | 1 |
vglrun argument |
-wm / +wm |
Summary | Disable/enable window manager mode |
Image Transports | All |
Default Value | Disabled |
Environment Variable | VGL_X11LIB = {l} |
Summary | {l} = the location of an alternate X11 library |
Image Transports | All |
You can use the VGL_X11LIB environment variable to specify the path of a dynamic library from which VirtualGL should load “real” X11 functions.
Environment Variable | VGL_XCBLIB = {l} |
Summary | {l} = the location of an alternate XCB library |
Image Transports | All |
Default Value | libxcb.so.1 in the system library path |
Environment Variable | VGL_XCBATOMLIB = {l} |
Summary | {l} = the location of an alternate xcb-atom library |
Image Transports | All |
Default Value | libxcb-atom.so.0 or libxcb-atom.so.1 in the system library path |
Environment Variable | VGL_XCBGLXLIB = {l} |
Summary | {l} = the location of an alternate xcb-glx library |
Image Transports | All |
Default Value | libxcb-glx.so.0 in the system library path |
Environment Variable | VGL_XCBKEYSYMSLIB = {l} |
Summary | {l} = the location of an alternate xcb-keysyms library |
Image Transports | All |
Default Value | libxcb-keysyms.so.0 or libxcb-keysyms.so.1 in the system library path |
Environment Variable | VGL_XCBX11LIB = {l} |
Summary | {l} = the location of an alternate X11-xcb library |
Image Transports | All |
Default Value | libX11-xcb.so.1 in the system library path |
Environment Variable | VGL_XVENDOR = {v} |
Summary | {v} = a fake X11 vendor string to return when the 3D application calls XServerVendor() or ServerVendor() |
Image Transports | All |
These settings control the VirtualGL Client, which is used only with the
VGL Transport. vglclient
is normally launched
automatically from vglconnect
and should not require any
further configuration except in exotic circumstances. These settings
are meant only for advanced users or those wishing to build additional
infrastructure around VirtualGL.
Environment Variable | VGLCLIENT_DRAWMODE = ogl | x11 |
vglclient argument |
-gl / -x |
Summary | Specify the API used to composite the rendered frames into the 3D application’s windows |
Default Value | x11 |
Environment Variable | VGLCLIENT_IPV6 = 0 | 1 |
vglclient argument |
-ipv6 |
Summary | Disable/enable IPv6 sockets |
Default Value | Disabled |
Environment Variable | VGLCLIENT_PORT = {p} |
vglclient argument |
-port {p} |
Summary | {p} = TCP port on which to listen for connections from the VirtualGL Faker |
Default Value | Automatically select a free port |
Environment Variable | VGL_PROFILE = 0 | 1 |
Summary | Disable/enable profiling output |
Default Value | Disabled |
Environment Variable | VGL_VERBOSE = 0 | 1 |
Summary | Disable/enable verbose VirtualGL messages |
Default Value | Disabled |