renderer/glimp: new GL detection and selection code #478
Conversation
Note: on this page of the Khronos wiki I have read:
And in that code, I never destroy a window when destroying a context before creating another one. The thing is, with SDL, you create a context from a window, so at no time do we set the pixel format when creating the window; it is set when creating the context from a window that already exists. So I would like to see some Windows users test this code on Windows. Assuming Windows would just create a 24-bit window by default, it's also possible that trying to create a 16-bit context on a 24-bit window will just fail, which would be very OK since the next attempt (24-bit context) will succeed. But better to test it on Windows.
What is the motivation for reusing an already created SDL window?
I still have not reviewed the “rewrite the original GL selection code” commit.

One thing I would like to see is that code being tested on Windows. And if it's still possible to set a 16-bit display on Windows 10, tested on a 16-bit Windows display as well.
To avoid the engine cycling through creating, destroying, and creating a window again when testing various configurations, potentially triggering window manager animations. Before this PR we would have tested 4 configurations in the worst case; after this PR the code is expected to test all configurations that work until the highest is found, so we may not want to create and destroy 10 windows in a row… We need a window already created to create a GL context, and we need to create a context to test a configuration, but we don't need to create a new window each time we create a GL context to test if such a context is valid.
That push only fixed the unnecessary glExtendedValidation variable.
This may not be enough to make the game run on such hardware anyway. The Mesa i915 driver for GMA Gen 3 disabled GL 2.1 on such hardware to force Google Chrome to use its CPU fallback, which was faster, but we don't implement such a fallback. See https://gitlab.freedesktop.org/mesa/mesa/-/commit/a1891da7c865c80d95c450abfc0d2bc49db5f678

Only Mesa i915 on Linux supports GL 2.1 for GMA Gen 3, so no similar tweak is required for Windows and macOS.

Enabling those options at least makes the engine properly report a missing extension instead of a missing GL version; for example the Intel GMA 3100 G33 (Gen 3) will report a missing GL_ARB_half_float_vertex extension instead of a missing OpenGL 2.1 version.

The GMA 3150 is known to have wider OpenGL support than the GMA 3100; for example it has an OpenGL version similar to GMA 4 on Windows while being a GMA 3, so the list of available GL extensions may be larger.

Thanks papap and his LanPower association for the kind report and availability for testing. http://asso.lanpower.free.fr
sdl_glimp,tr_init: rewrite the original GL selection code, improve gfxinfo

This commit squashes multiple commits by illwieckz and slipher. See #478

Co-authored-by: Thomas “illwieckz” Debesse <[email protected]>
Co-authored-by: slipher <[email protected]>

== Squashed commits by illwieckz

:: sdl_glimp: rewrite the original GL selection code

- Detect best configuration possible.
- Try custom configuration if it exists.
- If no custom configuration or if it fails, load the recommended configuration if possible (OpenGL 3.2 core) or the best one available.
- Reuse window and context when possible.
- Display meaningful popup telling user OpenGL version is too low or required extensions are missing when that happens.
- Rely on more return codes for GLimp_SetMode().
- Test for negative SDL_Init return value, not just -1.

:: sdl_glimp,tr_init: do not test all OpenGL versions unless r_glExtendedValidation is set

When r_glExtendedValidation is enabled, the engine tests if OpenGL versions higher than 3.2 core are supported for logging and diagnostic purposes, but still requests 3.2 core anyway. Some drivers may provide more than 3.2 when requesting 3.2, this is not our fault.

:: sdl_glimp,tr_init: rewrite logging, improve gfxinfo

- Move GL query for logging purpose from tr_init to sdl_glimp.
- Do not split MODE log message.
- Unify some logs.
- Add more debug log when building GL extension list.
- Also increase the extensions_string length to not truncate the string: 4096 is not enough, there can be more than 6000 characters on an OpenGL 4.6 driver.
- Also log missing extensions to make gfxinfo more useful.
- Rewrite gfxinfo in a more useful way.
- List enabled and missing GL extensions.

:: sdl_glimp: silence the GL error when querying if context is core on non-core implementation

Silence the error that may happen when querying if the OpenGL context uses core profile when core profile is not supported by the OpenGL implementation to begin with. For example this may happen on implementations not supporting higher than OpenGL 2.1, while forcing OpenGL 2.1 on implementations supporting higher versions including core profiles may not raise an error.

:: sdl_glimp: make GLimp_StartDriverAndSetMode only return true on RSERR_OK

Only return true on OK, don't return true on unknown errors.

:: sdl_glimp: catch errors from GLimp_DetectAvailableModes to prevent further segfault

It may be possible to create a valid context that is unusable. For example the 3840×2160 resolution is too large for the Radeon 9700, and the Mesa r300 driver may print this error when the requested resolution is higher than what is supported by hardware:

> r300: Implementation error: Render targets are too big in r300_set_framebuffer_state, refusing to bind framebuffer state!

It will unfortunately return a valid but unusable context that will make the engine segfault when calling GL_SetDefaultState().

:: sdl_glimp: flag fullscreen window as borderless when borderless is enabled

Otherwise the window will be bordered when leaving fullscreen while the borderless option is enabled.

:: sdl_glimp,tr_init: remove unused depthBits

:: sdl_glimp: remove SDL_INIT_NOPARACHUTE

In GLimp_StartDriverAndSetMode() the SDL_INIT_NOPARACHUTE flag was removed from SDL_Init( SDL_INIT_VIDEO ) as this flag is now ignored, see https://wiki.libsdl.org/SDL_Init

== Squashed commits by slipher

:: Better type safety in GL detection code

:: Simplify GL_ValidateBestContext duplicate loop

:: GLimp_SetMode - simplify error handling

:: Rework GL initialization

Have GLimp_ValidateBestContext() validate both the highest-numbered available context (if r_glExtendedValidation is enabled) and the highest context that we actually want to use (at most 3.2). This means most of the code in GLimp_ApplyPreferredOptions can be removed because it was duplicating the knowledge about version preferences and the code for instantiating them in GLimp_ValidateBestContext.

:: Also cut down on other code duplication.

:: Remove dead cvar r_stencilBits

:: Fix glConfig.colorBits logging

- Show actual, not requested.
- Don't log it twice at notice level.
I added a new commit setting some environment variables to enable GL 2.1 on Intel GMA Gen 3 on Linux. The Mesa i915 driver for GMA Gen 3 disabled GL 2.1 on such hardware to force Google Chrome to use its CPU fallback, which was faster, but we don't implement such a fallback. See https://gitlab.freedesktop.org/mesa/mesa/-/commit/a1891da7c865c80d95c450abfc0d2bc49db5f678

Only Mesa i915 on Linux supports GL 2.1 for GMA Gen 3, so no similar tweak is required for Windows and macOS. Mesa i915 and macOS also support GL 2.1 on GMA Gen 4 while Windows drivers don't, but those tweaks are not required there as the related features are enabled by default. The first Intel hardware range expected to have drivers supporting GL 2.1 on Windows is GMA Gen 5.

Enabling those options at least makes the engine properly report a missing extension instead of a missing GL version; for example the Intel GMA 3100 G33 (Gen 3) will report a missing GL_ARB_half_float_vertex extension instead of a missing OpenGL 2.1 version.

The GMA 3150 is known to have wider OpenGL support than the GMA 3100; for example it has an OpenGL version similar to GMA 4 on Windows while being a GMA 3, so the list of available GL extensions may be larger. Though I have not gotten my hands on one.

Thanks papap and his LanPower association ( http://asso.lanpower.free.fr ) for the kind report and availability for testing.
I rewrote the commit “new GL detection and selection code”; after some time I was not happy with the way I did it first. For example, I discovered that if a hypothetical driver had 3.3 support but not 2.1, the code may not test for 3.3. So now the code just tests every supported GL version, remembers the best supported one to use it, and does not stop on any unsupported version. The behaviour is now very similar to what glxinfo does.

Also, when a user has a too-old OpenGL version, the error message tells the user which old OpenGL version they have. I fixed a bug where the available OpenGL extension list was truncated.

The code only tests for GL versions existing to this day, so it will not test for a future GL 4.7 if one gets released one day, without patching this code. This is the way usual tools (like glxinfo) do it.

About specific commits:
Other commits are simple and small and can be reviewed as usual. On Linux with Mesa, one may want to try various Mesa environment variables to trigger the different code paths.
I noticed the engine crashes when overriding a GL version that does not exist.
I consider the PR ready. I tested it successfully on Linux and macOS. I rewrote the first post.
I pushed one commit to the branch improving type safety which I made for my own reviewing benefit. Feel free to squash if needed.
I tested on Windows and didn't find anything obviously broken.
```cpp
if ( glConfig2.glForwardCompatibleContext )
{
	logger.Debug( "Provided OpenGL core context is forward compatible." );
```
This is logged in another place already.
Unless I'm missing something, the other place is in gfxinfo, and the engine may stop before the gfxinfo call is reached.
src/engine/sys/sdl_glimp.cpp
```cpp
	return rserr_t::RSERR_OLD_GL;
}

static rserr_t GLimp_ApplyCustomOptions( const int GLEWmajor, const bool fullscreen, const bool bordered, const glConfiguration &bestConfiguration, glConfiguration &requestedConfiguration, bool &customOptions )
```
It seems better for this function to only return a new configuration struct. The caller can do window/context creation calls.
Also, can you just use operator!= between the bestConfiguration and the customized options to get the answer to whether custom options were applied?
I'm not sure I get how it would be better. I'm in favor of merging this as it works, and maybe evaluating more improvements later. This code is not wrong.
I tried to remove a lot of code duplication, PTAL. Most of these commits can probably be squashed.
OK thanks, from what I see it looks good.

One comment though, on the “unnecessary glExtendedValidation variable”: whatever the way it is implemented, I want to be sure modifying r_glExtendedValidation is properly taken into account.
Is there some reason to do extended validation from scratch every time instead of only the first time it is requested?
I'm not sure I get the question. Do you mean you have the idea of resetting the cvar to zero once the extended validation is done?
No. I'm doing extended validation only the first time the cvar is turned on, rather than every time it is changed.
I have two needs:

If I remember correctly, my initial code was redoing non-extended validation from scratch on vid_restart. There may be other ways to fulfill my needs, but at least this one had predictable behaviour after a vid_restart.
In my version, the max extended version no longer flows into the rest of the program, as the normal max version is stored in a separate variable. The max extended version is only used for the diagnostic log statements. The validation function always populates the normal max version, and additionally populates the extended max version if the cvar is on. Both of these persist across vid_restarts. The validation function is called if the normal max version is not yet known, or if the cvar is on and the extended max version is not yet known.
Ah OK, I missed that; it looks like that does the job. So once the max version is known, it's known until the end of the program, whatever the current value of the cvar. That's good, it looks like it fulfills all my needs.
@slipher I merged your fixes and squashed them, and added some more. LGTM.
@slipher given this PR now features the very latest changes you wanted to see over mine, I assume you're OK with the code in its current state. Do you have any other comments before I merge?
I missed a couple of chances to use the ContextDescription function in GLimp_CreateContext and with the Sys::Error "GLEW initialization failed". LGTM
Well, I'm not sure I entirely get it, but it's always possible for someone to add another patch on top of this work one day. Thanks a lot for the deep review(s), the precious advice, and the patience!
Update: 2022-02-16
This PR was low priority (the existing code works), but it fixes many bugs and is now highly wanted.

The effort was initially motivated by #477 to provide new GL selection code that first detects the highest GL version supported before trying any custom configuration, and, if there is no custom configuration (or it fails), loads the highest GL version possible. This way /gfxinfo displays the highest GL version supported by the card (the Nvidia proprietary driver rewrites the GL and GLSL version strings with the currently requested GL version).

With the default configuration, the code only tries OpenGL 3.2 core and 2.1 compatibility for success (and all versions below on error, to provide the user a meaningful error message). By setting r_glExtendedValidation to 1, a user can also detect (at the cost of a slower startup) the best OpenGL profile their hardware and software support, by iterating over all known OpenGL versions, which may produce more useful logs on demand.

Subsequent vid_restart calls are also now faster, by skipping validation tests that were already done before. The window and context are also reused more when possible.

Some bugs were also fixed in the meantime, like the fullscreen window not being flagged as borderless while the borderless option is enabled, meaning switching from fullscreen to windowed would give the user a bordered window instead of a borderless one.
Most error messages are now more meaningful.
Diving into the code and rewriting it also uncovered some unused variables that were not detected as unused because of the code's convolution. Unless I made a mistake, the values of r_ext_multisample and r_alphabits appear to be unused. The first one may explain why I haven't seen multisampling working in years… Some bugs were also caught and fixed, including crashes. In the end, this PR is more about making the code more robust and fixing bugs than improving logging; logging is just made better in the process (and it is now easier to make it better).

An extra commit enables extra compatibility in Intel drivers for some Intel cards on Linux (at this point, it helps to have better, more meaningful error messages).
About logging, it looks like there is no way to know the available GL profiles and versions without testing all of them. Even glxinfo does it. This is because you need to set the version to create a context, and you need the context to be created to query the version… Hence the need to loop over every GL version and profile to know which ones are the best supported. 🤦‍♀️

sdl_glimp,tr_init: rewrite the original GL selection code, improve gfxinfo
This commit squashes multiple commits by illwieckz and slipher.
Squashed commits by illwieckz
sdl_glimp: rewrite the original GL selection code

- Detect best configuration possible.
- Try custom configuration if it exists.
- If no custom configuration or if it fails, load the recommended configuration if possible (OpenGL 3.2 core) or the best one available.
- Reuse window and context when possible.
- Display meaningful popup telling user OpenGL version is too low or required extensions are missing when that happens.
- Rely on more return codes for GLimp_SetMode().
- Test for negative SDL_Init return value, not just -1.
sdl_glimp,tr_init: do not test all OpenGL versions unless r_glExtendedValidation is set
When r_glExtendedValidation is enabled, the engine
tests if OpenGL versions higher than 3.2 core
are supported for logging and diagnostic purposes,
but still requests 3.2 core anyway. Some drivers
may provide more than 3.2 when requesting 3.2, this
is not our fault.
sdl_glimp,tr_init: rewrite logging, improve gfxinfo

- Move GL query for logging purpose from tr_init to sdl_glimp.
- Do not split MODE log message.
- Unify some logs.
- Add more debug log when building GL extension list.
- Also increase the extensions_string length to not truncate the string: 4096 is not enough, there can be more than 6000 characters on an OpenGL 4.6 driver.
- Also log missing extensions to make gfxinfo more useful.
- Rewrite gfxinfo in a more useful way.
- List enabled and missing GL extensions.
sdl_glimp: silence the GL error when querying if context
is core on non-core implementation

Silence the error that may happen when querying if the
OpenGL context uses core profile when core profile is not
supported by the OpenGL implementation to begin with.
For example this may happen on implementations not supporting
higher than OpenGL 2.1, while forcing OpenGL 2.1 on
implementations supporting higher versions including
core profiles may not raise an error.
sdl_glimp: make GLimp_StartDriverAndSetMode only return true on RSERR_OK

Only return true on OK, don't return true on unknown errors.
sdl_glimp: catch errors from GLimp_DetectAvailableModes to prevent further segfault
It may be possible to create a valid context that is unusable.
For example the 3840×2160 resolution is too large for the
Radeon 9700 and the Mesa r300 driver may print this error
when the requested resolution is higher than what is supported
by hardware:

> r300: Implementation error: Render targets are too big in r300_set_framebuffer_state, refusing to bind framebuffer state!
It will unfortunately return a valid but unusable context that will
make the engine segfault when calling GL_SetDefaultState().
sdl_glimp: flag fullscreen window as borderless when borderless is enabled
Flag fullscreen window as borderless when borderless is enabled
otherwise the window will be bordered when leaving fullscreen
while the borderless option would be enabled.
sdl_glimp,tr_init: remove unused depthBits
sdl_glimp: remove SDL_INIT_NOPARACHUTE
In GLimp_StartDriverAndSetMode() the SDL_INIT_NOPARACHUTE flag was
removed from SDL_Init( SDL_INIT_VIDEO ) as this flag is now
ignored, see https://wiki.libsdl.org/SDL_Init
Squashed commits by slipher
Better type safety in GL detection code
Simplify GL_ValidateBestContext duplicate loop
GLimp_SetMode - simplify error handling
Rework GL initialization
Have GLimp_ValidateBestContext() validate both the highest-numbered
available context (if r_glExtendedValidation is enabled), and the
highest context that we actually want to use (at most 3.2). This means
most of the code in GLimp_ApplyPreferredOptions can be removed because
it was duplicating the knowledge about version preferences and the code
for instantiating them in GLimp_ValidateBestContext.
Also cut down on other code duplication.
Remove dead cvar r_stencilBits
Fix glConfig.colorBits logging

- Show actual, not requested.
- Don't log it twice at notice level.
system: enable GL 2.1 on Intel Gen 3 hardware, <3 papap
This may not be enough to make the game run on such
hardware anyway.
The Mesa i915 driver for GMA Gen 3 disabled GL 2.1 on such
hardware to force Google Chrome to use its CPU fallback
that was faster but we don't implement such fallback.
See https://gitlab.freedesktop.org/mesa/mesa/-/commit/a1891da7c865c80d95c450abfc0d2bc49db5f678
Only Mesa i915 on Linux supports GL 2.1 for GMA Gen 3,
so there is no similar tweak being required for Windows
and macOS.
Enabling those options would at least make the engine
properly report missing extension instead of missing
GL version, for example the Intel GMA 3100 G33 (Gen 3)
will report missing GL_ARB_half_float_vertex extension
instead of missing OpenGL 2.1 version.
The GMA 3150 is known to have wider OpenGL support than
GMA 3100, for example it has OpenGL version similar to
GMA 4 on Windows while being a GMA 3 so the list of
available GL extensions may be larger.
Thanks papap and his LanPower association for the kind
report and availability for testing.
http://asso.lanpower.free.fr