Fixes an issue where primitives such as text could end up with sharp
edges when creating temporary contexts:
void paint (Graphics& g)
{
    g.fillAll (Colours::white);

    // Create a temporary image using the renderer's preferred image type,
    // and briefly attach a Graphics context to it.
    const auto preferredType = g.getInternalContext().getPreferredImageTypeForTemporaryImages();
    Image img (Image::ARGB, getWidth(), getHeight(), false, *preferredType);

    {
        Graphics g2 (img);
    }

    // Previously, text drawn after the temporary context had been destroyed
    // could end up with sharp edges.
    g.setColour (Colours::black);
    g.setFont (32);
    g.drawText ("test", getLocalBounds(), Justification::centred);
}
A change introduced in 00836d1e94 meant
that GL renderers could sometimes assert in
StateHelpers::ActiveTextures::bindTexture() when ending a transparency
layer.
Specifically, the issue was provoked by adding the ScopedTextureBinding
in the constructor of OpenGLFrameBuffer. This reset the bound texture
after creating a new transparency layer.
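Roughly speaking, a scoped binding of this kind records whichever texture is bound when it is constructed and restores it when it goes out of scope. The sketch below is an assumption about its general shape, not the actual JUCE class:

struct ScopedTextureBindingSketch
{
    // Hypothetical helper: remember the current 2D texture binding...
    ScopedTextureBindingSketch()     { glGetIntegerv (GL_TEXTURE_BINDING_2D, &previous); }

    // ...and restore it when the scope ends.
    ~ScopedTextureBindingSketch()    { glBindTexture (GL_TEXTURE_2D, (GLuint) previous); }

    GLint previous = 0;
};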
I think the previous implementation worked by accident, not by design.
It so happens that when rendering multiple transparency layers within
the same frame (i.e. calling begin/end several times in that order,
*not* nesting the calls), the same texture ID will generally get reused.
From the graphics context's (GC's) perspective, we create a texture with
ID "2", then call bindTexture() to bind it, and the GC remembers that
this ID is bound. We draw the frame, and the texture gets destroyed. The
call to glDeleteTextures() will cause the default texture, "0", to be
bound if the texture being destroyed is bound at the point of
destruction. So, after the texture is destroyed, the GC's stored binding
no longer reflects reality, since texture "0" is now bound.
The next time we create a texture, that texture also gets created with
ID "2". Previously, we would leave this texture bound after
construction, but now we re-bind the previously-bound texture, "0". This
causes the assertion in bindTexture() to fire when we next attempt to
bind texture "2", since the actual bound texture is "0".
The solution added in this patch will bind texture "0" when the
transparency layer image is destroyed, in order to keep the GC's view of
the GL context consistent with the real state.
Fixes a regression introduced in bd26d79b17.
The issue was observed in the DemoRunner when enabling the OpenGL
renderer and then switching to the LookAndFeel V1.
The cause of the problem was the creation of a secondary OpenGL-backed
Graphics instance in the DropShadowEffect. This temporary context could
modify the OpenGL context state without restoring the state
appropriately on destruction. As a result, when the outer long-lived
OpenGL context resumed drawing, properties such as the viewport, bound
shader, shader uniform values, and bound framebuffer could all be
incorrect.
Previously, code such as the following (where MyGLComponent is rendering
using an OpenGLContext) could result in GL errors, as the destruction of
the inner context unbound the array buffer and element array buffer
after use, instead of setting them to the previous values set up by the
outer context.
Additionally, a VAO was only set up in the OpenGLContext, so rendering
into a GL-backed LowLevel graphics context could fail if no VAO was
bound.
void MyGLComponent::paint (juce::Graphics& g)
{
    const auto width  = getWidth();
    const auto height = getHeight();

    // Render into a GL-backed image via a short-lived inner context.
    juce::Image image { juce::Image::PixelFormat::ARGB, width, height, false, juce::OpenGLImageType() };

    {
        juce::Graphics innerContext { image };
        // draw into innerContext
    }

    // Draw the result with the outer, long-lived context.
    g.drawImage (image, juce::Rectangle<float> { (float) width, (float) height });
}
The OpenGL renderer uses ColourGradient::createLookupTable to generate
gradient textures. However, the tweening method it used differed from the
tweening used by CoreGraphics gradients and by the software renderer.
Gradient tweening is now computed using non-premultiplied colours, ensuring
that gradients rendered with OpenGL are consistent with those produced by the
other renderers.
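The difference is easiest to see with a gradient stop that is fully transparent. The standalone sketch below (not JUCE code, and not the actual lookup-table implementation) compares interpolating straight, non-premultiplied colours against interpolating premultiplied ones:

#include <cstdio>

struct RGBA { float r, g, b, a; };   // components in the range [0, 1]

// Interpolate the non-premultiplied components directly.
static RGBA lerpStraight (RGBA c1, RGBA c2, float t)
{
    auto mix = [t] (float x, float y) { return x + (y - x) * t; };
    return { mix (c1.r, c2.r), mix (c1.g, c2.g), mix (c1.b, c2.b), mix (c1.a, c2.a) };
}

// Premultiply, interpolate, then un-premultiply.
static RGBA lerpPremultiplied (RGBA c1, RGBA c2, float t)
{
    auto mix = [t] (float x, float y) { return x + (y - x) * t; };
    const RGBA p1 { c1.r * c1.a, c1.g * c1.a, c1.b * c1.a, c1.a };
    const RGBA p2 { c2.r * c2.a, c2.g * c2.a, c2.b * c2.a, c2.a };
    const RGBA p  { mix (p1.r, p2.r), mix (p1.g, p2.g), mix (p1.b, p2.b), mix (p1.a, p2.a) };
    const auto invA = p.a > 0.0f ? 1.0f / p.a : 0.0f;
    return { p.r * invA, p.g * invA, p.b * invA, p.a };
}

int main()
{
    const RGBA opaqueRed        { 1.0f, 0.0f, 0.0f, 1.0f };
    const RGBA transparentWhite { 1.0f, 1.0f, 1.0f, 0.0f };

    // Straight interpolation keeps a white tint at the midpoint
    // (1.0, 0.5, 0.5, a = 0.5), whereas premultiplied interpolation
    // stays pure red (1.0, 0.0, 0.0, a = 0.5).
    const auto s = lerpStraight      (opaqueRed, transparentWhite, 0.5f);
    const auto p = lerpPremultiplied (opaqueRed, transparentWhite, 0.5f);

    std::printf ("straight:      %.2f %.2f %.2f %.2f\n", s.r, s.g, s.b, s.a);
    std::printf ("premultiplied: %.2f %.2f %.2f %.2f\n", p.r, p.g, p.b, p.a);
}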
- 4.1 and 4.3 contexts can now be requested (see the sketch after this list)
- The requested context version is no longer ignored on Linux
- Debugging contexts are now enabled in Debug builds with GL 4.3
- Fixes a bug where glEnable(GL_TEXTURE_2D) was called in core profiles
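A hedged sketch of requesting one of the new versions. The enum value names used below (openGL4_1 and openGL4_3 on the existing OpenGLContext version enum) are assumed to follow the existing openGL3_2 naming and are not confirmed by this note:

juce::OpenGLContext context;

// Ask for a 4.3 context; in a Debug build a debug context would then also be
// requested, as described above.
context.setOpenGLVersionRequired (juce::OpenGLContext::openGL4_3);

// someComponent is a placeholder for any visible juce::Component.
context.attachTo (someComponent);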
Previously, it wasn't safe to access Font instances from multiple
threads because there was a chance that they might reference the same
shared internal state. In this case, calling getTypeface() or getAscent() from
two threads simultaneously would cause a race on the typeface and ascent
data members, even though the Font instances appeared to be disjoint.
With this change in place, it is now safe to use Font instances from
multiple threads simultaneously.
It is still an error to modify the same Font instance from multiple
threads without synchronization!
// Fine:
Font a;
Font b = a;
auto futureA = std::async (std::launch::async, [&a] { /* do something with a */ });
auto futureB = std::async (std::launch::async, [&b] { /* do something with b */ });
// Bad idea:
Font f;
auto futureA = std::async (std::launch::async, [&f] { /* do something with f */ });
auto futureB = std::async (std::launch::async, [&f] { /* do something with f */ });