GPU drawing using ShaderEffects in QtQuick

A ShaderEffect is a QML item that takes a GLSL shader program, allowing applications to render using the GPU directly. Using only property values as input, as with the Canvas in our previous article, we will show how a ShaderEffect can be used to generate a different kind of visual content, with even better performance. We will also see how the fluidity it provides can be used in user interface designs, again taking Google's Material Design as a concrete example.

Quick introduction

The fragment (pixel) shader

This can be a difficult topic, but all you need to know for now is that correctly typed QML properties end up in your shader's uniform variables of the same name, and that the default vertex shader will output (0, 0) into the qt_TexCoord0 varying variable for the top-left corner and (1, 1) for the bottom-right. Since the vertex shader outputs are interpolated into the fragment shader inputs, each fragment will receive a different qt_TexCoord0 value, ranging from (0, 0) to (1, 1). Our rectangular geometry is rasterized by running the fragment shader once for every viewport pixel it intersects, and the output value of gl_FragColor is then blended onto the window according to its alpha value.
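As a rough illustration of the interpolation (plain Python, not shader code; the function name and pixel-center sampling convention are my own simplification):

```python
def tex_coord(px, py, width, height):
    """Approximate qt_TexCoord0 for the fragment covering pixel (px, py).

    Fragments are sampled at pixel centers, hence the + 0.5 before
    normalizing; (0, 0) maps to the top-left, (1, 1) to the bottom-right.
    """
    return ((px + 0.5) / width, (py + 0.5) / height)

# For a 512x128 item, the top-left fragment receives a value near (0, 0)...
print(tex_coord(0, 0, 512, 128))
# ...and the bottom-right fragment a value near (1, 1).
print(tex_coord(511, 127, 512, 128))
```

Each fragment thus knows nothing except its normalized position within the item, which is exactly the input the examples below build on.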

This article won't be talking about the vertex shader; the default one will do fine in our situation. I also encourage you to eventually read the available tutorials out there about shaders and the OpenGL pipeline if you want to write your own.

A basic example

import QtQuick 2.0
ShaderEffect {
    width: 512; height: 128
    property color animatedColor
    SequentialAnimation on animatedColor {
        loops: Animation.Infinite
        ColorAnimation { from: "#0000ff"; to: "#00ffff"; duration: 500 }
        ColorAnimation { from: "#00ffff"; to: "#00ff00"; duration: 500 }
        ColorAnimation { from: "#00ff00"; to: "#00ffff"; duration: 500 }
        ColorAnimation { from: "#00ffff"; to: "#0000ff"; duration: 500 }
    }

    blending: false
    fragmentShader: "
        varying mediump vec2 qt_TexCoord0;
        uniform lowp float qt_Opacity;
        uniform lowp vec4 animatedColor;

        void main() {
            // Set the RGBA channels of animatedColor as our fragment output
            gl_FragColor = animatedColor * qt_Opacity;

            // qt_TexCoord0 is (0, 0) at the top-left corner, (1, 1) at the
            // bottom-right, and interpolated for pixels in-between.
            if (qt_TexCoord0.x < 0.25) {
                // Set the green channel to 0.0, only for the left 25% of the item
                gl_FragColor.g = 0.0;
            }
        }"
}

This animates an animatedColor property through a regular QML animation. Any change to that property, through an animation or not, will automatically trigger an update of the ShaderEffect. Our fragment shader then directly writes that color to its gl_FragColor output, for all fragments. To show something slightly more elaborate than a plain rectangle, we clear the green component of some fragments based on their x position within the rectangle, leaving only the blue component to be animated in that area.
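The per-fragment logic is easy to reason about outside the GPU too. Here is a plain Python sketch (not shader code; names are mine) of what the shader computes for each fragment, assuming full opacity:

```python
def fragment(tex_x, animated_color):
    """Mimic the fragment shader above: output animatedColor, but clear
    the green channel for fragments in the left 25% of the item."""
    r, g, b, a = animated_color
    if tex_x < 0.25:
        g = 0.0
    return (r, g, b, a)

# With a cyan input, the left quarter of the item renders pure blue...
print(fragment(0.1, (0.0, 1.0, 1.0, 1.0)))
# ...while the remaining 75% keeps the animated color unchanged.
print(fragment(0.5, (0.0, 1.0, 1.0, 1.0)))
```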

Parallel processing and reduced shared states

One of the reasons that graphics hardware can deliver so much rendering power is that it offers no way to share or accumulate state between individual fragment draws. Uniform values are shared between all triangles in a GL draw call, and every per-fragment state first has to go through the vertex shader.

In the case of the ShaderEffect, this means that we are limited to qt_TexCoord0 to differentiate pixels. The drawing logic can only be based on that input, using mathematical formulas or texture sampling of an Image or a ShaderEffectSource.

Using it for something useful

Even though this sounds like trying to render something on a graphing calculator, some people achieve incredibly good-looking effects with those limited inputs. Have a look at Shadertoy to see what others are doing with equivalent APIs in WebGL.

Design and implementation

Knowing what we can do with it allows us to figure out ways of using this in GUIs to give smooth and responsive feedback to user interactions. Using Android's Material Design as a great example, let's try to implement a variant of their touch feedback visual effect.

This is what the implementation looks like. The rendering is more complicated, but the concept is essentially the same as in the simple example above. The fragment shader first sets the fragment to the hard-coded backgroundColor, calculates whether the current fragment is within our moving circle according to the normTouchPos and animated spread uniforms, and finally applies the ShaderEffect's opacity through the built-in qt_Opacity uniform:

import QtQuick 2.2
ShaderEffect {
    id: shaderEffect
    width: 512; height: 128

    // Properties that will get bound to a uniform with the same name in the shader
    property color backgroundColor: "#10000000"
    property color spreadColor: "#20000000"
    property point normTouchPos
    property real widthToHeightRatio: height / width
    // Our animated uniform property
    property real spread: 0
    opacity: 0

    ParallelAnimation {
        id: touchStartAnimation
        UniformAnimator {
            uniform: "spread"; target: shaderEffect
            from: 0; to: 1
            duration: 1000; easing.type: Easing.InQuad
        }
        OpacityAnimator {
            target: shaderEffect
            from: 0; to: 1
            duration: 50; easing.type: Easing.InQuad
        }
    }

    ParallelAnimation {
        id: touchEndAnimation
        UniformAnimator {
            uniform: "spread"; target: shaderEffect
            from: spread; to: 1
            duration: 1000; easing.type: Easing.OutQuad
        }
        OpacityAnimator {
            target: shaderEffect
            from: 1; to: 0
            duration: 1000; easing.type: Easing.OutQuad
        }
    }

    fragmentShader: "
        varying mediump vec2 qt_TexCoord0;
        uniform lowp float qt_Opacity;
        uniform lowp vec4 backgroundColor;
        uniform lowp vec4 spreadColor;
        uniform mediump vec2 normTouchPos;
        uniform mediump float widthToHeightRatio;
        uniform mediump float spread;

        void main() {
            // Pin the touched position of the circle by moving the center as
            // the radius grows. Both left and right ends of the circle should
            // touch the item edges simultaneously.
            mediump float radius = (0.5 + abs(0.5 - normTouchPos.x)) * 1.0 * spread;
            mediump vec2 circleCenter =
                normTouchPos + (vec2(0.5) - normTouchPos) * radius * 2.0;

            // Calculate everything according to the x-axis assuming that
            // the overlay is horizontal or square. Keep the aspect for the
            // y-axis since we're dealing with 0..1 coordinates.
            mediump float circleX = (qt_TexCoord0.x - circleCenter.x);
            mediump float circleY = (qt_TexCoord0.y - circleCenter.y) * widthToHeightRatio;

            // Use step to apply the color only if x2*y2 < r2.
            lowp vec4 tapOverlay =
                spreadColor * step(circleX*circleX + circleY*circleY, radius*radius);
            gl_FragColor = (backgroundColor + tapOverlay) * qt_Opacity;
        }"

    function touchStart(x, y) {
        normTouchPos = Qt.point(x / width, y / height)
        touchEndAnimation.stop()
        touchStartAnimation.start()
    }
    function touchEnd() {
        touchStartAnimation.stop()
        touchEndAnimation.start()
    }

    // Trigger the touches on a timer for this demo's purpose; in practice
    // we'd use a MouseArea.
    Timer { id: touchEndTimer; interval: 125; onTriggered: touchEnd() }
    Timer {
        running: true; repeat: true
        onTriggered: {
            touchStart(width*0.8, height*0.66)
            touchEndTimer.start()
        }
    }
}

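A quick way to convince yourself that the radius and center formulas in the shader do what their comment claims is to evaluate them outside the shader. This Python sketch (mirroring the GLSL arithmetic on the x-axis; the function name is mine) checks that at spread == 1.0 the circle reaches both item edges regardless of where the touch landed:

```python
def circle(norm_touch_x, spread):
    """Mirror the shader's x-axis math: the radius grows with spread,
    and the center moves away from the touch position so that both
    item edges are reached simultaneously at spread == 1.0."""
    radius = (0.5 + abs(0.5 - norm_touch_x)) * spread
    center_x = norm_touch_x + (0.5 - norm_touch_x) * radius * 2.0
    return center_x, radius

for touch_x in (0.1, 0.5, 0.8):
    center_x, radius = circle(touch_x, 1.0)
    # At full spread the circle covers the whole 0..1 range.
    assert center_x - radius <= 0.0 and center_x + radius >= 1.0
```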
Explicit animation control through start() and stop()

One particularity is that we control the Animations manually on input events instead of using states. This gives us the flexibility to stop animations immediately when changing states.

The mighty Animators

Some might have noticed the use of UniformAnimator and OpacityAnimator instead of a general NumberAnimation. The major difference between Animator and PropertyAnimation derived types is that animators won't report intermediate property values to QML; the property is only updated once the animation is over.

Property bindings or long IO operations on the main thread won't be able to get in the way of the render thread computing the next frame of the animation.

When using ShaderEffects, a UniformAnimator will provide the quickest rendering loop you can get. Once your declaratively prepared animation is initialized by the main thread and sent over to the QtQuick render thread to be processed, the render thread will take care of computing the next animation value in C++ and trigger an update of the scene, telling the GPU to use that new value through OpenGL.

Apart from the possibility of a few delayed animation frames caused by thread synchronization, Animators take the same input and behave just like other Animations.

Resource costs and performance

ShaderEffects are often associated with their resource-hungry sibling, the ShaderEffectSource, but when a ShaderEffect is used alone to generate visual content like we're doing here, it has very little overhead. Unlike the Canvas, ShaderEffect instances also don't each own an expensive framebuffer object. They can be instantiated in high quantities without having to worry about their cost. All instances of a QML Component having the same shader source string will use the same shader program, and all instances sharing the same uniform values will usually be batched into the same draw call. Otherwise the cost of a ShaderEffect instance is the little memory used by its vertices and the processing that they require on the GPU. The complexity of the shader itself is the most likely bottleneck you might hit.

Selectively enable blending

Blending requires extra work from the GPU and prevents batching of overlapping items. It also means that the GPU needs to render the fragments of the items hidden behind, which it could otherwise simply ignore using depth testing.

It is enabled by default to make ShaderEffects work out of the box, and it's up to you to disable it if you know that your shader will always output fully opaque colors. Note that a qt_Opacity < 1.0 will trigger blending automatically, regardless of this property. The simple example above disables it, but our translucent touch feedback effect needs to leave it enabled.
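To see the per-pixel work blending implies, here is a Python sketch of source-over blending with premultiplied alpha, the convention the shaders above follow by multiplying the whole color, alpha included, by qt_Opacity (the function name is mine):

```python
def blend(src, dst):
    """Source-over blending with premultiplied alpha:
    out = src + dst * (1 - src.alpha), applied to all four channels."""
    sr, sg, sb, sa = src
    return tuple(s + d * (1.0 - sa) for s, d in zip(src, dst))

# A fully opaque source completely replaces the destination...
print(blend((0.2, 0.4, 0.6, 1.0), (1.0, 1.0, 1.0, 1.0)))  # → (0.2, 0.4, 0.6, 1.0)
# ...but the destination pixel still had to be fetched and combined,
# which is exactly the work that disabling blending lets the GPU skip.
```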

Should I use it?

The ShaderEffect is simple and efficient, but in practice you might find that it's not always possible to do what you want with the default mesh and limited API available through QML.

Also note that using ShaderEffects requires OpenGL. Mesa llvmpipe supports them, and an OpenGL ES 2 shader will ensure compatibility with ANGLE on Windows, but you will need fallback QML code if you want to deploy your application with the QtQuick 2D Renderer.

If you need that kind of performance, you might already want to go a step further: subclass QQuickItem and use your shader program directly through the public scene graph API. It will involve writing more C++ boilerplate code, but in return you get direct access to parts of the OpenGL API. However, even with that goal in mind, the ShaderEffect will initially allow you to write a shader prototype in no time, and you can reuse the shader if you need a more sophisticated wrapper later on.

Try it out

Those animated GIFs aren't anywhere near 60 FPS, so feel free to copy this code into a qml file (or clone this repository) and load it in qmlscene if you would like to experience it properly. Let us know what you think.

Woboq is a software company that specializes in development and consulting around Qt and C++. Hire us!


Article posted by Jocelyn Turcotte on 11 May 2015

