Special Shadertoy features


Mouse

(example here, utils here, many values there, ref shader here)

  • iMouse.xy: last mouse click or current mouse drag position.
    Attention: these are integers, while pixel centers lie at half-integer coordinates.
  • iMouse.zw: position of the click that started the current drag.
    • >0: the mouse button is currently down (drag in progress).
    • <0: the button has been released; the absolute value keeps the last click position.
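As a minimal sketch of the conventions above (iMouse is a built-in uniform; nothing needs to be bound to a channel), a disc follows the mouse and brightens while the button is down:

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    float pressed = iMouse.z > 0. ? 1. : .3;             // .z > 0. while dragging
    float d = length(fragCoord - iMouse.xy) / iResolution.y;
    fragColor = vec4(vec3(pressed * smoothstep(.05, .04, d)), 1.);
}
```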


Keyboard

(example here, utils there, ref shader here)

  • special texture “keyboard”: size 256 x 2; values in the .x field.
    • at ( (ascii+.5)/256., .25 ): 1 if key #ascii is currently pressed, otherwise 0.
    • at ( (ascii+.5)/256., .75 ): 1/0 toggle, flipped at each press of key #ascii.
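A small sketch of these lookups, assuming the keyboard texture is bound to iChannel0:

```glsl
const int KEY_SPACE = 32;

bool keyDown(int ascii) {    // 1 while the key is held
    return texture(iChannel0, vec2((float(ascii)+.5)/256., .25)).x > 0.;
}
bool keyToggled(int ascii) { // flips at each key press
    return texture(iChannel0, vec2((float(ascii)+.5)/256., .75)).x > 0.;
}

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    float g = keyToggled(KEY_SPACE) ? 1. : .2;  // space bar toggles brightness
    fragColor = vec4(vec3(g), 1.);
}
```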


Time and date

(many values there, ref shaders: iTime, iTimeDelta)

  • float iTime (formerly iGlobalTime): seconds (+fraction) since the shader (re)started.
  • vec4 iDate: year, month-1, day, seconds (+fraction) since midnight.
  • int iFrame: frames since the shader (re)started.
  • float iTimeDelta: duration of the previous frame, in seconds.
  • float iFrameRate: average FPS.
  • float iChannelTime[4]: current playback time of the video or sound in each channel.
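Animating from iTime rather than iFrame keeps speed independent of the actual frame rate; a minimal sketch:

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (2.*fragCoord - iResolution.xy) / iResolution.y; // centered, y in [-1,1]
    float a = 6.2831 * iTime / 4.;                  // one turn every 4 seconds
    float d = length(uv - .5*vec2(cos(a), sin(a))); // disc orbiting the center
    fragColor = vec4(vec3(smoothstep(.21, .2, d)), 1.);
}
```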


Resolution

  • vec3 iResolution: for window and buffers.
  • vec3 iChannelResolution[4]: for texture inputs of any kind.
  • .xy is width, height.
    • Don’t expect the ratio to be the same on all computers (or even width > height, e.g., on smartphones and tablets). It can even change between the icon, working view and fullscreen.
    • Don’t expect videos (and textures) to have the same ratio as your window: texture(fragCoord/iResolution.xy) wraps the full image onto the window, with possible distortions.
    • A reminder that distortion-free scaling means dividing by a scalar, not a vector: fragCoord/iResolution.y, not .xy. We often like centered coordinates, with y in range [-1,1]:
      vec2 uv = (2.*fragCoord - iResolution.xy) / iResolution.y;
  • .z is the ratio of the pixel shapes. For now it seems to always be 1.

Buffers relying on persistence won’t stay valid if the resolution changes while running: consider testing for the change, or a magic key to reinitialize. Similarly, if you want to allow fullscreen, leave ~3 seconds at start so the user can switch (e.g., if (iFrame<200) …).
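One way to test for the change is to store the resolution in a pixel of the persistent buffer and reset when it differs; a sketch, assuming a buffer that reads itself on iChannel0:

```glsl
// Pixel (0,0) remembers the resolution of the previous frame (assumption:
// this buffer is bound to its own iChannel0).
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 prevRes = texelFetch(iChannel0, ivec2(0), 0).xy;
    bool reset = iFrame < 30                       // textures may still be loading
              || prevRes != iResolution.xy;        // window was resized
    if (ivec2(fragCoord) == ivec2(0)) {
        fragColor = vec4(iResolution.xy, 0., 0.);  // store current resolution
    } else if (reset) {
        fragColor = vec4(0.);                      // reinitialize
    } else {
        fragColor = texelFetch(iChannel0, ivec2(fragCoord), 0); // persist
    }
}
```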

Buffers A..D

Instead of being displayed (as for the “image” buffer), the result is stored in the special texture of the same name.

  • At each frame they are evaluated in the order of the tabs as you created them, then “image”.
  • For persistent or incremental effects, your buffer can read the texture of the same name, which then contains its previous frame’s result. It seems to be initialized to 0.
  • To use persistence, you might compute something only at frame 0 (if (iFrame==0) …).
    Still, if it relies on image textures, these need some time to load asynchronously, so consider something like if (iFrame<30) { init } else { fragColor = texture(sameBuffer); }.
  • Note that these textures store floats, not bytes, so values are not bounded to [0,1]. In theory 16-bit (half) floats, but they seem to be ordinary 32-bit floats on various systems (if not all?).
  • If you want to use the texture as an array, a reminder that fragCoords are mid-integers: consider fragCoord-.5 to get the integer index, or (vec2(i,j)+.5)/iResolution.xy to get the texture coordinate.
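Both indexing conventions from the last point, sketched for a self-reading buffer (assumption: this buffer on its own iChannel0), where each cell accumulates over frames:

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    ivec2 ij = ivec2(fragCoord - .5);              // integer cell index
    vec2 uv = (vec2(ij) + .5) / iResolution.xy;    // back to texture coordinates
    vec4 prev = texture(iChannel0, uv);            // previous frame's value
    // texelFetch takes the integer index directly, no round-trip needed:
    // vec4 prev = texelFetch(iChannel0, ij, 0);
    fragColor = (iFrame == 0) ? vec4(0.) : prev + 1./256.; // incremental effect
}
```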

Provided textures

Special textures:

  • Keyboard (see above), webcam (video), mic (sound), SoundCloud (sound)
  • Buffers A,B,C,D (see above)
  • Noise random texture (see below), Bayer texture (see below)
  • Font texture (see below)

Regular textures:

  • Image textures: various color or grey images of various resolutions are provided.
    • Some wrap continuously and some don’t.
    • .a is always 1.
    • Color space is sRGB: no transform is required for display, but if you want to run computations on them (physical rendering, image processing) you should first convert them to flat/linear space (pow(img, 2.2)) and back to sRGB after (pow(img, 1./2.2)). Note that this can be approximated by img*img and sqrt(img).
  • Videos : time-evolving textures (sound is immediately played; the shader can’t access it).
    • They are rectangular: check iChannelResolution.
    • No wrapping: use fract if required.
    • No MIP-mapping.
  • Nyan cat: 256 x 32. Stores an 8-frame cartoon animation. .a is defined.
  • Cubemaps: encode the environment as 6 maps, to be used with texture() on a samplerCube (textureCube() in old GLSL).
  • Audio textures: (see below)
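The sRGB round-trip mentioned above, as a small sketch (assumes an image texture on iChannel0):

```glsl
vec3 toLinear(vec3 srgb) { return pow(srgb, vec3(2.2)); }    // sRGB -> flat
vec3 toSRGB (vec3 lin)   { return pow(lin,  vec3(1./2.2)); } // flat -> sRGB

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy;
    vec3 img = toLinear(texture(iChannel0, uv).rgb); // compute in linear space
    img *= .5;                                       // physically meaningful scaling
    fragColor = vec4(toSRGB(img), 1.);               // back to sRGB for display
}
```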

Noise textures

6 textures of uniform random values are provided: two 2D low-res, two 2D high-res, two 3D textures; 3 grey and 3 RGBA.

  • 2D color texture: the G and A channels are the R and B channels translated by (37.,17.), if vflip is off. This allows faking an interpolated 3D noise texture. Ref example here.
  • 3D textures. Ref example here.
  • A reminder that the uniform random values sit at pixel centers, i.e., at half-integer coordinates. Other indexing can be useful for nice interpolation, but the result is then no longer uniform nor in the range [0,1].
  • These textures are in flat (linear) colorspace: ready to use for computation, but gamma/sRGB conversion is required for faithful display.
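The (37.,17.) channel translation enables the classic fake-3D-noise trick: interpolate between two Z slices read from one 2D fetch. A sketch, assuming the 256x256 RGBA noise texture on iChannel0 with vflip off:

```glsl
float noise3D(vec3 x) {
    vec3 p = floor(x), f = fract(x);
    f = f*f*(3. - 2.*f);                              // smoothstep interpolation
    vec2 uv = (p.xy + vec2(37., 17.)*p.z) + f.xy;     // slice z and z+1 share a fetch
    vec2 rg = textureLod(iChannel0, (uv + .5)/256., 0.).yx;
    return mix(rg.x, rg.y, f.z);                      // blend the two Z slices
}
```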

Procedural noise (e.g., with base noise = texture):
– ref shader here.
– see the header comments for all the variants: classical vs simplex noise, value vs gradient noise, etc.

Bayer texture

tex15 is an ordered Bayer matrix:

  • It allows easy dithering and half-toning (just threshold it with the grey level), example here.
  • It also provides a permutation table in [0,63], example here.
  • This texture is in flat (linear) colorspace: ready to use for computation, but gamma/sRGB conversion is required for faithful display.
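Thresholding against the matrix is all the dithering takes; a sketch, assuming the 8x8 Bayer texture on iChannel0:

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    float grey = fragCoord.x / iResolution.x;             // grey ramp to render
    float bayer = texelFetch(iChannel0,
                             ivec2(fragCoord) % 8, 0).x;  // ordered threshold
    fragColor = vec4(vec3(grey > bayer ? 1. : 0.), 1.);   // binary half-tone
}
```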

Font texture

This special texture encodes 256 characters in tiles.

  • It contains 16×16 tiles of 64×64 pixels.
    • int c = #ascii is found at vec2(c % 16, 15 - c/16)/16.
    • a reminder that #ascii = 64 + #letter for uppercase and 64 + 32 + #letter for lowercase.
  • .x provides an anti-aliased mask of the character.
  • .w gives the signed distance to the character border.
  • .yz gives the gradient of that distance.
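A sketch of displaying one character via the tile formula above (assumes the font texture on iChannel0):

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.y;          // square, distortion-free units
    int c = 65;                                   // #ascii of 'A'
    vec2 tile = vec2(c % 16, 15 - c/16) / 16.;    // lower-left corner of its tile
    float mask = 0.;
    if (uv.x < 1. && uv.y < 1.)                   // draw within a unit square
        mask = texture(iChannel0, tile + uv/16.).x;  // .x: anti-aliased mask
    fragColor = vec4(vec3(mask), 1.);
}
```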

More details here. Example & utils here and there.

Sound in (audio textures)

A provided music or a user-chosen music from SoundCloud can be used as a texture.

  • Texture size is bufferSize x 2 (e.g., 512 x 2).
  • ( index, .75 ): music samples.
    • It is a buffer refreshed over time, but the precise behavior seems very system-dependent, and synchronization with image frames is not guaranteed. So drawing the full soundwave, or using a naive sound encoding of the image, won’t be faithful (or possibly only on your own machine).
  • ( index, .25 ): FFT of the buffer.
    • x / bufferSize = f / (iSampleRate/4.), example here.
      iSampleRate gives the sampling rate, but it seems incorrectly initialized on many systems: if precision matters, try 44100 or 48000 manually.
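Reading both rows, sketched with a music texture assumed on iChannel0 (spectrum bars from the FFT row, a rough waveform from the sample row):

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy;
    float fft  = texture(iChannel0, vec2(uv.x, .25)).x; // FFT magnitude row
    float wave = texture(iChannel0, vec2(uv.x, .75)).x; // raw sample row
    float bars = step(uv.y, fft);                       // bar height = magnitude
    float line = smoothstep(.02, .0, abs(uv.y - wave)); // waveform trace
    fragColor = vec4(bars, line, 0., 1.);
}
```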

Ref shaders: music, soundCloud, microphone.

Sound out (sound buffer)

Instead of being displayed (as for the “image” buffer), the result is stored in an audio buffer.

  • The x,y channels correspond to the left and right stereo channels, and the coordinate corresponds to the time sample. (iSampleRate gives the sampling rate, but it seems incorrectly initialized on many systems.)
  • This buffer is evaluated once before the image shader runs, then the music is played as-is. So there is currently no way for the image and sound shaders to interact.
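A minimal sound-buffer sketch: mainSound replaces mainImage there and returns one stereo sample per call (this is the current signature; older shaders used mainSound(float time)):

```glsl
vec2 mainSound(int samp, float time) {
    float env = exp(-4.*fract(time));             // decay envelope, retriggered each second
    return vec2(sin(6.2831*440.*time) * env);     // 440 Hz tone, same on both channels
}
```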

More here.


VR shaders

If your system can display stereo, you can compute stereo shaders.
Just implement the additional adapted mainImage variant:
mainVR( fragColor, fragCoord, fragRayOri, fragRayDir )

More here.
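Since the per-eye ray is provided, a raymarcher simply starts from fragRayOri along fragRayDir instead of building its own camera. A trivial sketch:

```glsl
void mainVR(out vec4 fragColor, in vec2 fragCoord,
            in vec3 fragRayOri, in vec3 fragRayDir) {
    float sky = .5 + .5*fragRayDir.y;   // horizon gradient from the ray direction
    fragColor = vec4(vec3(sky), 1.);
}
```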

