Special Shadertoy features

Mouse

(example here, utils here, many values there, ref shader here)

  • iMouse.xy: last mouse click or current mouse drag position.
    Attention: these are integer values, while pixel centers sit between integers (at half-integer coordinates).
  • iMouse.zw: starting drag (click) position.
    • z > 0 while the mouse button is held down.
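As a minimal sketch of these conventions (a disc follows the drag while the button is held; the channel bindings are the usual ones, nothing extra is assumed):

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    bool dragging = iMouse.z > 0.;                 // button currently held
    float d = distance(fragCoord, iMouse.xy);      // last click / current drag position
    fragColor = vec4(vec3(dragging ? smoothstep(12., 10., d) : .05), 1.);
}
```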

Keyboard

(example here, utils here and there, updated ref shader here)

  • special texture “keyboard”: size 256 × 3; values in the .x field.
    • (ascii+.5)/256, 0.5/3. : 1 if key #ascii is currently pressed (keydown) otherwise 0
    • (ascii+.5)/256, 1.5/3. : 1 at the moment key is pressed (keypressed) otherwise 0
    • (ascii+.5)/256, 2.5/3. : 1/0 toggle for key #ascii

Note that only 2 states were implemented at first; keypressed was added at an undetermined time. But compatibility is safe as long as you test for non-zeroness (or > .5) rather than for the exact value 1.
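A small sketch of the three lookups above, assuming the keyboard texture is bound to iChannel0 (the channel choice is an assumption of this example):

```glsl
// row 0. = keydown, 1. = keypressed, 2. = toggle
float keyState(int ascii, float row)
{
    return texture(iChannel0, vec2((float(ascii) + .5)/256., (row + .5)/3.)).x;
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    bool spaceHeld = keyState(32, 0.) > .5;   // test non-zeroness, not == 1.
    fragColor = vec4(vec3(spaceHeld ? 1. : 0.), 1.);
}
```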

Time

(many values there, ref shader: iTime, iTimeDelta)

  • float iTime (or iGlobalTime): seconds (and fractions) since the shader (re)started.
  • vec4 iDate: year, month-1, day, seconds (and fractions) since midnight.
  • int iFrame: frames since the shader (re)started.
  • float iTimeDelta: duration since the previous frame.
  • float iFrameRate: average FPS.
  • float iChannelTime[4] : current time in video or sound.
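A quick sketch combining a few of these uniforms (a disc orbiting with iTime, pulsing once per wall-clock second via iDate.w):

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float angle = iTime;                       // seconds since (re)start
    float pulse = fract(iDate.w);              // restarts every wall-clock second
    vec2  p = .5*iResolution.xy + .25*iResolution.y*vec2(cos(angle), sin(angle));
    float disc = smoothstep(12., 10., distance(fragCoord, p));
    fragColor = vec4(vec3(disc * pulse), 1.);
}
```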

Resolution

  • vec3 iResolution: for window and buffers.
  • vec3 iChannelResolution[4] : for texture input  of any kind.
  •  .xy is width, height.
    • Don’t expect the ratio to be the same on all computers (or even to have width > height, e.g. on smartphones and tablets). Indeed it can even change between icon, working view and fullscreen.
    • Don’t expect videos (and textures) to have the same ratio as your window: texture(fragCoord/iResolution.xy) stretches the full image to the window, meaning possible distortions.
    • A reminder that distortion-free scaling means dividing by a scalar, not a vector: e.g. fragCoord/iResolution.y, not .xy. We often like centered coordinates, with y in range [-1,1]:
      vec2 uv = (2.*fragCoord-iResolution.xy ) / iResolution.y ;
  • .z is the ratio of the pixel shapes. For now it seems to always be 1.

Buffers relying on persistence won’t stay valid if the resolution changes while running: consider testing for a change, or a magic key to reinitialize. Similarly, if you want to allow fullscreen, leave about 3 seconds at start so the user can switch ( if (iFrame<200)… ).
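A sketch of such a resolution-change test, for a persistent buffer reading itself on iChannel0 (the channel binding and the idea of stashing the old resolution in texel (0,0) are assumptions of this example):

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 oldRes = texelFetch(iChannel0, ivec2(0), 0).xy;
    bool reset  = (iFrame == 0) || (oldRes != iResolution.xy);
    if (fragCoord == vec2(.5)) {               // texel (0,0): remember the resolution
        fragColor = vec4(iResolution.xy, 0., 1.); return;
    }
    fragColor = reset ? vec4(0.)               // reinitialize the state
                      : texture(iChannel0, fragCoord/iResolution.xy);
}
```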

Buffers A..D

Instead of being displayed (as for the “image” buffer), the result is stored in the special texture of the same name.

  • At each frame they are evaluated in the order of the tabs as you created them, then image.
  • For persistent or incremental effects, your buffer can read its own texture, which then corresponds to its previous state. It seems to be initialized to 0.
  • To use persistence, you might compute something only at frame 0 (if (iFrame==0)…).
    Still, if it relies on image textures, those need some time to be loaded asynchronously, so consider something like if (iFrame<30) { init } else { fragColor = texture(sameBuffer); }
  • Note that these textures are floats, not bytes, so they are not bounded to [0,1]. In theory they are 16 bits (half floats), but they seem to be full ordinary 32-bit floats on various systems (if not all?).
  • If you want to use the texture as an array, a reminder that fragCoords are mid-integers -> consider fragCoord-.5 to get the integer index, or (vec2(i,j)+.5)/iResolution.xy to get the texture coordinate.
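The array-indexing and frame-0 initialization points above, sketched together in a self-reading buffer (iChannel0 = this buffer is an assumption of the example):

```glsl
vec4 readCell(int i, int j)                    // integer index -> texture coordinate
{
    return texture(iChannel0, (vec2(i, j) + .5) / iResolution.xy);
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    ivec2 ij = ivec2(fragCoord - .5);          // mid-integer coord -> integer index
    fragColor = (iFrame == 0) ? vec4(ij, 0, 1)        // init at frame 0
                              : readCell(ij.x, ij.y); // persist the previous state
}
```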

Common

This is a special convenience tab, whose content is included in all the other tabs.
Useful to set up all your global consts, macros, and utility functions only once.

Attention: to avoid inter-buffer incoherence, most uniforms (texture channels, iResolution, etc.) are not directly available here: pass them as function parameters instead. They can be used inside macros, of course.
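A minimal sketch of both workarounds in a Common tab (the helper names are hypothetical):

```glsl
// uniforms like iResolution aren't visible in Common: pass them as parameters...
vec2 centered(vec2 fragCoord, vec3 res) { return (2.*fragCoord - res.xy) / res.y; }

// ...or hide them behind a macro, expanded only in the buffer tabs where they do exist:
#define UV(fc) ( (2.*(fc) - iResolution.xy) / iResolution.y )
```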

Provided textures

Note that most of them (including volumes, buffers and videos) can be toggled between nearest / linear / MIPmap interpolation, and between clamp / repeat wrap flags.

Attention: black & white textures are now encoded as red-only textures. Use texture().rrrr (rather than a plain texture()) to get the corresponding RGBA. Note that you can use textureSize() to detect 1-channel textures.
Attention: textures often need a few frames to load, which might spoil a custom initialization done at iFrame==0. -> iChannelResolution[] sticks to 0 while the texture is not yet ready (see test). Same for higher LODs in textureSize().
-> Safe texture-load-dependent initialization.
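The safe-initialization pattern, sketched for a persistent buffer (itself on iChannel0) that initializes from an image texture on iChannel1; the channel bindings and the 30-frame grace period are assumptions of this example:

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;
    bool texReady = iChannelResolution[1].x > 0.;   // stays 0 while still loading
    if (iFrame < 30 || !texReady)
        fragColor = texture(iChannel1, uv);         // keep (re)initializing
    else
        fragColor = texture(iChannel0, uv);         // normal persistent update
}
```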

special textures :

  • Keyboard (see above), webcam (video), mic(sound), soundcloud (sound)
  • Buffers A,B,C,D (see above)
  • Noise random texture (see below), Bayer texture (see below)
  • Font texture (see below)

regular textures :

  • Image textures: various color or grey images of various resolutions are provided.
    • some wrap continuously and some do not.
    • .a is always 1.
    • Color space is sRGB: no transform is required for display, but if you want to do computations on them (physical rendering, image processing) you should convert them to linear (“flat”) first ( pow(img, 2.2) ) and back to sRGB after ( pow(img, 1./2.2) ). Note that this can be approximated by img*img and sqrt(img).
  • Videos: time-evolving textures (the sound is played immediately; the shader can’t access it).
    • They are rectangular: check iChannelResolution.
    • No wrapping: use fract if required.
    • No MIP-mapping.
  • Nyan cat: 256 × 32. Stores an 8-frame cartoon animation. .a is defined.
  • Cubemaps: encode the environment as 6 maps, to be used with textureCube().
  • Audio textures: (see below)
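The sRGB round-trip described above, as a small sketch (gamma-2.2 approximation; the source image is assumed on iChannel0):

```glsl
vec3 toLinear(vec3 srgb) { return pow(srgb, vec3(2.2)); }
vec3 toSRGB  (vec3 lin)  { return pow(lin,  vec3(1./2.2)); }

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec3 col = toLinear(texture(iChannel0, fragCoord/iResolution.xy).rgb);
    col *= .25;                                // e.g. physically darken by 2 stops
    fragColor = vec4(toSRGB(col), 1.);         // convert back for display
}
```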

Noise textures

6 textures of uniform random values are provided: two 2D low-res, two 2D high-res, two 3D textures; 3 grey and 3 RGBA.

  • 2D color texture: the G and A channels are R and B translated by (37.,17.), if “vflip” is off. This allows faking an interpolated 3D noise texture. Ref example here.
  • 3D textures. Ref example here.
  • A reminder that the uniform random values sit at pixel centers, i.e. at between-integer coordinates. Other uv values can be useful for nice interpolation, but the interpolated values are no longer uniform (and have a smaller std-dev).
  • These textures are in linear (“flat”) colorspace: ready to use for computation, but gamma/sRGB conversion is required for faithful display.
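A sketch of the fake-3D-noise trick enabled by the (37.,17.) channel offset: two consecutive Z slices can be fetched in a single lookup and blended. Assumes the 256×256 RGBA noise texture on iChannel0 with “vflip” off:

```glsl
float noise3(vec3 x)
{
    vec3 p = floor(x), f = fract(x);
    f = f*f*(3. - 2.*f);                               // smooth interpolation
    vec2 uv = (p.xy + vec2(37., 17.)*p.z) + f.xy;      // the (37.,17.) slice offset
    vec2 rg = textureLod(iChannel0, (uv + .5)/256., 0.).yx;
    return mix(rg.x, rg.y, f.z);                       // blend the two Z slices
}
```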

Procedural noise (ex, with base noise=texture):
– ref shader here.
– see header comments for all the variants: classical vs simplex noise, value vs gradient noise, etc.

Bayer texture

tex15 is an ordered Bayer matrix :

  • It allows easy dithering and half-toning (just threshold it with the grey level), example here.
  • It also provides a permutation table in [0,63], example here.
  • This texture is in linear (“flat”) colorspace: ready to use for computation, but gamma/sRGB conversion is required for faithful display.
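The thresholding idea above, sketched as 1-bit dithering; the source image is assumed on iChannel0 and the Bayer texture on iChannel1 (8×8 tile, per the [0,63] permutation range, with “repeat” wrap and “nearest” filter):

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float grey  = texture(iChannel0, fragCoord/iResolution.xy).r; // source grey level
    float bayer = texture(iChannel1, fragCoord/8.).r;             // 8x8 tile, repeated
    fragColor = vec4(vec3(grey > bayer ? 1. : 0.), 1.);           // threshold
}
```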

Font texture

This special texture encodes 256 characters in tiles.

  • It contains 16×16 tiles of 64×64 pixels.
    • the tile of character int c = #ascii is found at vec2( c % 16, 15 - c / 16 )/16.
    • a reminder that #ascii = 64 + #letter for uppercase, and 64 + 32 + #letter for lowercase.
  • .x provides an anti-aliased mask of the characters.
  • .w gives the signed distance to the character border.
  • .yz gives the distance gradient.

More details here. Example & utils here and here and here and there.
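The tile formula above, sketched as a lookup of the anti-aliased mask; the font texture is assumed on iChannel0:

```glsl
float charMask(vec2 uv, int c)                 // uv in [0,1]^2 inside the tile
{
    vec2 tile = vec2(c % 16, 15 - c / 16);
    return texture(iChannel0, (tile + uv)/16.).x;    // .x: anti-aliased mask
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.y;
    fragColor = vec4(vec3(charMask(fract(uv*4.), 65)), 1.);  // tiled letter 'A'
}
```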

Sound in (audio textures)

A provided music or a user-chosen music from SoundCloud can be used as a texture.

  • Texture size is bufferSize × 2 (e.g., 512 × 2).
  • (index, .75): music samples.
    • It is a buffer refreshed over time, but its precise behavior seems very system-dependent, and the synchronization with image frames is not guaranteed. So drawing the full soundwave, or using a naive sound encoding of an image, won’t be faithful (or will possibly work only for you).
  • (index, .25): FFT of the buffer.
    • x / bufferSize = f / (iSampleRate / 4.), example here.
      iSampleRate gives the sampling rate, but it seems incorrectly initialized on many systems: if precision is important, try 44100 or 48000 manually.

Ref shaders: music, soundCloud, microphone.
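A sketch of reading both rows for a simple spectrum-plus-waveform display (the music is assumed on iChannel0):

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;
    float fft  = texture(iChannel0, vec2(uv.x, .25)).x;  // spectrum row
    float wave = texture(iChannel0, vec2(uv.x, .75)).x;  // sample row
    vec3 col = vec3(0., 1., 0.) * step(uv.y, fft)                    // green bars
             + vec3(1., 0., 0.) * smoothstep(.02, .0, abs(uv.y - wave)); // red wave
    fragColor = vec4(col, 1.);
}
```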

Sound out (sound buffer)

Instead of being displayed (as for the “image” buffer), the result is stored in an audio buffer.

  • The x,y channels correspond to the left and right stereo channels. fragCoord corresponds to the time sample. (iSampleRate gives the sampling rate, but it seems incorrectly initialized on many systems.)
  • This buffer is evaluated once before the image shader runs, then the music is played statically. So there is currently no way to interact between the image and sound shaders.

More here.
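A minimal sound-shader sketch: the Sound tab uses a mainSound() entry point returning left/right amplitudes in [-1,1] for a given time (the exact signature has varied across Shadertoy versions; the one below is an assumption):

```glsl
vec2 mainSound( int samp, float time )
{
    float freq = 440.;                                     // A4 tone
    return vec2(sin(6.2831853*freq*time) * exp(-.3*time)); // decaying sine, both channels
}
```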

VR

If your system can display stereo, you can compute stereo shaders.
Just implement the additional adapted mainImage variant:
mainVR( fragColor, fragCoord, fragRayOrigin, fragRayDir )

More here.
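A minimal mainVR sketch (parameter names as in the variant above; the per-eye ray replaces the ray you would normally derive from fragCoord):

```glsl
void mainVR( out vec4 fragColor, in vec2 fragCoord,
             in vec3 fragRayOrigin, in vec3 fragRayDir )
{
    // e.g. shade a ground plane at y = -1 with distance-based rings:
    float t = (-1. - fragRayOrigin.y) / fragRayDir.y;
    vec3 col = (t > 0.) ? vec3(.5 + .5*cos(2.*t)) : vec3(0.);
    fragColor = vec4(col, 1.);
}
```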


13 thoughts on “Special Shadertoy features”

  1. Question about sound in (audio textures). “index, .75 : music samples
    It is a buffer refreshed along time”

    I infer that the return value of texture(iChannelx, vec2(x, .75)) gives *amplitude* of the sound wave at a certain point (maybe at the current time).

    So what does the index value (x) do? Is it a time value (so the texture lookup is not restricted to the present moment)? Or does it make no difference at all?

    • A reminder that sound “pixels” come every 1/44100th of a second, while shaders are called at a 1/60th framerate at best 😉
      So what you have is a buffer containing a window of sound values over time. These windows slightly overlap from frame to frame, following mysterious system-dependent rules.

      • Seems like we commented at the same time … I think I understand now though. So basically yes, the “horizontal” slice of the sound texture at (x, .75) represents the amplitude of the sound wave samples varying across a certain small time interval, approximately the time since the last frame.

        In your example where the sample rate is 44,100 samples/sec, and the frame rate is 60 frames/sec, the texture during a given frame will contain about 735 different amplitude values as x takes on 0, 1/735, 2/735, etc.

    • Is it possible that the index x in vec2(x, .75) is a time index ranging from 0 to 1 over some small time interval, such as the timeDelta since the last frame?

  2. Nope. The frame rate is always varying, while the buffer is a fixed size, so your data are somewhere in that fixed buffer. And if you are lucky, they are not truncated.
    Several people have tried to be smart and use SoundCloud to transmit images: it does not work (at least it won’t on other OS+browser+who-knows configurations). We needed to use fax-like frequency encoding.

  3. Fabrice, in regard to your “Other indexing can be useful for nice interpolation … nor in the range [0,1]”, can you clarify/confirm? This is not as I would expect.

  4. > This buffer is evaluated once before the image shader run, then the music is played statically. So there is currently no way to interact between image and sound shaders.

    It disappoints me that I can’t figure out a way to sample the “sound shader”-generated sound from a regular image shader, other than looping the sound back through the microphone. Since it’s generated AOT, it seems technically possible to support that.
