fhtr

art with code

2016-05-30

Brush stroke blending

Brush stroke blending is something of a black art. Which is a shame, since Photoshop's been doing it for more than 15 years now.

Lemme try and explain.

Suppose you've got a pressure-sensitive drawing tablet where the pressure controls the opacity of the brush. If you do a low-pressure stroke, you'd like to have a flat surface of color with roughly the same alpha everywhere (say, alpha 0.5 for half pressure). If you use the usual source-over blend, each of the brush stroke segments would add to the alpha since it's src.a + (1-src.a) * dst.a. As the brush stroke intersects with itself, the alpha builds up all the way to 1. Which is not what you want if you're trying to paint an alpha 0.5 surface.
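
To put numbers on that: stamp alpha 0.5 onto the same pixel a few times with source-over and the pixel's alpha walks towards 1 like this (a worked example of the formula above):

0.5 + (1.0 - 0.5) * 0.0  = 0.5    // first stamp on an empty pixel
0.5 + (1.0 - 0.5) * 0.5  = 0.75   // second stamp over the first
0.5 + (1.0 - 0.5) * 0.75 = 0.875  // third stamp, creeping towards 1.0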

For a brush stroke with fixed alpha this would be no problem: you could just vary the blend opacity of the stroke layer. But if you want to have varying alpha inside the stroke layer, you need a different blend. What I was using for hard-edged brushes (and what my Drawmore Touch prototype is using) is a MAX blend for the alpha: max(src.a, dst.a). If an alpha 0.5 brush is stamped over a previous alpha 0.5 brush stroke pixel, the result is going to be alpha 0.5. This prevents the stroke layer from accumulating opacity above the brush opacity and makes it possible to paint smooth surfaces using pressure-controlled opacity.
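
In shader terms, the alpha max blend looks roughly like the sketch below. This is a minimal sketch rather than the actual Drawmore code: the alpha line is the max(src.a, dst.a) described above, while the color mixing (a plain lerp by the stamp alpha) is an assumption.

void maxAlphaBlend(vec4 src, vec4 dst, out vec4 color)
{
    // Color mixing is an assumption here: lerp towards the brush color by the stamp alpha.
    color.rgb = mix(dst.rgb, src.rgb, src.a);
    // Alpha never climbs above the brush opacity, so self-intersecting strokes stay flat.
    color.a = max(src.a, dst.a);
}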

But it's broken for soft brushes. With soft brushes you'd like to paint a smooth surface with a soft edge. If you do alpha max, the brush intersections have cross-shaped blending artifacts and it's very difficult to do a smooth surface (you can try with the Drawmore thing: drag left on the brush 100% control to make it 0% hardness, then try to paint a smooth surface. No can do.) GIMP & Krita probably suffer from this too; Photoshop does something more magical.

What I've got in ShaderPaint is my latest attempt at solving this. It does brush stamping, with alpha max mixed with source-over, clamped to the current stamp alpha.

void strokeBlend(vec4 src, float srcA, vec4 dst, out vec4 color)
{
    // Source-over blend for non-premultiplied alpha.
    color.a = src.a + (1.0-src.a)*dst.a;
    color.rgb = src.rgb*src.a + dst.rgb*dst.a*(1.0-src.a);
    color.rgb /= max(color.a, 1e-5); // Guard against divide-by-zero when both src and dst are fully transparent.

    // Saturate color alpha to brush stamp max alpha.
    // For hard brushes this should be roughly the same as max(dst.a, src.a).
    // For soft brushes, the stroke accumulates alpha up to the brush stamp alpha,
    // which results in a flat stroke area with a smooth edge.
    if (color.a > srcA) {
        color.a = max(dst.a, srcA);
    }
}

The brush stamping shader is pretty simple. You pass it the last stamp position and the current mouse position (& use them to calculate the last stamp position for the next line segment). The shader then steps along the vector from the last stamp towards the mouse position, advancing by the brush spacing. For each of the brush stamp points, it accumulates brush color with the in-stroke blending function.

strokeDirection = normalize(strokeDirection);
float stampSeparation = brushRadius * 0.5;
float stampCount = floor(strokeLength / stampSeparation + 0.5);

for (float i = 1.0; i < 200.0; i++) { // Max 200 stamps per segment.
    if (i > stampCount) { // Break once we're done with the stamps.
        break;
    }

    // Distance from circular brush.
    float d = length(fragCoord - (lastPoint + strokeDirection*i*stampSeparation)) / brushRadius;

    if (d < 1.0) { // The pixel is inside the brush stamp.
        vec4 src = currentColor;
        // Create a soft border for the brush.
        src.a *= smoothstep(1.0, hardness * max(0.1, 1.0 - (2.0 / (brushRadius))), d);
        strokeBlend(src, opacity, fragColor, fragColor); // Blend the brush stamp into the stroke layer.
    }
}

Dunno if there's a nicer solution, given the messiness of the blending function.

2016-01-21

Easy 3D on the web

Problem: can't do 3D graphics on the web. Solution: WebGL. New problem: no, I mean, I want to put this 3D model onto a web page. Solution: Sketchfab / Three.js / Babylon.js / ShaderToy. New new problem: I need to download libraries and code stuff or host the files on a SaaS or develop a third brain to model using modulo groups of signed distance fields moving along Lissajous curves.

Could easy 3D be part of the web platform? And how? With images, the solution was simple. The usual image file is a 2D rectangle of static pixels, and all you really needed to figure out was how to lay it out, size it, and composite it with the rest of the web page. When you step outside of simple static 2D images, all hell breaks loose.

Animated GIFs have playback timing and that's not so easy to figure out, and so animated GIFs are pretty broken. Videos need playback controls with volume and scrubbing, potential subtitle tracks, and a bunch of codecs both on the video and audio sides, so they were pretty broken as well. (Then YouTube started using HTML5 videos because mobiles don't do Flash. And magically the video issues were fixed~)

SVGs also need to handle input events and have morphed from "umm, put this in an embed and it'll maybe work" to their current (pretty awesome!) state where an SVG can be used as a static image, an animated image, an embedded document and an inline document. Something for everyone!

Displaying a 3D model on a web page is a bit like mixing SVG with video elements. You'd like to have controls to turn the model around - it is 3D after all. And you'd like to have the model animate - again, it's 3D and 3D's good for animations. It'd be also nice to have the model be interactive. I mean, we were promised a 3D data matrix by a bunch of fiction writers in the 80s, and that's going to require some serious animated 3D model clicking action (to be fair, they also promised us a thermonuclear war, but let's not go there right now.)

So. 3D model. Web page. <3d src="castle_in_the_sky.3d" onclick="event.target.querySelector('#gate').classList.add('open')"> Right?

What file format do you standardize on? How do you load in textures and geometry? How do you control the camera? How do you do shaders for the materials? How do you handle user interaction? What are the limits of this 3D model format? How do you make popular 3D packages output this file format (plus animations, plus semantic model for interactivity)? How do you compress the thing and progressively stream it?

2016-01-19

ShaderPaint

I wrote a small painting program on ShaderToy, using the new multipass rendering feature. It's called ShaderPaint and it was a lot of fun to write.

The fun part about writing programs in ShaderToy is the programming model it imposes on you. Think of it as small pixel-size chunks of memory wired together in a directed graph. Each chunk of memory runs a small program that updates its contents. The chunks are wired together so that a chunk can read in the contents of the chunks it's interested in. The read gives you the contents from the last time step, so there's no concurrent crosstalk happening.

This is... rather different. Your usual programming model is all about a single big program, executing on a CPU, reading and writing to a massive pool of memory, modifying anything, anywhere, at any time. To go from that to ShaderToy's memory-centric approach is a big shift in perspective. Instead of thinking in terms of "The program is going to do this and then it's going to do that and then it's going to increment that counter over there.", you start to think like "If the program is running on this memory location, do this. If the program is running on that memory location, do that. If the program is running on the counter's memory location, increment the value." You go from having a single megaprogram to an army of small programs, each assigned to a memory location.
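
To make that concrete, here's a minimal sketch in ShaderToy's multipass buffer style (not the actual ShaderPaint code): every pixel of the buffer runs the same mainImage, reads its own previous contents from iChannel0 (the buffer wired back to itself), and picks what to do based on where it lives.

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // This pixel's contents on the previous time step.
    vec4 previous = texture(iChannel0, fragCoord / iResolution.xy);

    if (fragCoord.x < 1.0 && fragCoord.y < 1.0) {
        // The pixel at (0,0) acts as a counter: increment it every frame.
        fragColor = previous + vec4(1.0);
    } else {
        // Every other pixel just carries its previous value forward.
        fragColor = previous;
    }
}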

In the figure above, I've sketched the data flow of an output pixel. First, the UI inputs coming from the shader uniforms modify the UI state layer, which is read by the stroke layer to draw a brush stroke on it. The stroke layer is then composited with the draw layer when the brush stroke has ended. The final step is to composite the stroke layer and draw layer onto the output canvas, and draw the UI controls on top, based on the values in the UI state layer.

2015-12-09

Flat cameras

Well, this is really cool. It's a flat camera. A camera sensor with a special mask in front of it that makes it possible to computationally deconvolve the sensor data into an image. Imagine credit-card-slim mobiles and mobile cameras with a very large image sensor for shooting in low light. Or front-facing cameras embedded into the display.

In a similar vein, this Light L16 thing looks pretty nifty: jam 16 small cameras into a smartphone and do computer vision magic to treat the ensemble as a single big sensor.

2015-12-08

Display tech

I was playing with a Kindle the other day. It's got an e-ink display that's quite different from the usual mobile displays. For one, it works as a reflective display, much like the pages of a paper book. That means that you can read it without a backlight, unlike the LCD display you may have in your phone. The second big difference is that it's black & white only. Third, the screen can stay on and display an image without using any power. And finally, the refresh rate is very slow. So what's going on here? Why does it work that way?

E-ink displays have a tiny capsule for each pixel. This capsule is filled with black ink particles and white ink particles. The different colors have different charges. There's a charged plate underneath the capsule that can change the sign of its charge so as to attract one color and repel the other.

Let's say that you have black particles with a positive charge and white particles with a negative charge. When you set the charged plate to a positive charge, the white particles fly towards the charged plate at the bottom of the capsule and the black particles are pushed to the top of the capsule. As you're looking at the display from above, you see that the pixel turns black.

When you flip the charge, the black particles are drawn down to the bottom and the white particles are pushed to the top. The pixel now turns white.

Because the pixels use ink particles, the way they interact with light is much like you'd see with a regular sheet of paper. Light hits the ink at the top of the screen and bounces away towards your eyes. The black ink particles absorb more of the light than the white ones, and you see the resulting contrast. Nice and simple. But kinda slow, as the charging plate needs to flip its charge and the physical ink particles need to swap places for the pixel to change color.

By comparison, LCDs are quite a different beast. An LCD pixel is made out of three layers of material that polarize light. Polarized light is light where all the photons are vibrating in the same direction. In unpolarized light, the photons are vibrating in every which direction. To make polarized light, you either need a light source that produces polarized light or a filter that blocks photons that are vibrating in the wrong direction.

If you put two polarizing filters on top of each other, they do a bit of a trick. If the filters are pointing in the same direction, they let light through. As you rotate them relative to each other, they let less and less light through, and once they're rotated to a 90 degree angle, they don't let any light through. What if you could change this rotation angle with electricity?

That's pretty much what an LCD pixel does. It's got a liquid crystal layer that rotates the polarization of light depending on how much electricity you put through it. To make the trick complete, it has two polarizing filters below and above the liquid crystal layer. The polarizing filters are at a 90 degree angle to each other, so when the liquid crystal layer is in its non-rotating state, the pixel doesn't let any light through and appears dark. When the liquid crystal is set to rotate incoming light by 90 degrees, light passes through the pixel, making the pixel look bright.

Usually an LCD screen has a bright light behind it. The LCD in front blocks some of the light, creating the image that you see. Because the filters in the LCD are not perfect, some of the light hitting the dark pixels seeps through, making the darks brighter than ideal. Additionally, the black pixels are lit from behind at all times at the same intensity as the brightest white pixels, which makes LCDs relatively power-hungry. Because the liquid crystals can twist rapidly, LCDs have good refresh rates and work for displaying moving graphics.

OLEDs. OLEDs are tiny tiny lights, printed on the screen surface. The brightness of each lamp is controllable by the amount of electricity fed to it. Because they're tiny individual lights, they are more power-efficient than LCDs, as dark pixels don't use any energy at all. As each pixel is a tiny lamp, and black pixels are lamps that are turned off, the black levels of OLEDs are better than those of LCDs. But because each pixel is a tiny lamp surrounded by circuitry, the total brightness achieved by an OLED display is lower than that of an LCD, which can use a large, powerful light behind the screen.

Ink, filters and lamps. That's what powers your mobile displays. Could you have something else? Maybe tiny waveguides that absorb a certain color of light - like the ones on butterfly wings - and have a controllable level of absorption? Something DLP-like where each display pixel is a mirror that either reflects incoming light back towards the viewing surface or away from it?

2015-12-04

HTTPS and HTTP2 on Apache2 with Let's Encrypt

This morning I figured I'd set up HTTPS and HTTP2 on my web server. It was pretty easy, too. And man, HTTP2 is fast, especially on silly sites like mine that have a large number of small images on the page. Good riddance to sprites.

Here's how I set up my Ubuntu Apache2 web server for HTTPS and HTTP2:

For starters, let's get an HTTPS cert. You can get one for free using Let's Encrypt, a non-profit certificate authority from the US. It has an automagical command line tool that creates certs for you and registers them with the CA. It can even automate installation for Apache. Sadly, my Apache config didn't work with the automatic tool, so I had to do it manually. Which wasn't too bad either.

First, I shut down the Apache web server with sudo service apache2 stop. Then, I used the Let's Encrypt client to fetch the cert (this needs to be run on the server pointed to by the domain name):

git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
./letsencrypt-auto certonly --standalone -d MY.DOMAIN.NAME

If everything goes well, you should now have the certificate files in /etc/letsencrypt/live/MY.DOMAIN.NAME/. To get HTTPS running, I edited my Apache2 configuration to set up the SSL module and use it for my domain.

<VirtualHost *:443>
  ServerAlias MY.DOMAIN.NAME

  SSLEngine on
  SSLCertificateFile "/etc/letsencrypt/live/MY.DOMAIN.NAME/cert.pem"
  SSLCertificateKeyFile "/etc/letsencrypt/live/MY.DOMAIN.NAME/privkey.pem"
  SSLCertificateChainFile "/etc/letsencrypt/live/MY.DOMAIN.NAME/chain.pem"

...

Ok, HTTPS working. Let's do HTTP2 now. If you haven't yet, you need to upgrade your Apache to version 2.4.17 or newer to get HTTP2 support. Older versions of Ubuntu don't have Apache 2.4.17, so you may need to add a custom PPA to your software sources with sudo add-apt-repository ppa:ondrej/apache2 or such.

After upgrading Apache, turn on the HTTP2 module with sudo a2enmod http2. Almost there! The last step is to turn on HTTP2 on our HTTPS virtual host by adding h2 to the Protocols directive. I also turned on the H2Direct directive, as the description said it'll spare the server from upgrading an HTTP/1.1 connection if the client starts talking HTTP2.

<VirtualHost *:443>
  ServerAlias MY.DOMAIN.NAME

  SSLEngine on
  SSLCertificateFile "/etc/letsencrypt/live/MY.DOMAIN.NAME/cert.pem"
  SSLCertificateKeyFile "/etc/letsencrypt/live/MY.DOMAIN.NAME/privkey.pem"
  SSLCertificateChainFile "/etc/letsencrypt/live/MY.DOMAIN.NAME/chain.pem"

  Protocols h2 http/1.1
  H2Direct on

...

That's it! Now turn on Apache again with sudo service apache2 start and you should have HTTP2 running. You can check for it in Chrome DevTools by going to the Network pane, right-clicking on the columns header and turning on the Protocol column.

Thanks for reading! Hope this helps you get your site up and running on HTTP2.

2015-10-28

Mouse event coordinates on CSS transformed elements

How to turn mouse event coordinates into element-relative coordinates when the element has CSS transforms applied to it? Conceptually it's simple. You need to get the layerX and layerY of the mouse event, then transform those with the CSS transforms. The implementation is a bit tricky.

The following snippet is what I'm using with Three.js to convert renderer.domElement click coordinates to a mouse3D vector used for picking. If you just need the pixel x/y coords on the element, skip the mouse3D part.

// First get the computed transform and transform-origin of the event target.
var style = getComputedStyle(ev.target);
var elementTransform = style.getPropertyValue('transform');
var elementTransformOrigin = style.getPropertyValue('transform-origin');

// Convert them into Three.js matrices
var xyz = elementTransformOrigin.replace(/px/g, '').split(" ");
xyz[0] = parseFloat(xyz[0]);
xyz[1] = parseFloat(xyz[1]);
xyz[2] = parseFloat(xyz[2] || 0);

var mat = new THREE.Matrix4();
mat.identity();
if (/^matrix\(/.test(elementTransform)) {
  var elems = elementTransform.replace(/^matrix\(|\)$/g, '').split(' ');
  mat.elements[0] = parseFloat(elems[0]);
  mat.elements[1] = parseFloat(elems[1]);
  mat.elements[4] = parseFloat(elems[2]);
  mat.elements[5] = parseFloat(elems[3]);
  mat.elements[12] = parseFloat(elems[4]);
  mat.elements[13] = parseFloat(elems[5]);
} else if (/^matrix3d\(/i.test(elementTransform)) {
  var elems = elementTransform.replace(/^matrix3d\(|\)$/ig, '').split(' ');
  for (var i=0; i<16; i++) {
    mat.elements[i] = parseFloat(elems[i]);
  }
}

// Apply the transform-origin to the transform.
var mat2 = new THREE.Matrix4();
mat2.makeTranslation(xyz[0], xyz[1], xyz[2]);
mat2.multiply(mat);
mat.makeTranslation(-xyz[0], -xyz[1], -xyz[2]);
mat2.multiply(mat);

// Multiply the event layer coordinates with the transformation matrix.
var vec = new THREE.Vector3(ev.layerX, ev.layerY, 0);
vec.applyMatrix4(mat2);

// Yay, now vec.x and vec.y are in element coordinate system.


// Optional: get the untransformed width and height of the element and
// divide the mouse coords with those to get normalized coordinates.

var width = parseFloat(style.getPropertyValue('width'));
var height = parseFloat(style.getPropertyValue('height'));

var mouse3D = new THREE.Vector3(
 ( vec.x / width ) * 2 - 1,
 -( vec.y / height ) * 2 + 1,
 0.5
);

There you go. A bit of a hassle, but tractable.

About Me

Built art installations, web sites, graphics libraries, web browsers, mobile apps, desktop apps, media player themes, many nutty prototypes, much bad code, much bad art.

Have freelanced for Verizon, Google, Mozilla, Warner Bros, Sony Pictures, Yahoo!, Microsoft, Valve Software, TDK Electronics.

Ex-Chrome Developer Relations.