art with code

2010-06-24

Funny multiplication tricks

This kinda struck me yesterday: multiplication is additive pattern splatting (a term I just came up with). I've seen binary tricks that use multiplication as a copy operation but never really understood them before now.

With sparse binary patterns it's quite apparent. For example, let's multiply 00100010001001001 by 111. Sounds hard? It's actually easy: replace each 001 in the long pattern with the short pattern and you end up with 11101110111111111. In the same way you can copy a byte to different places in a word. This may sometimes be faster and/or less bother than bit-shifting:

uint64_t fillInt64WithByte_shift(uint8_t b) {
  uint64_t x = b; /* widen first: shifting an 8-bit value by 56 bits would overflow */
  return x << 56 | x << 48 | x << 40 | x << 32 | x << 24 | x << 16 | x << 8 | x;
}

vs.

uint64_t fillInt64WithByte_mul(uint8_t b) {
  return UINT64_C(0x0101010101010101) * b;
}


It also works with decimal numbers: 10101 x 99 is 999999. If the target pattern has a digit that's not a 1, you multiply the source pattern by that digit: 001 002 003 0005 x 203 = 203 406 609 1015.

However, the simple copy splatting only works when the splats don't overlap. With overlaps you need to add the new splat to the target pattern. Let's consider carry-free addition first, using 1001301001 x 211 as the example. First you fill in the non-overlapping parts (the 1001001001 pattern) and end up with 211 211 211 211. To integrate the 3 into the result, you multiply it by 211, shift it to the 3's position, and add it in:

1001301001 x 211 =
211211211211
+   63300000
------------
211274511211


When you have carries, it gets a good deal more tedious. Consider binary 1111 x 1111. First we splat 1111 on the first 1 in the target pattern: 01111000, then on the second 1: 00111100, etc. and finally get 01111000 + 00111100 + 00011110 + 00001111 = 11100001. Drudgery.

We could move it to base-15 and get 10 x 10 = 100, then convert that back to base-2... for no benefit whatsoever, since base-15's 100 is just 1 x 1111^2 = 1111 x 1111 all over again. Or do a trick with 10000 x 1111 - 1111 = 11110000 - 1111 = 11100001.

Or convert to base-16 for F x F = E1, then convert that to base-2 for 1110 0001. Note the similarity between F x F = E1, 9 x 9 = 81, o7 x o7 = o61, b1 x b1 = b01. This can be explained as follows:

For the largest digit N in any base, N = 10 - 1 (writing the base itself as 10):

N x N = N x (10 - 1)
      = N x 10 - N
      = N x 10 - (10 - 1)
      = (N x 10 - 10) + 1
      = (N - 1) x 10 + 1


Back to splat-additive multiplication, it seems to work for positive integers. Does it work for negative ones? Well, yes, 1001001 * -101 = -101000000 + -101000 + -101 = -101101101. And there's that sign-trick to turn multiplication of negative integers to multiplication of positive integers: -A x B = -(A x B), -A x -B = -(-(AxB)) = AxB.

Does it work for matrices... not that I can tell. Matrix multiplication is not splatting anyhow. What would a splat-additive matrix multiplication operation look like? I don't know, let's start exploring by trying to extend the scalar splat-additive multiplication to vectors. If we treat vectors as bucket-based integers (that is, no carries, base-∞), integer-like splat-additive vector multiplication would be quite easy. For example, [1 0 0 2] x [3 6] = [3 6 0 6 12], but what can that be used for? Is there some sensible conversion to scalar from it? Can it be extended to matrices?

As an aside, scalar vector multiplication can be used to make integer multiplication a bit simpler. Using 32 x 1935 as an example: 32 x [1 9 3 5] = [32 288 96 160]. We can convert that to an integer value by doing a base conversion for each element and adding them together: 32*1000 + 288*100 + 96*10 + 160*1 = 61920 = 32 x 1935.

Another way to do splat multiplication with vectors would be to generate a matrix from vector multiplication, an approach not entirely without appeal. [A B] x [C D] would result in [ [AC AD] [BC BD] ]. It can express number multiplication as a special case: AxB would be |A| x |B| = AxB (I'm using |A| to denote a single-element vector of zero dimension). If you have dimensional numbers, it preserves the dimensionality of the result, [Length] x [Width] = [[Area]].

Can the vector-splatting multiplication be extended to vectors of vectors? Seems so:

[ [1 2] [3] ] x [ [5 5] [6 2] ] would be, uh, ...
[ [ [1 2]x[5 5] [1 2]x[6 2] ] [ [3]x[5 5] [3]x[6 2] ] ] =
[ [ [[5 5] [10 10]] [[6 2] [12 4]] ] [ [[15 15]] [[18 6]] ] ]

And to continue with the dimensional numbers,

[Height] x [[Area]] = [[[ Height x Area ]]]
[[Area]] x [[Area]] = [[[[ Hypercube! ]]]]


What properties would this vector-splat multiplication have?

[ A B ] x [ C D ] = [ [AC AD] [BC BD] ]
[ C D ] x [ A B ] = [ [CA CB] [DA DB] ]
Not commutative (unless the equality is sum-equality: A = B if sum(A) = sum(B).)

([ A B ] x [ C D ]) x [ E F ] =
[ [AC AD] [BC BD] ] x [ E F ] =
[ [[ACE ACF] [ADE ADF]] [[BCE BCF] [BDE BDF]] ]

[ A B ] x ([ C D ] x [ E F ]) =
[ A B ] x [ [CE CF] [DE DF] ] =
[ [[ACE ACF] [ADE ADF]] [[BCE BCF] [BDE BDF]] ]
Seems to be associative.

The identity element would be a dimensionless 1:
1 x [ A B ] = [ 1xA 1xB ] = [ A B ]

Note that a dimensional one won't do:
[1] x [ A B ] = [ [A] [B] ]

To do the inverse element, we need vectors with negative dimensions:
I x [ A B ] = 1 requires I to reduce the dimension of [ A B ]

Additionally, the inverse element requires sum-equality (i.e. [A B] = [B A])
[A B] = [C] iff A + B = C
|0.5 0.5| = 0.5 + 0.5 = 1, hello confusing notation for dimensionless vectors
[[A [B] C]] = |[[[B]]] [[A C]]|, hello polynomial notation

I x [ A B ] = 1
[1/(2A) 1/(2B)]^-1 x [ A B ] = |A/(2A) B/(2B)| = |1/2 1/2| = 1

So the inverse function would be
-- get the inverse vector for which V x inv(V) = 1
inv(V) = (map (\i -> 1/(V_len * i)) V)^(-V_dim)


...ghhh. So, uh, I guess with sum-equality it could maybe form a field with some sort of vector addition. The real question is whether there is anything where this multidimensional splat-multiplication works better than existing methods. It looks a lot like polynomials. Maybe the vector addition could be..

|A B| = (A + B)_0
[A B] = (A + B)_1
|[[A B] C] D| = |[[A B]] [C] D| = (A+B)_2 + C_1 + D_0
              = [[A B]] + [C] + |D|

That's kinda interesting: the (A+B)_2 isn't (A+B) squared but rather means that (A+B) is a two-dimensional number.

Oh well, back to the grind.

2010-06-06

WebGL stereo rendering demo


Cross your eyes and prepare for eyestrain, here comes the WebGL stereogram. Drag with LMB to rotate, drag with MMB to pan, wheel zooms, shift-wheel scales model.

The demo uses glScissor and glViewport to restrict drawing to one half of the viewport, then draws the scene to the left and right halves with the camera offset slightly. The camera offset is along the camera's right-vector, which you get from the cross product of the look and up vectors: right = cross((lookAt - position), up), then leftEye = position - (sep/2 * right); rightEye = position + (sep/2 * right). Then you render the left viewport from rightEye and the right viewport from leftEye, swapped for cross-eyed viewing.

2010-06-02

Loading WebGL models from tarballs



Plugged the TarGZ loader together with my old OBJ and binary model loaders and emerged with a couple of demos that load a model and a texture from a gzipped tarball and render them using WebGL.

Verdict: use uncompressed tarballs and HTTP gzip compression whenever possible. Doing the gzip decompression in JS doesn't cache the uncompressed data transparently (you'd have to do an offline storage hack), and it's kinda slow. Tar vs. plain files is a slightly different tradeoff: with tar you only have to manage a single file; with plain files you need to manage the whole directory hierarchy they live in, which may not be something you can or want to do with your file serving host.

One solution is uncompressed tarballs and a .htaccess with a deflate filter set for them, though it seems to screw up caching somehow (thanks to Evgeny Demidov for the .htaccess tip):

AddOutputFilterByType DEFLATE text/html text/plain application/x-tar application/javascript

<Files *.bin>
SetOutputFilter DEFLATE
</Files>

which also helps for JSON models and OBJs and other textual things. The main problem with the tarballs is that loading the images takes a bit longer (read: hundredths to tenths of a second) due to having to dataURLize them. Well, if you pre-dataURL the images, that won't be a problem.

The main benefit of the tarballs is that everything's in a single file and you don't need a chain loader to initialize models in a single step. Instead of first showing an untextured model, then sometime later the model with a diffuse texture, then later the model with a diffuse texture and a normal map, etc., with single-step initialization you get the full textured model once all the data is loaded. Plus, if you have a standard model format of some sort with manifests and whatnot, you can swap models by changing a single URL. You can do that with a plain model.xml or somesuch as well, but then you need to manage multiple files. It's harder to reuse resources inside tarballs than resources as separate files, though.

Also updated the big whale model viewer to be mouse-controllable (drag with LMB to rotate, drag with MMB to pan, scroll wheel to zoom, shift-scroll wheel to scale).
