art with code


MM2 - part 2 - React + Meteor + PostgreSQL

Finally got the Meteor postgres-packages react-todos example working with user accounts and my dataset. But my dataset is 67k rows, which completely obliterates the app. I benchmarked Knex to see if it can handle a query result of that size. The "select * from items;" query takes around 2.3 seconds. Doing a pg_dump of the table takes 0.8 seconds. Running cat * > foo on the table files takes 0.7 seconds. Running grep on the table files takes around 2.7s, so the Knex result is pretty good.

And I also found out that PostgreSQL NOTIFY fails if the notification payload is too large: payloads are limited to just under 8000 bytes. The Meteor Postgres integration creates triggers that send over the entire updated row, which would be a decent idea if there weren't a payload size limit. As it is, I probably need to hack Meteor to include only the changed row id in the notification and do a SELECT to get the row contents.
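A sketch of what the id-only approach might look like. The table and channel names here are made up, and the Node side assumes a node-postgres-style client; the point is that a primary key always fits under the payload limit, and the receiver fetches the row itself:

```javascript
// Hypothetical trigger: notify with only the changed row's id,
// not the whole row, so the payload stays tiny.
var triggerSQL = [
  "CREATE OR REPLACE FUNCTION notify_items_changed() RETURNS trigger AS $$",
  "BEGIN",
  "  PERFORM pg_notify('items_changed', NEW.id::text);",
  "  RETURN NEW;",
  "END;",
  "$$ LANGUAGE plpgsql;",
  "",
  "CREATE TRIGGER items_changed AFTER INSERT OR UPDATE ON items",
  "FOR EACH ROW EXECUTE PROCEDURE notify_items_changed();"
].join('\n');

// On the Node side, re-fetch the changed row when a notification arrives.
// `client` is assumed to be a connected node-postgres client.
function listenForChanges(client, onRow) {
  client.query('LISTEN items_changed');
  client.on('notification', function(msg) {
    client.query('SELECT * FROM items WHERE id = $1', [msg.payload],
      function(err, result) {
        if (!err) onRow(result.rows[0]);
      });
  });
}
```

The extra SELECT per notification is the price for never hitting the payload cap.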

Still don't fully understand React. I think you create custom elements and hook them all together and magic happens. And then the app hangs trying to update your 70k item list every 3 seconds, and you go "Hmm. We need to go deeper."

Currently gravitating towards using Meteor + React as the outer shell for the app and for detail views and other non-insane things. For the things that need to deal with the item list, I'll brew my own cauldron of madness.

And Meteor seems to lose my logged-in status on server restarts. It seems to be specific to the PostgreSQL version, as the MongoDB one doesn't suffer from that. [edit] Tracked it down to accounts-base/os/accounts_server.js deleting all user sessions on startup. I can't understand why it would do that, and the comment doesn't make it any clearer.


Mugen Megacity 2 - part 1

Quick progress update on the project: What I'm building is a web-based file manager / research assistant to help manage all my stuff. It's based on numerous hacks, prototypes, private sites & research between 2003-2010. I'm working on top of a PostgreSQL database with the file metadata in it & an ancient web site that's powered by the DB. My first goal is to drag the site screaming into the present day (so that I can preserve my sanity, etc.)

In that vein, my sub-goals are to add a proper user-accounts system to the site, switch from the current /uploads/2007/01-20-Sat/foo.jpg directory structure to a /ab/23/4f/f8180a78b789c90912whatever SHA1-keyed filestore, and rewrite the front-end to be a) simpler, b) faster, and c) better.

I've had user accounts on my todo for the past three days, and I've successfully pushed the peas around the plate instead of getting them done. Ended up going further down the Meteor crater, now fiddling with Meteor PostgreSQL bindings to use the Meteor account system for the site. And it's working, just need to bring them over to the site database.

Did the filestore switch-over, with symlinks going from the old locations to the store. Next step there is to route the file requests from friendly names to unfriendly names.

Also got an SSL cert for my domain from Let's Encrypt, which is awesome. Ready to HTTPS everything!

Experimenting with cstore_fdw PostgreSQL column store plugin for storing the file metadata. I've got potentially quite a lot of metadata per file, and different views want different columns of it. Might see 4x query speedups from the column store. Might get a 10000x speedup from loading the whole 30-meg metadata DB in RAM and keeping it there...

Compared to 2007, one thing that's amazing about today's web is the existence of a whole lot of JS ports of file viewers and editors. Going to get rid of all the <embed> viewers and use media tags and JS decoders instead. Or platform viewers where applicable.

Going to need to bring back the fast thumbnail cache too. Or, more likely, reimplement one.

Go go go, time is a-wasting!


Puttering along

In the lands of software, the following has happened:

Weir+Wong & Google Art Institute & British Museum launched the Museum of the World microsite, where I was the front-end dev. Great project, even somewhat useful!

I compiled ZXing-CPP to asm.js using Emscripten. Which is surprisingly easy. And runs fast (processing realtime video, no problem). But the resulting JS file is 288kB gzipped, which may be a bit iffy. You can check out the result here.

I started looking at my ancient website from a decade ago and thinking that I should make a modern version. Well. These things go as they go, and I ended up following the Meteor tutorial. Because my task for today was to add DB-driven user accounts & login to the site, which is a jumble of Ruby CGI scripts and PostgreSQL. And clearly, the best way to accomplish that is to learn the basics of a new JavaScript framework.

That said, Meteor is cool. You get things done verrry fast. Adding user accounts to the app takes two command line invocations and a line added to your HTML template.

I'll post updates on the status of the Old Site as I rumble along. I think it's still a pretty cool project and should be even cooler with modern tech bolted onto it. I'll throw it on GitHub and write a tutorial on how to build a similar thing, when it graduates from the current rat's nest of CGI scripts to something less insane. REWRITE IN HASKELL GRRRAAAA

Money-wise, should be able to survive in one way or another (noodles and porridge ahoy) for a good while. So, huh, why not. God knows I've been doing too much paying work lately.

Speaking of non-paying work, the Opus Live Wallpaper Pack is now free to download & has no ads either. If it runs on your device, try it out, you might like some of the effects, or at least the wacky feeling of having real-time ray-traced animations as your phone wallpaper. To all you amazing people who bought a copy, I bow to you in gratitude! And will spend the $25 after-tax profits on the best pints I can find. Tag along, I'll buy you a round :)

We also have some more cool new releases coming shortly, so stay tuned!


AI bits

Reading through this article on Nick Bostrom and AI. A few quick thoughts.

Deep learning is the thing that's really good at spotting patterns and mapping them to other patterns. It seems to be some sort of general algorithm that can be made to do most anything, given data and computational resources. Suppose you started matching time series to predict futures. And the actions required to make said futures more likely. And ranking the futures based on the predicted prediction power of the network.

The game-learning deep learning networks don't seem to be far off from networks that would learn to drive a car. Or play tennis. Or optimize the learning ability of learning networks. Or convey information about what is visible in the sensors (see: adding descriptions to images). Or take a description and convert it into an image (yes, this is also being done). How about taking two descriptions and figuring out how to convert one into the other? And applying that to the sensory data: "There is an orange on the table." -> "There is a peeled orange on the table." -> How to convert the sensory data of an orange into a peeled orange -> Peel it. Add in the goal-making apparatus from above...

The paperclip super-AI scenario doesn't feel like a long-term stable state. In the short term, yes, a paperclip-making AI that evolves itself and converts the Earth into paperclips could well be an existential threat. But in the long term, focus on evolution. A variant of the AI evolves that uses slightly less of the available matter and energy to build paperclips and spends the savings on self-replication instead. In a couple dozen generations, it'll out-compete the original paperclip AI. Iterate a few times and you'll end up with a self-replicator whose aesthetic seems to revolve around paperclips.

Similarly, take a super-AI that doesn't fully understand human culture and a super-AI that does: would the latter outcompete the former? And in fully understanding human culture, would it consider its fellow culture-carriers something to eradicate or something to live with? I'm not very optimistic here, though. You could make the argument that all life on Earth is literally part of the same thing (think of a bacterium dividing: the resulting two bacteria are in a sense parts of the original bacterium. Recurse to the entire biosphere.) And yet, life eats, destroys and reshapes itself in drastic ways. Why not culture-carriers as well?


"Investing" -- 1-year update

Back a year ago I started using my surplus income to buy stocks, with the goal of achieving 5x growth over a 5-10-year period. Well. If you had shorted my portfolio, you'd be well on your way there. Currently I'm down 70% in portfolio value. Which is painful. Welcome to the land of Dunning-Kruger.

The biggest problem: My holdings in one company constituted 60% of my portfolio. The bull case was that the company was more or less okay (in which case, I predicted return of 3-10x). The bear case was that the company accounts were fraudulent. And they were. Company now in administration, stock worth zero. Risk management, sounds like a useful thing, yes?

The second-biggest problem: Buying stuff that's cheap and complex. I mean, 99.7% cheaper than it was 5 years ago, with a rat's nest of corporate structure and a large chunk of the potential value in the bankruptcy proceedings of a defaulted bond. Because, hey, can it get any worse? (Yes...)

The third-biggest problem: Buying stuff that's facing macro headwinds. Fashion brands peaking (they go on a long decline...), ultra-deep-water oil drilling in times of cheap oil (my understanding is that UDW is profitable when oil costs more than $60/barrel; when oil supply exceeds demand, the price is closer to $40-50, which will put UDW companies out of business in 2-3 years).

The fourth-biggest problem: Buying stuff that's expensive. P/E 70? Interesting! Imagine a profit warning that drops YoY growth to a pesky 30%. Now imagine a 50% drop in the stock price. (Well, I did buy at the bottom, and now that chunk is up 50%.)

Successes, uh. The problem with these is they're small and have little impact. Bought a few stocks on dips following bad news that didn't seem to have much to do with the long-term business survival (first one is up 50%, second sold at 40% win). But that's more luck than anything else. Sold a few things for a small win.

If there's a positive side to all this, it's that now (after 6 months of non-stock saving...) I'm more or less back to where I started. If I had weighted my portfolio at 10% per company, I'd still be down ~15%, so I don't much trust my analysis.

I'm thinking now that it's a better idea to hire an intern and try to expand my own business. At least they'll have money for rent & food :)


Mouse event coordinates on CSS transformed elements

How to turn mouse event coordinates into element-relative coordinates when the element has CSS transforms applied to it? Conceptually it's simple. You need to get the layerX and layerY of the mouse event, then transform those with the CSS transforms. The implementation is a bit tricky.

The following snippet is what I'm using with Three.js to convert renderer.domElement click coordinates to a mouse3D vector used for picking. If you just need the pixel x/y coords on the element, skip the mouse3D part.

// First get the computed transform and transform-origin of the event target.
var style = getComputedStyle(ev.target);
var elementTransform = style.getPropertyValue('transform');
var elementTransformOrigin = style.getPropertyValue('transform-origin');

// Parse the transform-origin, e.g. "320px 240px" (the z part may be missing).
var xyz = elementTransformOrigin.replace(/px/g, '').split(' ');
xyz[0] = parseFloat(xyz[0]);
xyz[1] = parseFloat(xyz[1]);
xyz[2] = parseFloat(xyz[2] || '0');

// Convert the transform into a Three.js matrix. Computed transforms come
// back comma-separated: "matrix(a, b, c, d, tx, ty)" or "matrix3d(...)".
var mat = new THREE.Matrix4();
if (/^matrix\(/.test(elementTransform)) {
  var elems = elementTransform.replace(/^matrix\(|\)$/g, '').split(/,\s*/);
  mat.elements[0] = parseFloat(elems[0]);
  mat.elements[1] = parseFloat(elems[1]);
  mat.elements[4] = parseFloat(elems[2]);
  mat.elements[5] = parseFloat(elems[3]);
  mat.elements[12] = parseFloat(elems[4]);
  mat.elements[13] = parseFloat(elems[5]);
} else if (/^matrix3d\(/i.test(elementTransform)) {
  var elems = elementTransform.replace(/^matrix3d\(|\)$/ig, '').split(/,\s*/);
  for (var i = 0; i < 16; i++) {
    mat.elements[i] = parseFloat(elems[i]);
  }
}

// Apply the transform-origin to the transform:
// translate(origin) * transform * translate(-origin).
var originMat = new THREE.Matrix4();
originMat.makeTranslation(xyz[0], xyz[1], xyz[2]);
var negOriginMat = new THREE.Matrix4();
negOriginMat.makeTranslation(-xyz[0], -xyz[1], -xyz[2]);
mat = originMat.multiply(mat).multiply(negOriginMat);

// Multiply the event layer coordinates with the transformation matrix.
var vec = new THREE.Vector3(ev.layerX, ev.layerY, 0);
vec.applyMatrix4(mat);

// Yay, now vec.x and vec.y are in element coordinate system.

// Optional: get the untransformed width and height of the element and
// divide the mouse coords with those to get normalized coordinates.
var width = parseFloat(style.getPropertyValue('width'));
var height = parseFloat(style.getPropertyValue('height'));

var mouse3D = new THREE.Vector3(
  ( vec.x / width ) * 2 - 1,
  -( vec.y / height ) * 2 + 1,
  0.5
);
There you go. A bit of a hassle, but tractable.
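If Three.js isn't already on the page, the common 2D matrix() case boils down to a little plain arithmetic. A sketch, with made-up function names:

```javascript
// Parse a computed 2D transform like "matrix(2, 0, 0, 2, 5, 0)"
// into its six numbers [a, b, c, d, tx, ty].
function parseMatrix2d(transform) {
  return transform
    .replace(/^matrix\(|\)$/g, '')
    .split(/,\s*|\s+/)
    .map(parseFloat);
}

// Apply translate(origin) * matrix * translate(-origin) to a point.
function applyTransform(point, m, origin) {
  // Shift into the transform-origin frame...
  var x = point.x - origin.x;
  var y = point.y - origin.y;
  // ...apply the 2D affine matrix...
  var tx = m[0] * x + m[2] * y + m[4];
  var ty = m[1] * x + m[3] * y + m[5];
  // ...and shift back out.
  return { x: tx + origin.x, y: ty + origin.y };
}
```

Usage would be along the lines of `applyTransform({x: ev.layerX, y: ev.layerY}, parseMatrix2d(t), origin)`.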


The way Facebook pays zero taxes

Facebook paid less than £5000 in corporate tax last year in the UK. By reasonable methods of accounting, they should've paid around £40 million in UK corporate tax (based on global profits of £2bn and assuming that the UK accounts for around 10% of that.) How does Facebook do that?
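For the record, the arithmetic behind that £40 million figure, assuming the roughly 20% UK corporation tax rate of the time:

```javascript
var globalProfit = 2e9;   // ~£2bn global profits
var ukShare = 0.10;       // assume the UK accounts for ~10% of that
var ukTaxRate = 0.20;     // UK corporation tax rate, roughly 20%

var expectedUkTax = globalProfit * ukShare * ukTaxRate;
// expectedUkTax is 40,000,000: about £40m, versus less than £5000 paid.
```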

Simple: they don't register a profit in the UK. Facebook UK buys a service from another company owned by Facebook, and the price of this service happens to equal Facebook UK's entire profit. The other company is registered in a tax haven and pays zero income tax. Usually this is highly illegal and gets your entire C-suite sent to prison ASAP. But because the European Union lacks unity, there's an obscure Rube Goldberg accounting machine that funnels the money through multiple EU states and tax systems and lands it in the bank account of a shell company in Bermuda, giving the company a fig leaf of justification to put a big fat zero in the profit box of its yearly accounts.

Not having to pay corporate tax gives Facebook a major leg up in the market. They've got more cash to invest and they can invest it when they want (instead of the usual "Oh no, the end of the tax year is coming! We have to get rid of all this money, let's buy some useless junk, stat!!") Over a single year, Facebook may only be able to invest 10-20% more than an identical tax-paying competitor. However, over a decade, the difference compounds. The end result is that Facebook becomes several times larger than its tax-paying-but-otherwise-identical competitors.
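A sketch of that compounding claim (the 15% figure is illustrative, splitting the 10-20% range above):

```javascript
// If the untaxed company can reinvest some extra fraction each year,
// its size relative to an otherwise-identical tax-paying twin
// compounds as (1 + extraPerYear)^years.
function relativeAdvantage(extraPerYear, years) {
  return Math.pow(1 + extraPerYear, years);
}
// relativeAdvantage(0.15, 10) is about 4: roughly four times
// the capital of the tax-paying twin after a decade.
```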

The next frontier in not paying tax for Facebook is the US tax system. The US corporate tax deducts 35% from Facebook's profits every year, making it highly vulnerable to competition that doesn't pay US taxes. Once Facebook manages to not pay tax in the US, its growth is going to go stratospheric and make it wildly successful.

About Me


Built art installations, web sites, graphics libraries, web browsers, mobile apps, desktop apps, media player themes, many nutty prototypes, much bad code, much bad art.

Have freelanced for Verizon, Google, Mozilla, Warner Bros, Sony Pictures, Yahoo!, Microsoft, Valve Software, TDK Electronics.

Ex-Chrome Developer Relations.