art with code


Three notebook system - part 1 / 3

In the past few years, I've started to gravitate towards a work scheduling / external memory system to keep my projects rolling smoothly. What I've got going now is a three notebook system, which, as you might guess, revolves around three notebooks. Let me give you a brief overview of the system.

I use the three notebooks to record what I plan to do, what I should be doing next and what I have done. The notebooks operate on different timescales. The planning notebook is concerned with quarterly plans. It moves along at a very leisurely pace. The daily notebook sets goals for the day and the week. It's got roughly a week's worth of goals on one spread. The third notebook is the work log. It records what I've been doing, how long it took to do it, and what I learned in the process.

When I start my day, I take a quick look at the planning notebook to remind myself of my medium-to-long-term goals. Then I write the first few tidbits to the work log: simple stuff like "6:30 Woke up, breakfast, shower. 7:30 Start of day. 7:45 Made Opus SA icons in Photoshop. 8:00 -> Write daily goals [x] ...". At the start of the day, I write my goals for the day into the daily notebook. At the start of the week, I also set some higher-level goals for the week.

The level of detail in each of the notebooks is quite different. The planning notebook deals in high-level plans and their measurable results. In it, I write strategic goals with planned quarterly-level tactics on achieving those. The daily notebook has weekly goals that support the quarterly tactics and daily goals that deal with the minutiae of scheduling and achieving the weekly goals. The work log acts more as a short-term memory extension. I use it to plan my next action during the day, keep myself focused and maintain a sense of progress.

With the three notebooks I've got guidance on where I'm headed in the future, what I'm planning to do this week, what I'm going to do today and how that's working out so far. The big idea here is to try and align my short-term actions to my long-term objectives.

As I progress through time, I tweak the goals as the situation changes. Tweaking the goals in turn tweaks the daily goal planning. The goals for the previous days are not the goals for today. This flexibility gives me the ability to respond to changes rapidly without losing sight of the long-term goals.

In conclusion, the three notebooks keep me focused on what I'm doing now and how that's going to help me in the future. The notebooks act as goal-oriented external memories at different timescales. By keeping track of my use of time, they also give me a better sense for how long it takes to do things.

I'll take a closer look at each of the notebooks in part 2 and go through the practical experience in part 3. Thanks for reading! What kind of planning systems do you use to get your work done?


Opus Live Wallpaper

I ported a bunch of the fhtr.org effects over to Android to use as a live wallpaper. The wallpaper is available on the Google Play Store as either a $0.99 version or an ad-funded version (hey, there's also a small fireworks wallpaper with a web preview). My total income from these over the last month has been a bit less than $5, so, well, it's been more educational than anything else :D

The effects look like the pictures below or the video above. I like having them as wallpapers; they give a nice feel to my phone. You probably need something like an Adreno 330 to run them well, so phones like the LG G2 and above.

Porting the shaders over from WebGL was easy for the most part. There were a whole lot of performance tweaks that I had to apply. One of the most important ones was rendering into an FBO at a lower resolution and then upscaling the FBO to screen res. With HiDPI displays, the difference between full res and scaled up is not very drastic.

Another technique I used was rendering at a lower framerate. The problem there is that you want to keep the main UI interactive while you're rendering. If you do it naively with "render frame -> wait 2 frames -> render frame -> ...", and the "render frame" bit takes longer than 16ms, you'll cause the main UI to skip frames. Which is bad. The solution I arrived at was to render a part of the wallpaper frame during the main UI frames, then display the finished frame at the end: "render 1/3 of frame -> render 2/3 of frame -> render 3/3 of frame & display it -> ...". This way I can keep the main UI interactive while rendering the wallpaper frame.

But we're not out of the woods yet! Some wallpapers have a central part that is very heavy to render and edges that are cheap. If we split the frame into e.g. 3 slices, one of the slices would take the majority of the time, and potentially cause the main UI to skip frames. So we need to find a split that distributes the work equally among the UI frames. To do the partial frame rendering, I slice the frame into 8 px vertical stripes. On each UI frame, I render the wallpaper stripes belonging to that frame (stripeIndex % numberOfUIFramesPerWallpaperFrame == frameNumber % numberOfUIFramesPerWallpaperFrame). As the UI frame number increases, the renderer moves across the wallpaper frame, drawing it in an interleaved fashion. This gives me a fairly even split of work per frame and gets me a consistent frame rate for the UI.
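The interleaved stripe selection above can be sketched as a little standalone function. This is a Haskell sketch with names of my own choosing, not the actual Android code (which does this in Java on the GL thread):

```haskell
-- Which 8 px stripes to draw on UI frame f, when one wallpaper frame is
-- spread over n UI frames: every stripe i with i `mod` n == f `mod` n.
stripesForUIFrame :: Int -> Int -> Int -> [Int]
stripesForUIFrame n frameWidth f =
  [ i | i <- [0 .. numStripes - 1], i `mod` n == f `mod` n ]
  where
    numStripes = (frameWidth + 7) `div` 8  -- ceiling (frameWidth / 8)

main :: IO ()
main = mapM_ (print . stripesForUIFrame 3 64) [0, 1, 2]
-- Prints [0,3,6], [1,4,7] and [2,5]: eight stripes spread over three
-- UI frames, so no single UI frame draws the whole heavy center.
```

Because consecutive stripes land on different UI frames, an expensive central region gets split across all the frames instead of landing in one slice.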

If you have a try at the wallpapers, lemme know how it goes. Would be super interested in hearing your thoughts.


In which I make videos

Edited a video with After Effects.

Proceeded to follow a motion graphics tutorial.

And then I started making a website. Country life!

What the hey

Recently: Writing a web app backed by a bunch of CGI programs written in Haskell, running on Mighttpd2, a web server written in Haskell. And a whole lot of JavaScript to make it run. It's using PostgreSQL to store its stuff.

What the app does: It's a website with a shop component. You can edit the pages by writing HTML into text fields. Yeah. Let's shake our collective heads together.

How's it going: I like writing Haskell though I don't like the Cabal dephell. I probably have to use a different web server as the Mighttpd2 version in Ubuntu 14.04 doesn't do HTTPS and I haven't had luck installing an HTTPS-supporting version from Cabal. Eh. JavaScript is, as usual, difficult to keep sane. CSS helps keep the JS less gnarly. I kinda like the little experiments in the page structure. First I had all the site content inlined into a single page, then I moved it out to HTML files that the JS pulls in (and inlined the front page using a build script). Then I moved the HTML files into a database and now I'm fetching all of them in a JSON array at page load, so it's sort of going back to the inline-everything-thing.

How about that shop: The shop part is building shopping carts for PayPal checkout. Which does work, though a completed payment should also ping the server to update the inventory.

How it does what it does: The client does all the interesting bits. The server just serves JSON from the database. The admin client edits the JSON objects it receives from the server and sends them back to save them. The cool bit is that Aeson, a Haskell JSON library, typechecks the whole thing, decoding and encoding the JSON with a minimum amount of code on my part.

-- ListPages.hs
-- GET -> JSON

-- Returns a JSON array of pages.

import Network.CGI
import Data.Aeson
import PageDatabase
import SiteDatabase

instance ToJSON Page

main = runCGI $ handleErrors cgiMain

cgiMain = do
  setHeader "Content-Type" "application/json"
  setHeader "Access-Control-Allow-Origin" "*" -- set CORS header to allow calls from anywhere
  liftIO listJSON >>= outputFPS

listJSON = fmap encode $ withDB listPublishedPages

-- listJSON :: IO Lazy.ByteString
-- listPublishedPages :: Connection -> IO [Page] -- fetches the list of pages from the database where published = true
-- Data.Aeson.encode turns [Page] into a ByteString of JSON.
-- See the "instance ToJSON Page" above?
-- That is all I need to do to get type-safe JSON encode and decode.
-- As long as I use "deriving (Generic)" in my type definition, that is.

Here's the Page data type from the PageDatabase module:

data Page = Page {
  page_id :: Int64,
  bullet :: String,
  body :: String,
  published :: Bool
}  deriving (Show,Generic)
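As a quick sanity check of the type-safe encode/decode claim, here's a standalone roundtrip sketch. It adds the pragma and imports the PageDatabase module would also need; the Eq instance and the sample values are mine, only there so the result can be compared:

```haskell
{-# LANGUAGE DeriveGeneric #-}
import Data.Aeson (FromJSON, ToJSON, decode, encode)
import Data.Int (Int64)
import GHC.Generics (Generic)

data Page = Page {
  page_id :: Int64,
  bullet :: String,
  body :: String,
  published :: Bool
} deriving (Show, Eq, Generic)

instance ToJSON Page   -- gives encode :: Page -> ByteString, derived via Generic
instance FromJSON Page -- gives decode :: ByteString -> Maybe Page, ditto

main :: IO ()
main = do
  let p = Page 1 "About" "<p>Hello</p>" True
  print (decode (encode p) == Just p) -- roundtrip preserves the value
```

If a field name or type in the JSON doesn't match the record, decode returns Nothing instead of a half-parsed object, which is where the "failed to typecheck in decode" case in EditPage.hs comes from.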

This is how I deal with the JSON that the client sends:

-- EditPage.hs
-- POST (Page) -> JSON true | false

import Control.Monad
import Network.CGI

import SiteDatabase
import PageDatabase
import SessionDatabase
import GHC.Int
import Data.Aeson

instance FromJSON Page -- Make Data.Aeson.decode parse JSON into Page objects.

main = runCGI $ handleErrors cgiMain

cgiMain = do
  body <- getInputFPS "body"
  authToken <- getAuthToken -- helper that deals with session cookies, CSRF tokens and login user/pass params
  msg <- liftIO (maybe noPage (editPageDB authToken) (body >>= decode)) -- decode turns the body JSON into a Page object
  setHeader "Content-Type" "application/json"
  output msg

noPage = return "false" -- No body param received or the body param failed to typecheck in decode.

editPageDB authToken page = do
  rv <- withDB (\conn -> authenticate conn authToken (editPage conn page)) :: IO Int64 -- authenticate runs editPage if the authToken is OK
  case rv of
    1 -> return "true"  -- Edit successful
    _ -> return "false" -- Page not found or auth failed

Isn't CGI kinda slow: Dunno. Testing on an EC2 micro instance, an HTTP request for a JSON array of all the ten pages in the DB takes about 10 ms.

Couldn't you just use Weebly / Squarespace / Wix / Whatever: Hey! Watch it! No! Of course not!


Social things

Public service announcement: I shut down my Google+, Twitter and Facebook accounts. I wasn't using them correctly and wasted too much time on them. So if you saw me suddenly disappear, no worries.


Spinners on the membrane

I wonder if you could do image loading spinners purely in CSS. The spinner part you can do by using an animated SVG spinner as the background-image and inlining that as a data URL. Detecting image load state, now that's the problem. I don't think images have any attribute that flips to true when the image is loaded. There's img.complete on the DOM side, but I don't think that's exposed to CSS. And it's probably tough to style a parent element based on child states. And <img> elements are fiddly beasts to begin with.

If you had a :loading (or some sort of extended :unresolved) pseudo-class that bubbled up the tree, that might do it. You could do something like:

.spinner-overlay {
  background: white url(spinner.svg) 50% 50% no-repeat;
  width: 100%; height: 100%;
  opacity: 0;
}
*:loading > .spinner-overlay { opacity: 1; transition: 0.5s; }

Now when your image is loading, it'd have a spinner on top of it and it'd go away when it stopped loading. And since :loading doesn't care about what is loading, it'd also work for background-images, video, audio, iframes, web components, whatever.


Sony A7, two months in

I bought a Sony A7 and a couple of lenses about two months ago to replace the Canon 5D Mk II that I had before. Here are my experiences with the A7 after two months of shooting. I've taken around 1500 shots on the A7, so these are still beginner stage thoughts.

If you're not familiar with it, the Sony A7 is a full-frame mirrorless camera at a reasonable (for full-frame) price point. It's small, it's light, and it's full-frame. The image quality is amazing. You can also find Sony E-mount adapters to fit pretty much any lens out there. If you have a cupboard full of vintage glass or want to fiddle with old Leica lenses or suchlike, the A7 lets you shoot those at full-frame with all the digital goodness of a modern camera. You probably end up adjusting the aperture and focus manually on adapted lenses though. There are AF-capable adapters for Sony A and Canon EF lenses, but the rest are manual focus only.

It's not too much of a problem though, since manual focusing on the A7 is nice. You can easily zoom in the viewfinder to find the right focus point and the focus peaking feature makes zone focusing workable. It's not perfect ergonomically, as your MF workflow tends to go "press zoom button twice to zoom in, focus, press again for even closer zoom, focus if need be, press zoom button again to zoom out, frame." What I'd like to have is "press zoom button, focus, press zoom button, frame." Or zoom in just one area of the viewfinder, leave frame border zoomed out so that you can frame the picture while zoomed in.

Autofocus is fiddly. Taking portraits with a shallow depth-of-field, the one-point focus mode consistently focuses on the high-contrast outline of the face, which tends to leave the near eye blurry. Switching to the small movable focus point, you can nail focus on the near eye but now the AF starts hunting more. And I couldn't find a setting to disable hunting when the camera can't find a focus point (like on Canon DSLRs), so AF workflow is a bit slower than I'd like. Moving the focus point around is also kinda slow: you press a button to put the camera into focus selection mode, then slowly d-pad the focus point to where you want it.

The camera has a pre-focus feature that autofocuses the lens when you point the camera at something. I guess the idea is to have a slightly faster shooting experience. I turned the pre-focus off to save battery.

The camera's sleep mode is defective. Waking up from sleep takes several seconds. Turning the camera off and on again seems to get me into ready-to-shoot state faster than waking the camera by pressing the shutter button. Because of that and the short battery life, I turn the camera off when carrying it around.

The Sony A7 has a built-in electronic viewfinder. The EVF is nice & sharp (it's a tiny 1920x1200 OLED display!) When you lift the camera to your eye, it switches over to the EVF. Or if you get your body close to the EVF. Or if you place the camera close to a wall. This can be a bit annoying, but you can make the camera use only the rear screen or the viewfinder. Note that the viewfinder uses a bit more battery than the rear screen. Probably not enough to show up in your shot count though.

If you have image preview on, it appears on the EVF after you take a shot. This can be very disorienting and slows you down, so I recommend turning image preview off.

The rear screen can be tilted up and down. That is a huge plus, especially with wide-angle lenses. You can put the camera to the ground or lift it above your head and still see what you're shooting. You can also use the tiltable screen to shoot medium-format style with a waist-level viewfinder. The rear screen has great color and resolution as well, it's just lovely to use.

The glass on the rear screen seems to be rather fragile. After two months of use, I've got two hairline scratches on mine. Buy a screen protector and install it in a dust-free place. Otherwise you'll have to buy another one, ha! The camera is not weather-sealed either, so be careful with it. Other than the rear screen glass and the battery cover, the build quality of the body is good. It feels very solid.

The A7 has some weird flaring, probably due to the sensor design. Bright lights create green and purple flare outwards from the center of the frame. This might improve center sharpness for backlit shots, but for night time shots with a streetlight in the corner of the frame it's rather ugly.

One nice feature of the A7 is the ability to turn on APS-C size capture. This lets you use crop-factor lenses on the A7. And it also lets you use your full-frame lenses as crop-factor lenses. For instance, I can use my 55mm f/1.8 as an 83mm f/2.5 (in terms of DoF; in T-stops it's still T/1.8, i.e. has the same amount of light hitting each photosite). I lose a stop of shallow DoF and a bunch of megapixels but get a portrait lens that uses the sharpest portion of the frame.

Speaking of lenses, my lineup consists of the kit lens (a 28-70mm f/3.5-5.6), the Zeiss FE 55mm f/1.8 and an M-mount 21mm f/1.8 Voigtländer, mounted using the Voigtländer VM-E close focusing adapter. If I use the primes with the APS-C trick, I can shoot at 21mm f/1.8, 32mm f/2.5, 55mm f/1.8 and 83mm f/2.5. Wide-angle to portrait on two lenses!

The Zeiss FE 55mm f/1.8 is an expensive, well-built lens that looks cool and makes some super sharp images. Sharp enough that for soft portraits I have to dial clarity to -10 in Lightroom. Otherwise the texture of the skin comes out too strong and people go "Oh no! I look like a tree!" Which may not be what you want. If you shoot it at f/5.6 or so, the sharpness adds a good deal of interest in the image. It's a sort of hyperreal lens with the extreme sharpness and microcontrast.

The Voigtländer Ultron 21mm f/1.8 is an expensive, well-built lens that looks cool and makes sharp and interesting images, thanks to the 21mm focal length. Think portraits with environment, making people look like giants or dramatic architectural shots. It's manual focus and manual aperture though, so you'll get a lot of exercise in zone focusing and estimating exposure. The Ultron is a Leica M-mount lens, so you need an adapter to mount it onto the A7. One minus on the lens and the adapter is that they're heavy. Size-wise the combo is very compact, but weighs the same as a DSLR lens at 530g.

For a wide-angle lens, the Ultron's main weakness is its 50 cm close focus distance. But on the A7, you can use the VM-E close focus adapter and bring that down to 20 cm or so. Which nets you very nice bokeh close-ups. But blurs out infinity, even when stopped down to f/22.

The Voigtländer on the A7 has some minor purple cast on skies at the edges of the frame. It can be fixed in Lightroom by tweaking purple and magenta hues to -100. The lens has a good deal of vignetting, which is fixable with +50 lens vignetting in Lightroom. When fixing vignetting at high ISOs, take care not to blow the corner shadows into the land of way too much noise.

The kit lens is quite dark at f/5.6 on the long end, and it isn't as sharp as the two primes, but it's quite light and compact. And, it has image stabilization, so you can get decently sharp pictures even at slow shutter speeds. Coupled with the good high-ISO performance of the A7 and the auto-focus light, the kit lens is usable even in dark conditions. I haven't used it much though, I like shooting the two primes more.

Battery life is around 400 shots per charge. Not good, but I manage a day of casual snapping. The battery door seems to be a bit loose, I've had it open a couple of times when I took the camera out of the bag.

The camera has built-in WiFi that the PlayMemories software uses for transferring images to your computer or smartphone. You can even do tethered shooting over WiFi, but I haven't tried that. Transferring RAW photos from the camera to a laptop over WiFi is very convenient but quite slow. Transferring JPEGs to a smartphone is fast, though you want to batch the transfers as setting up and tearing down the WiFi connection takes about 20 seconds. When you're shooting, you probably want to switch the camera to airplane mode and save battery.

I ended up shooting in RAW. I couldn't make the JPEGs look like I wanted out of the camera, so hey, make Lightroom presets to screw with the colors. The RAWs are nice! Lots of room for adjustment, good shadow detail, compressed size is 25 megs. You have to be careful about clipping whites (zebras help there), as very bright whites seem to start losing color before they start losing value. The RAW compression supposedly screws with high-contrast edges, but I haven't noticed the effect.

Due to shooting primarily in RAW, I find that I'm using the A7 differently from my previous cameras. I used the Canon 5D Mk II and my compact Olympus XZ-1 as self-contained picture making machines. I shot JPEGs and did only minor tweaks in post, partially because 8-bit JPEGs don't allow for all that much tweaking. The A7 acts more as the image capturing component in a picture creation workflow. Often the RAWs look severely underexposed and require a whole lot of pushing and toning before the final image starts taking shape. The colors, contrast, noise reduction and sharpening all go through a similar process to arrive at something more like what I want to see. I quite like this new way of working; it requires more of me and further disassociates the images from reality.

The camera has 16:9 cropped capture mode, which is nice for composing video frames. The 16:9 aspect also lends the photos a more movie-like look. The resulting RAW files retain the cropped part of the 3:2 frame, so you can de-crop the 16:9 photos in Lightroom. Note that this doesn't apply to the APS-crop frames, the APS-size photos don't retain the cropped image data.

I found that I can shoot usably clear (read: noise doesn't completely destroy skin tones) photos at ISO 4000, which is really nice. ISO 8000 and up start losing color range, getting magenta cast in shadows and light leaks when shooting in dark conditions. Still usable for non-color-critical work and very usable in B/W.

Overall, I don't really know why I got this camera and the lenses. I shoot too few images and video to justify the expense. The quality is amazing, yes. And it's not as expensive as a Canon 5D Mk III or a Nikon D800 either. But it's still expensive, especially with the Zeiss FE lenses. I feel that it's a luxury for my use.

The Sony A7 is too large and heavy for a walk-around camera strapped across your back and too non-tool-like for a total pro camera. I was trying to replace my old Canon 5D Mk II with its 50 f/1.4 and 17-35L f/2.8 with something lighter. But this is not it. The A7 has great image quality but the workflow is slower. And while it's light enough to throw in the backpack and go, it's still large and heavy enough to make me not want to dig it out. For me it sits in the uncomfortable middle area. Not quite travel, not quite pro. For occasional shooting it's great! Especially if you get a small camera bag.

About Me


Built art installations, web sites, graphics libraries, web browsers, mobile apps, desktop apps, media player themes, many nutty prototypes, much bad code, much bad art.

Have freelanced for Verizon, Google, Mozilla, Warner Bros, Sony Pictures, Yahoo!, Microsoft, Valve Software, TDK Electronics.

Ex-Chrome Developer Relations.