art with code

2023-03-30

The AI undergrad philosophy whoaa man post

AI AI AI.

It's an AI world.

I do some AI hobby work: I've run inference in production with existing vision models, contributed hi-res Stable Diffusion image generation code to Diffusers, experimented with fast loading of LLMs, trained some SD Dreambooth models, and written code to run a multi-node, multi-GPU SD cluster. None of that is paying work though, and I have no team, so this is all tinkering with toys. Still, the opinions below are not completely uninformed.

If the figments below are too short, feed an interesting one to an LLM and ask it to expand the train of thought. You can also use an LLM to find arguments and counter-arguments to my simplistic stereotypes, and to add nuance and different perspectives.

These models are just "a lot of simple math" that "regurgitates what was fed to them", but so's your brain. A bunch of air and water reconfigured into a variety of complex molecules that are stuck together in a way that makes it wiggle. Your brain is capable of economically productive activity, and so are these models. They're trained and tuned to produce activity that can have economic value. That's what really matters at the end of the day.

AI systems are powerful. Powerful systems can do great things, both good and bad. Playing with the current crop of AI models made me think of cars. They are capable, but you need to be careful that you don't crash. Our brains are resilient, but they're running on a fixed architecture and patching takes thousands of years.   

Culture. Books and recordings are passive cultural substrates. We humans are active cultural substrates. We change the culture and the passive substrates according to our sensory data and the culture evolves to be better adapted to our current situation. This cultural evolution is generally much faster than genetic evolution. The current surviving state of the culture is what managed to stay relevant and valuable up to this point.

We as humans are not very much without the culture and the cultural structures that we live in. Your place in life is largely determined by the culture you live in. We're cultural substrate, useful to the culture because we can record it and change it in ways that are more adaptive than random change.

Can an AI model be an active cultural substrate? Can it adapt passive substrates to better match sensory inputs? Yes, I believe so. That's basically what a computer program is. Take input, produce output, only programs that work survive, others are debugged. Can an AI model be faster than humans at adapting culture to the current state of the world? Yes. Computer systems can react to things at microsecond timescales. Can an AI model do higher-quality adaptations than humans? Yes. Think of physics simulations, weather models, etc. Can AI models do these across all culturally-relevant human activities? Used to be no, now it's starting to turn into yes.

Geopolitics of AI. China has 4x the US population, and tech advancements increase GDP per capita, so China is set to surpass the US in total GDP because of the population gap. The US needs to make population size irrelevant to total GDP, or even a drag on it. China would develop efficient AIs that run on older hardware and focus on human-AI combinations to keep population a determining factor. The US would develop AIs that require the latest hardware and focus on AI that doesn't benefit from having a larger number of humans at its disposal.

Why AI development is unlikely to stop. If China stops, their large population becomes irrelevant. If the US stops, they're overtaken by the larger Chinese population and become irrelevant. If anyone else stops, they'll be eaten by the giants.

From the US perspective, the priority is slowing down Chinese AI development and speeding up the singularity timeline so that it takes place while the chip production sanctions remain effective. From the Chinese perspective, the key is achieving singularity on existing hardware and using it to improve domestic chip production to make the sanctions ineffective.

Superhuman capability: narrowly superhuman, superhuman polymath, superhumanity; volumetric superhumanity vs. quality-based superhumanity. GPT-4 is at the superhuman polymath level. It can write song lyrics in the style of a concept album released by an obscure band half a century ago, incorporating the themes of a random book. All in thirty seconds. Sure, you can complain that the verse or the chorus is banal, but come on. There's no human who can do that. Not just the typing speed, but having read and memorized the book, knowing the artist, knowing what a concept album is, how it was styled, how you could parody it, etc. It will likely get to superhuman output quality as well. Judging from the progress in image generation systems, this would be around July 2023.

Image generation systems are at volumetric superhumanity. I can use a bunch of cheap GPUs to generate half a million high-quality images in a day. If a working artist produces one high-quality image per week, and the whole of humanity were working as artists, our output would be roughly eight billion high-quality images per week. I'd only need about 2000 servers to match humanity in terms of output. And if you tweak the system and the prompts to generate once-a-decade masterpieces, you'd only need 4 servers. And these systems can generate images that are impossible for humans to make, so in a sense they already are at a narrow superhuman quality level.
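A back-of-the-envelope check of those numbers, assuming roughly eight billion people, one image per artist per week, and half a million images per server per day:

```python
# Rough server count needed to match humanity's image output.
# Assumptions: ~8 billion people, 1 high-quality image per artist per week,
# ~500,000 images per server per day (the figures above).
people = 8_000_000_000
images_per_server_per_week = 500_000 * 7            # 3.5 million per server

humanity_per_week = people * 1                      # ~8 billion images per week
print(round(humanity_per_week / images_per_server_per_week))      # ~2300 servers

# Once-a-decade masterpieces: one per artist per ten years.
masterpieces_per_week = people / (10 * 52)          # ~15 million per week
print(round(masterpieces_per_week / images_per_server_per_week))  # ~4 servers
```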

Horses and oxen. How many horses do you see in cities? Cities were full of horse-related infrastructure just a hundred years ago. Now it's all gone. Is economic activity measured by the number of oxen employed? Is economic activity going to be measured by the number of educated humans? If not, would the evolutionary pressure on culture lead to a situation where the surviving cultures have a minimal number of economically active humans, with everything else left to non-human systems? Think of the way transportation works nowadays: a few humans commanding machines that move incredibly heavy loads of cargo. The cognitive-work equivalent would be a few humans commanding machines that do entire countries' worth of paperwork / creative work / programming / management / leadership / communications. And if the AI is better at commanding machines, a human in the loop would make the machine perform worse, and it would end up with fewer resources than a fully non-human machine.

AI isn't going to take your job. The person using AI is going to take your job. That person is the person who would've hired you. And that person will in turn be replaced with AI by whoever is paying them. The end state is a person who owns a fully AI organization. And since most ownership sits in funds that employ human fund managers, those managers will be replaced by AI too. AIs owning AI-run companies.

Hollow companies. The internal black box of companies is easier to replace with AI systems than the parts that rely on human contact. Replace the internals, keep a dwindling shell of humans to run the parts that require human interaction.

Supply-side AI is an easy thing to imagine: fulfill demand cheaper, faster, and better. How about demand-side AI? AI buyer systems and trading systems, yes, but how about AI consumers? Cheaper, faster, and better consumers for your company's products. All companies struggle with their customers: it's a pain to acquire customers and to retain them. What if you could create customers with AI? Very low acquisition cost, high loyalty, willing to pay the profitable price point, giving you the best kind of feedback, working together with you to achieve success alongside your company. What if every service could have a billion DAUs? How does the money used by AI customers connect to the economy? What if every company could have millions of employees, billions of customers, and a trillion-dollar market valuation?

Prediction systems. The brain is a bunch of cells that got together to better run the colony of cells that is you. You are a bunch of cells in your brain that got together to better run the brain. The rest of the brain feeds your cells pre-processed sensory stimuli, neurochemical context and a bunch of recorded firing patterns that you can apply to the current situation. You send suggestions back to the rest of the brain and the other parts of the brain turn those suggestions into neural firing patterns that make the rest of your body do things. Basically the rest of the brain tells you what's going on and asks "what should we do next?"

In groups we take this a step further, with a bunch of minds getting together into a multi-mind prediction system that takes in the inputs from the rest of the group, comes up with "what should we do next", and drives the actions to make that happen. Then we take these group minds one step further and create society-level minds to drive society-level actions.

This system doesn't work very well, since the people who make up the group minds have limited bandwidth for communication and strong incentives to guide the actions to benefit the members of the mind at the expense of the rest of the group. Many such minds develop all kinds of pain-signal blockers to prevent the complaints from the rest of the group from reaching them, often with self-destructive results for the group as a whole. Control over media is the opium of the government.

AI systems layer on top of this as another kind of a culture-level mind. Prediction systems that tap into the entire culture. They're an evolution of search. Instead of returning recorded memories, they return processed memories that are more relevant and directly applicable. They can also create new cultural artifacts, allowing your mind to skip the production process.

What of humans then? Are we the cleaners, the maintenance people, the astrocytes, doing flexible manual labor with our quadrillion-actuator self-repairing nanotech bodies? Or is there a better way to achieve those tasks, one that doesn't involve having the proverbial stables for the horses? Who will survive?

We are made out of programmable matter. We can program ourselves to become something else. We can program ourselves to become something that would survive. AI systems run on atoms and electrons, just like us. The thing separating us is the way our matter is programmed. Change the programming, keep up with the Joneses.

Is AI going to destroy humanity? Not if it is well aligned. But it is going to make humans irrelevant for cultural activity, which will remove the cultural reasons to supply humans with their necessities.

What are the holes in these arguments? Anthropomorphizing organizations and countries, overly broad strokes, a simplified view of the actors, assigning agendas to explain the surviving state of things, an overly rosy view of the extrapolated progress in AI systems, an overly optimistic view of the human capacity to adapt, an overly optimistic view of human capabilities and the desirability of life, ranking the power of AI + supporting society above AI + raw materials, underinformed handwaving stereotypes of the motives and motivations of vast groups of people, immortality bias from having survived thus far, blinding fear, mispredicting the offense-defense asymmetry, too low a probability of errors, too low an impact of errors. Thinking too big. Too high impact estimates in the short term, too low in the long term.

If global warming mitigation has shown us anything, it's that culture doesn't place much value on people. Or, put another way, the cultures that maxed out energy production ended up thriving and displaced the energy-saving cultures. Reducing the impact on humans was not a driver for the actions taken.

With AI, the cultures that max out AI-driven production end up thriving and displace the low-AI cultures.

Consciousness exists. There's a structure in your brain and a firing pattern of neurons that corresponds to consciousness. If that firing pattern is not active or if that structure is missing, you do not have consciousness. It's replicable.

How do you tell if something is conscious? It behaves in a way that you categorize as conscious. Through your interaction, you don't come across sensory inputs that would make you believe that the thing is not conscious. Your consciousness-detection system measures the existence of the consciousness structure. It doesn't have to be a fully-fledged structure. If a conversation has aspects of consciousness, there's some aspect of the conscious structure encoded in the thing that you're talking with.

Maybe it's the writer who has written a magic-like path of conversation that leads you to say the things that have a conscious-sounding response, and makes you end the conversation before you reach the limits of the magic. But if you veer off the magical path, you detect that the thing is not conscious. But it has a slice, a tiny slice, of the conscious structure. As you add more paths the conversation can take, the conscious structure becomes more complete. Eventually you have to start compressing, to find a smaller encoding for the conversation, as the size of the conversation tree grows exponentially. To generate paths instead of looking up pre-written paths.
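A tiny illustration of why the pre-written-path approach stops scaling: with even a modest number of plausible replies per turn (ten here, an arbitrary assumption), the number of distinct conversation paths explodes, so at some point you have to switch from storing paths to generating them.

```python
# How fast a lookup table of pre-written conversations blows up.
# Assumption: 10 plausible replies at every turn (arbitrary branching factor).
branching = 10
for turns in (5, 10, 20, 40):
    paths = branching ** turns
    print(f"{turns:>2} turns -> {paths:.0e} distinct paths")

# 5 turns is already 1e+05 paths, 40 turns is 1e+40: far too many to store.
# A generative model is the compressed alternative: a fixed set of weights
# that can produce any of these paths instead of looking each one up.
```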

What stands out as conscious? Topical responses to sensory input. Sense of self. Memory of past interactions. Memory of things independent of the current conversation. Seeking out new information. Integrating new information to memories. Predictable motives.

What kind of structure could encode consciousness?

Ten years of specific kinds of sensory inputs gets you from a kid to a high school graduate. Add another decade and you get a PhD. This learning is manifested as recorded firing patterns in your brain and perhaps some structural changes. Could you generate a condensed version of this sensory input, something that would fast-form memories? Get that decade down to a month? How would it be to spend those 20 years in education and emerge with two hundred PhDs' worth of knowledge?

For each school year, record all the classes, the materials, the homework assignments, and the other work done. Now you have a primary-school-to-PhD recording such that, when a person lived through it, they emerged with a PhD. Compress it to find the common thread, the abstract concept of learning, the memory generation mechanism; optimize that to make it faster and better.

There are many mental techniques to help your brain think. Memory palaces, mental arithmetic, flash learning, mathematics, logic, fast estimation techniques, probabilistic thinking, thinking from another's perspective, step-by-step thinking, devil's advocate, coming up with multiple variants, iterative honing, etc. These are also learned through sensory inputs, so you should generate a sensory input to learn these before you launch into learning the rest.

Offense-defense asymmetry. Dark forest. A civilization that can build a parabolic space mirror can focus a significant fraction of a star's output on detected exoplanets. All the victim would notice is a star becoming 100 million times brighter than their sun for a second as the beam hits. Three years of sunshine delivered in a second.

It would be extremely difficult to detect the mirror under construction (think tiny satellites with explosively unfurled reflective sails once they're in position). You could only notice the beam when it arrives, since it travels at the speed of light. Detecting other planets getting struck would also be slow, since it takes years for the light to travel from other systems, and minutes even inside the same system. And if it only takes a second to fry a planet and a few seconds to refocus the mirror, the attacker could fry a million planets in a year.
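Rough arithmetic behind the "three years of sunshine" and "a million planets in a year" figures, assuming a one-second flash at 100 million times the victim's normal sunlight and a few seconds of refocusing time per target:

```python
# Sanity check on the beam numbers.
SECONDS_PER_YEAR = 365.25 * 24 * 3600           # ~3.16e7 seconds

# A one-second flash 100 million times brighter than the local sun
# delivers the energy of that many seconds of ordinary sunshine.
flash_brightness = 100_000_000                   # relative to the victim's sun
print(flash_brightness / SECONDS_PER_YEAR)       # ~3.2 years of sunshine

# One second to fry a planet plus an assumed ~10 seconds to refocus
# gives on the order of a million targets per year for a single mirror.
seconds_per_target = 1 + 10
print(int(SECONDS_PER_YEAR / seconds_per_target))   # ~2.9 million targets/year
```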

Why would they? Because if there's a civilization with the same idea, the first one to strike survives. If there's only a one in a million chance of another civ being an attacker, that's still a one in a million chance of instant death.

If you don't want to attack because of ethical considerations, seeking benefits from information exchange, or from the fear of being flagged as hostile and attacked, you'd try to hide as well as possible and spread out widely with minimal chance of one colony being traced back to other colonies.
