
Showing posts from March, 2021

What we'll (probably) see before we build artificial cultures

Introduction

A common research goal in AI is to build a virtual society that is as complex, rich, and interesting as human society. Human culture seems to have this really cool open-ended nature that we haven't been able to replicate in silicon yet, and we'd like to do that. Right now, the research has mostly been around four things:
- Grounded language
- Variants of cooperative and competitive games
- Making agents learn in more complex environments
- Teaching agents to learn learning algorithms (meta-learning)

However, I wanted to get a better picture of what it'll look like when we start getting closer to human culture. Of course, we can't know the answer, and AI tends to break our expectations. Still, I'd like to form a better hypothesis* to try and guide my future research in this direction. The most natural thing to do is to look at what happened for humans. This is where theories of the evolutionary history of culture come in. There's a pretty big debate...

Continuing building the research mountain

(Not my biweekly post, just some thoughts I've had about research.) Previous posts: Beware the important problem: aka Scope Creep in Research, Letting the problem shape the direction you go, and the first part of Research Processes and Inductive Linguistic Biases.

My outside view of research was that it was like "slowly chiseling away at problems, breaking off pieces of them until eventually we break down the whole thing". But these days, I feel like a much better analogy is "building mountains". We have some really high-up cloud we are trying to reach, in a pretend world where clouds are fixed and don't move. We can't jump up there immediately (there's no viable approach), so we need to start with some simple subproblem. It makes sense to pick the smallest subproblem you can find, a minimum viable product, some tiny speck of cloud slightly off the ground. You build a small mound around that research direction, and now you can reach a little higher. You l...

Lenses on Analysis of Opinion Spread

(This is gonna be a post I keep updating as I learn new lenses, so it's a working document that'll never be complete. The goal of this research direction, for me, is ultimately to figure out what we need for AIs to have a society of comparable complexity to human society.)

There are a few different ways of "looking at" opinion spread. Some of these are subsets of the others, and others can be usefully combined, but each perspective still provides a set of unique insights that the others don't. I'll mostly constrain myself to talking about social media, but when the insights carry over into other places I'll try to mention that as well. Note that when I talk about "opinions", what exactly I mean by "opinion" is fairly loose; much of this post describes the attempts I'm familiar with so far to formalize that notion.

Distributional Lens

The simplest way to do this is to train a language model on the set of...
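Here's a minimal sketch of what that distributional lens might look like in practice, assuming (since the excerpt is truncated) that the idea is to fit a language model to a corpus of posts and treat the fitted distribution over text as a proxy for the distribution of opinions in the population. The toy corpus and the smoothed bigram model below are illustrative stand-ins for a real social-media dataset and a real (e.g. neural) language model:

```python
# A toy version of the distributional lens (an assumption, not the
# post's actual method): fit a language model to a corpus of posts,
# then compare how much probability mass the fitted distribution
# assigns to candidate opinion statements.
from collections import Counter, defaultdict
import math

# Illustrative stand-in for a real social-media corpus.
posts = [
    "i think remote work is great",
    "remote work is great for deep focus",
    "i think remote work is overrated",
    "remote work is great",
]

# Fit a bigram model: counts of each token following each previous token.
bigrams = defaultdict(Counter)
for post in posts:
    tokens = ["<s>"] + post.split() + ["</s>"]
    for prev, cur in zip(tokens, tokens[1:]):
        bigrams[prev][cur] += 1

vocab = {tok for counter in bigrams.values() for tok in counter} | set(bigrams)

def log_prob(sentence: str, alpha: float = 1.0) -> float:
    """Add-alpha smoothed log-probability of a sentence under the bigram model."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    total = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        counts = bigrams[prev]
        total += math.log(
            (counts[cur] + alpha) / (sum(counts.values()) + alpha * len(vocab))
        )
    return total

# Opinions better supported by the corpus get higher log-probability.
for opinion in ["remote work is great", "remote work is overrated"]:
    print(f"{opinion!r}: {log_prob(opinion):.2f}")
```

Under this lens, the thing to track would be how the probabilities of candidate statements shift as the corpus changes over time; with a real language model you'd fine-tune on time-sliced corpora rather than counting bigrams, but the comparison step stays the same.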