Inductive Linguistic Biases

(Unfinished blog post, posting for reference.)

Introduction

Humans have "inductive biases". When humans see data from which they could draw multiple conclusions, they tend to settle on one of them. Certain kinds of conclusions are more likely to occur to us than others, and those tendencies are our inductive biases. For example, humans tend to assume the sequence (1, 2, 3) will be followed by (4, 5, 6, 7, ...), even though many other continuations are consistent with the data.

These inductive biases are very important for understanding how humans can learn from so little data: many mechanisms in our society involve learning to coordinate, rather than learning the "correct" behaviour. These norms are often based on the previous generation's "best guesses", and since you share the same inductive biases, you are likely to make those same guesses, and so can learn to coordinate more easily. This means that if you can create AI models that have ve
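
As an aside not in the original post, the (1, 2, 3) example can be made concrete with a minimal sketch: both rules below reproduce the observed terms exactly, and a hand-coded simplicity bias (prefer the lowest-degree polynomial, which is my assumption for what "simpler" means here) selects the continuation (4, 5, 6, ...) that humans find natural.

```python
# A minimal sketch (an illustration, not from the original post) of the
# (1, 2, 3) example: several rules reproduce the observed terms exactly, and a
# hand-coded simplicity bias -- prefer the lowest-degree polynomial -- picks
# the "obvious" continuation 4, 5, 6, ...
import numpy as np

observed = [(1, 1), (2, 2), (3, 3)]  # (position n, observed term)

# Two hypotheses, both consistent with the observations.
hypotheses = {
    "linear f(n) = n":                    np.poly1d([1, 0]),
    "cubic  f(n) = n^3 - 6n^2 + 12n - 6": np.poly1d([1, -6, 12, -6]),
}

def consistent(h):
    # True if the hypothesis reproduces every observed term.
    return all(np.isclose(h(n), y) for n, y in observed)

for name, h in hypotheses.items():
    nxt = [int(round(h(n))) for n in (4, 5, 6)]
    print(f"{name}: consistent={consistent(h)}, next terms={nxt}")

# The inductive bias: among consistent hypotheses, prefer the simplest
# (here, simplicity = polynomial degree).
chosen = min((h for h in hypotheses.values() if consistent(h)), key=lambda h: h.order)
print("simplicity-biased prediction:", [int(round(chosen(n))) for n in (4, 5, 6)])
```

Swapping in a different bias (say, preferring the hypothesis with the largest leading coefficient) would make the same learner predict 10, 29, 66 instead, which is the sense in which the data alone does not determine the conclusion.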