Showing posts from December, 2020

Letting the problem shape the direction you go

In my November 6 blog post, I talked about my general approach for trying to solve problems and formalize fuzzy ideas. In summary, it has two pieces that you alternate between:

- Do lots of reading, thinking, and brainstorming about tools and perspectives you could use.
- Create formal, concrete proposals. Implement them and try them out. They'll probably have an issue somewhere, or be missing some important part of the idea you care about, but that's okay.

You repeat this process and continually get closer to your goal.

However, I've come to realize that this description is incomplete. It's a good approach when the tools already exist, and what you are trying to do is just a few steps away from the existing set of tools. The reading and thinking help you understand what tools are available and how they relate to your problem, and the formalizing is about trying out tools and seeing whether they actually do what you want. Yet sometimes the tools don't already exist. The pr

Research Direction

My research direction that I've chosen is what I've been discussing previously: I want to try and make synthetic language tasks that can help transfer performance on real world language tasks. In this post I'm going to give a motivation of this problem from a few different perspectives: Transfer Learning, and Understanding Inductive Biases. Transfer Learning When you want to teach a model a skill, one way to do this is by feeding the model labeled (input, output) pairs. If you don't give it enough data, it'll be confused about what you wanted and won't properly learn the task. Eventually you can give it enough data and it'll figure out what you are trying to get it to do, allowing it to generalize to data it hasn't seen. One way to think of this is that the model starts out with a hypothesis space of "here are all possible things they might be trying to teach me", and as you feed it data, it can eliminate hypotheses. Eventually it's thrown