Saturday, May 21, 2016
In the last week or so, I have encountered a lot of discussion about the failure of data journalists (mostly the good folks at fivethirtyeight.com) to predict Trump's winning the Republican nomination. In fact, that's understating it a little: they were quite confident that Trump would not be the nominee; famously, Nate Silver put his chances around 2 percent. In a recent podcast and 538 article, Nate Silver did some interesting post-mortem on the analysis. In part, he critiques his own methods, and in part he chastises himself for issuing a subjective prediction that did not come from a computational model. For this, he says that in this particular instance he acted like a pundit: he was too focused on his own priors and underestimated the uncertainty due to a small sample of "Trump-like" candidates. At the same time, he does defend his use of empirical approaches.
Sunday, May 15, 2016
Analyze That
One of the things I often enjoy doing with my friends is thinking through some political, policy, economic, or business problem. Sometimes this is an issue in the news; sometimes it's something that one of us recently read or heard about on a podcast. Other times, it's some random topic that we happened to stumble onto over the course of a conversation. Either way, we generally just have a good time breaking such a problem down. We often jokingly refer to this as "consulting the shit" out of a problem.
Tuesday, May 10, 2016
Neuroses
A few weeks ago I posted about the difference between machine learning and econometrics. Though I talked a lot about the applications of the two techniques, I tried to avoid getting very detailed about any of the algorithms involved. Recently, I've also been doing a deep dive on one of these machine-learning algorithms: Neural Networks.
About two years ago, I started hearing a lot about "deep learning" and a powerful algorithm called a neural network. These things seemed to be everywhere: from Siri to facial recognition. I must admit, somewhat embarrassingly, it took me quite some time to figure out what exactly a neural network was.
Everything that I read, when trying to understand the neural network, suffered from one of two problems. Some pieces just weren't that technical, and described the analogy of this algorithm to a human brain. They talked about things called "hidden layers" without telling me what was actually going on in them. The other type of post jumped into heavy math very quickly. It's not that I couldn't understand the math, but I wanted a high-level technical summary first, knowing I would work through the math later.
But two things became very clear. First, these are the "blackest box" of the machine learning algorithms we have. Everything about inference and decision-making from the last post goes out the window when using NNs. Second, they are really, really powerful. They are extraordinarily good at addressing some of the toughest data science problems. Given their recent success, they aren't going anywhere.
So it was time to learn!
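For what it's worth, the "hidden layer" that those explainers kept gesturing at is not mysterious at a mechanical level: each hidden unit just takes a weighted sum of its inputs and squashes it through a nonlinearity. Here is a minimal sketch of a forward pass through a tiny two-layer network (my own illustration, not code from any particular tutorial; the sizes and sigmoid activation are arbitrary choices):

```python
import numpy as np

def sigmoid(z):
    # Squashing nonlinearity: maps any real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

x = rng.normal(size=3)         # input: 3 features
W1 = rng.normal(size=(4, 3))   # hidden layer: 4 units, each with 3 weights
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))   # output layer: 1 unit over the 4 hidden values
b2 = np.zeros(1)

# Each hidden unit computes a weighted sum of the inputs, then squashes it.
hidden = sigmoid(W1 @ x + b1)
# The output unit does the same thing to the hidden activations.
output = sigmoid(W2 @ hidden + b2)

print(hidden.shape, output.shape)
```

Training is then just a matter of adjusting the weights `W1`, `b1`, `W2`, `b2` to make the outputs match observed targets, which is where the heavier math (backpropagation) comes in.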