Computer Science/Discrete Mathematics Seminar II

A practical guide to deep learning

Neural networks have been around for many decades. An important question is what has led to their recent surge in performance and popularity. I will start with an introduction to deep neural networks, covering the terminology and the standard approaches to constructing them. I will focus on the two primary and very successful forms of networks: deep convolutional nets, originally developed for vision problems, and recurrent networks, for speech and language tasks. In each case, we will discuss some of the main ideas believed to underlie the current successes, along with their history. I will then describe how these two approaches come together in combined vision/text applications, such as image captioning.

Date & Time

November 21, 2017 | 10:30 am – 12:30 pm

Location

S-101

Affiliation

University of Toronto; Visitor, School of Mathematics