NIPS 2017 Itinerary
- Author: Martin Andrews (@mdda123)
Monday 2017-12-04
Uber at 6:35 (shared); arrive ~6:55 for registration
8:00 start of "Deep Learning: Practice and Trends", Nando de Freitas · Scott Reed · Oriol Vinyals in Hall A
- Presentation slides...
10:45 start of "Statistical Relational Artificial Intelligence: Logic, Probability and Computation", Luc De Raedt · David Poole · Kristian Kersting · Sriraam Natarajan in Hall C
14:30 start of "Engineering and Reverse-Engineering Intelligence Using Probabilistic Programs, Program Induction, and Deep Learning", Josh Tenenbaum · Vikash K Mansinghka in Hall C
Tuesday 2017-12-05
10:40 Track on Algorithms
12:00 Singaporeans@NIPS for hotdog lunch
1:50 Invited Talk: Kate Crawford: The Trouble with Bias
2:50 Track on Algorithms, Optimization and Theory
- "Bayesian Optimization with Gradients" - Jian Wu, Matthias Poloczek, Andrew Gordon Wilson, Peter I. Frazier
4:20 Track on Deep Learning Applications
6:00 Light snack (pulled pork burgers) for poster viewing and Demonstrations
- "Modulating early visual processing by language" - Harm de Vries, Florian Strub, Jeremie Mary, Hugo Larochelle, Olivier Pietquin, Aaron C Courville
- "One-Sided Unsupervised Domain Mapping" - Sagie Benaim, Lior Wolf
- "Deep Voice 2: Multi-Speaker Neural Text-to-Speech" - Andrew Gibiansky, Sercan Arik, Gregory Diamos, John Miller, Kainan Peng, Wei Ping, Jonathan Raiman, Yanqi Zhou
- "Dynamic Routing Between Capsules" - Sara Sabour, Nicholas Frosst, Geoffrey E Hinton
9:00 Intel Party with Flo Rida (line too long, room at capacity)
- "Learned in Translation: Contextualized Word Vectors" - Bryan McCann, James Bradbury, Caiming Xiong, Richard Socher
- Many more...
Wednesday 2017-12-06
9:00 Invited Talk: Lise Getoor: The Unreasonable Effectiveness of Structure
10:20 Track on Deep Learning (mostly GANs)
- "Unsupervised Image-to-Image Translation Networks" - Ming-Yu Liu, Thomas Breuel, Jan Kautz
- "End-to-end Differentiable Proving" - Tim Rocktäschel, Sebastian Riedel
1:50 Invited Talk: Pieter Abbeel: Deep Learning for Robotics
2:50 Track on Reinforcement Learning, Deep Learning
- "Imagination-Augmented Agents for Deep Reinforcement Learning" - Sébastien Racanière, Theophane Weber, David Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
- "A simple neural network module for relational reasoning" - Adam Santoro, David Raposo, David Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, Tim Lillicrap
- "Attention is All you Need" - Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin
4:20 Track on Reinforcement Learning, Algorithms, Applications
7:00 Poster session and Demonstrations
8:00pm Salesforce Party : Cafe Sevilla at 140 Pine Ave in Long Beach
Thursday 2017-12-07
9:00 Invited Talk: Yael Niv: Learning State Representations
9:50 Invited Talk: Yee Whye Teh: On Bayesian Deep Learning and Deep Bayesian Learning
11:10 Track on Deep Learning, Algorithms
2:00 Symposia https://sites.google.com/view/deeprl-symposium-nips2017/
- David Silver : AlphaZero
4:30 Symposia
7:30 Symposia
Friday 2017-12-08 (Hectic Workshop timeslicing)
Visually-Grounded Interaction and Language (ViGIL)
102B :- 08:45 AM : ViGIL : Visually Grounded Language: Past, Present, and Future… Raymond J. Mooney
- 09:30 AM : ViGIL : Connecting high-level semantics with low-level vision. Sanja Fidler
- http://www.cs.utoronto.ca/~fidler/publications.html
- Understanding videos through MovieGraphs
- 10:15 AM : ViGIL : Coffee Break & Poster Session + Meet GDEs
- Poster : ME!
- 10:40 AM : ViGIL : The interface between vision and language in the human brain Jack Gallant
Conversational AI: "Today's Practice and Tomorrow's Potential"
202 :- 11:40 - 12:00 : ConversationalAI: Multi-Domain Adversarial Learning for Slot Filling in Spoken Language Understanding
6th Workshop on Automated Knowledge Base Construction (AKBC)
102C :- 12:00 - 12:30 : Sebastian Riedel Reading and Reasoning with Neural Program Interpreters
Visually-Grounded Interaction and Language (ViGIL)
102B :- Bengio : interested in building a Tamagotchi-like interactive baby language-learner system
- 14:00 - 14:30 : Dialogue systems and RL: interconnecting language, vision and rewards. Olivier Pietquin
6th Workshop on Automated Knowledge Base Construction (AKBC)
102C :- 14:30 - 14:45 : Contributed Talk: Go for a Walk and Arrive at the Answer: Reasoning Over Knowledge Bases with Reinforcement Learning
Visually-Grounded Interaction and Language (ViGIL)
102B :- 15:15 : Coffee Break & Poster Session
- 15:40 : Grounded Language Learning in a Simulated 3D World. Felix Hill (DeepMind)
- 15:45 : Talk to Yoshika & Mu-chan (sanity-check at 14:45)
- 16:25 : (jet-lag induced snooze through) How do infants learn to speak by interacting with the visual world? Chen Yu
- 17:10 : Panel, including McClelland (PDP book author)
Saturday 2017-12-09 (Hectic Workshop timeslicing)
Deep Learning: Bridging Theory and Practice
Hall A :- 08:45 - 09:15 : Yoshua Bengio: Generalization, Memorization and SGD
- Cyclic learning rates++
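The cyclic learning-rate idea can be sketched as a simple triangular schedule (a minimal illustration in the spirit of Smith's CLR paper; the function name and parameter values here are my own, not from the talk):

```python
import math

def triangular_clr(step, base_lr=1e-4, max_lr=1e-2, half_cycle=500):
    # Triangular cyclic LR: rises linearly from base_lr to max_lr over
    # one half-cycle, then falls back down, and repeats.
    cycle = math.floor(1 + step / (2 * half_cycle))
    x = abs(step / half_cycle - 2 * cycle + 1)  # position within cycle, in [0, 1]
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)
```

At step 0 the LR is at `base_lr`, peaks at `max_lr` mid-cycle (step 500 here), and is back at `base_lr` after a full cycle.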
09:15 - 09:45 : Ian Goodfellow: Bridging Theory and Practice of GANs
- Make problems convex by making them more complex?
- Damping idea Naranajam(?) 2017 (seen as a spotlight)
- Heusel et al., 2017 - two learning rates for G+D
- Decay learning rate for G (says Goodfellow)
NIPS HIGHLIGHTS, LEARN HOW TO CODE A PAPER WITH STATE OF THE ART FRAMEWORKS
202 :- 09:45 - 10:05 : Tips and tricks of coding papers on PyTorch - Soumith Chintala, Facebook
- Start with experiments for basic outline
- Choose the simplest (non-MNIST) dataset to start with
- Weight initialisation details important
- Hyperparameters are often hidden
- Profile code (eg: line_profiler)
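The profiling tip is easy to try with the stdlib `cProfile` (the talk's `line_profiler` gives finer per-line timings); a minimal sketch with a hypothetical `slow_sum`:

```python
import cProfile
import pstats

def slow_sum(n):
    # deliberately naive loop, to give the profiler something to measure
    total = 0
    for i in range(n):
        total += i * i
    return total

pr = cProfile.Profile()
result = pr.runcall(slow_sum, 100_000)
pstats.Stats(pr).sort_stats("cumulative").print_stats(3)  # 3 most expensive calls
```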
10:05 - 10:25 : Differentiable Learning of Logical Rules for Knowledge Base Reasoning - Fan Yang, CMU
- TensorFlow (sparse matrices too)
- Code is online : github.com/fanyangxyz/Neural-LP
- batch_size is a problem, since the whole KB also needs to be in memory
10:15 - 11:00 : Morning Posters
10:45 - 11:15 : Coding Reinforcement Learning Papers by Shangtong Zhang, University of Alberta
- Better to implement from scratch : Improve own understanding
- Start without all the tricks (bells & whistles)
- Patience :
- A3C + Atari = 1hr
- DQN + Atari = 1 day
- Async Q/SARSA + Atari = n days
- No random seed...
- Has now moved off TF to PyTorch, but still likes TensorBoard
- np.argmax(values + rand()) to avoid consistent choice of argmax index
- For linear value-function approximation : suggests tile coding (eg: Sutton's tiles3)
- Share CNN lower layers for separate Actor-Critic output final layers
- Roboschool is a free alternative to MuJoCo
- Actor/Critic - even simple networks work very well
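The argmax tie-breaking trick above can also be written explicitly - sample uniformly among all maximal indices instead of adding noise (my own sketch, not code from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_tiebreak_argmax(values):
    # np.argmax always returns the FIRST maximal index, which biases
    # greedy action selection; instead pick uniformly among all ties.
    values = np.asarray(values)
    best = np.flatnonzero(values == values.max())
    return int(rng.choice(best))
```

With Q-values `[1.0, 2.0, 2.0]` this returns index 1 or 2 at random, never always 1.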
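The tile-coding suggestion, in a toy 1-D form (illustrative only; Sutton's tiles3 handles n-dimensional inputs with hashing):

```python
def tile_indices(x, n_tilings=4, n_tiles=8, lo=0.0, hi=1.0):
    # Each tiling is a uniform grid over [lo, hi], shifted by a fraction
    # of a tile; the active tile from each tiling gives a sparse binary feature.
    width = (hi - lo) / n_tiles
    active = []
    for t in range(n_tilings):
        offset = t * width / n_tilings          # each tiling shifted slightly
        i = int((x - lo + offset) / width)      # which tile x falls into
        i = min(i, n_tiles)                     # offset can push x one tile past the end
        active.append(t * (n_tiles + 1) + i)    # unique index per (tiling, tile)
    return active
```

A linear value function is then just the sum of one learned weight per active tile.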
11:15 - 11:35 : A Linear-Time Kernel Goodness-of-Fit Test (NIPS 2017 Best Paper) - Wittawat Jitkrittum, Gatsby Unit (UCL)
- Simple procedure to find worst-fit location between given density and data
- Code available : github.com/wittawatj/kernel-gof
11:35 - 11:55 : Imagination-Augmented Agents for Deep Reinforcement Learning - Sébastien Racaniere, DeepMind
- Sonnet code - completely compatible with TensorFlow
- Extensive explanation of all the different components of the imagination structure
11:55 - 12:15 : Inductive Representation Learning on Large Graphs - Will Hamilton, Stanford
- github.com/williamleif/GraphSAGE
- Able to generate embeddings/feature representations as graph changes dynamically
- Aggregate operator : Permutation invariant NN (could be DeepSets)
- Large wins over pure features, or Node2Vec
- Collaboration with Pinterest - better accuracy, etc on a graph with 3bn nodes and 12bn edges
- Problem : batching on GPU...
- Sample fixed-size neighbourhoods (say, size 3) randomly with replacement
- Good scaling performance (no need for many samples; gains max out quickly)
- Depth of 2 or 3 hops works fine (no need for larger depth)
- Represent node-node lists densely, choosing at random, rather than sparsely
- Use (approximately) tf.shuffle and tf.slice to sample nodes rather than numpy
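The fixed-size sampling-with-replacement idea can be sketched in plain Python (the toy graph and function name are mine, not from the GraphSAGE code):

```python
import random

random.seed(0)

def sample_neighbourhood(adj, node, size=3):
    # Fixed-size sample WITH replacement: every node yields exactly
    # `size` neighbours, so minibatches have a fixed dense shape
    # and batch cleanly on a GPU.
    return [random.choice(adj[node]) for _ in range(size)]

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # toy triangle graph
sample = sample_neighbourhood(adj, 0, size=3)
```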
12:30 - 14:00 : Lunch (food trucks)
Workshop on Meta-Learning (MetaLearn 2017)
Hyatt Hotel - Beacon Ballroom D+E+F+H :- 13:30 - 14:00 : Learn to learn high-dimensional models from few examples (Omniglot) - Josh Tenenbaum
Cognitively Informed Artificial Intelligence
104A :- 14:00 - 14:25 : From deep learning of disentangled representations to higher-level cognition Yoshua Bengio (U. Montreal)
- Consciousness prior (more slides exist...)
Learning Disentangled Representations: from Perception to Control
203 :- 14:30 - 15:00 : Exploring the different paths to achieving disentangled representations - Pushmeet Kohli
15:00-15:30 Afternoon Posters
- 15:30 - 16:00 : Priors to help automatically discover and disentangle explanatory factors - Yoshua Bengio
- 16:00 - 16:30 : Generalized Separation of Style and Content on Manifolds: The role of Homeomorphism - Ahmed Elgammal
LEARNING WITH LIMITED LABELED DATA: WEAK SUPERVISION AND BEYOND
Grand Ballroom B :- 16:15 - 16:45 : Invited Talk: Sameer Singh
- Formulation of relationship rules easier for people to annotate than individual relationship labels
16:45 - 17:15 : Invited Talk: Overcoming Limited Data with GANs - Ian Goodfellow
- Add labels : Discriminator has n+1 classes (all classes, plus Fake)
- Simulate training data (make 3d renders more 'realistic', eg: eye gaze direction)
- Domain adaptation (professor forcing for text)
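The "n+1 classes" setup can be sketched as a softmax head whose last class means "fake" (a minimal numpy illustration; the real discriminator is of course a trained network):

```python
import numpy as np

def semisup_d_probs(logits):
    # Discriminator head with n real classes plus one extra 'fake' class.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    p = e / e.sum(axis=-1, keepdims=True)
    p_classes, p_fake = p[..., :-1], p[..., -1]
    # P(real) = 1 - p_fake = total probability mass on the n real classes
    return p_classes, p_fake

logits = np.array([2.0, 1.0, 0.1, -1.0])  # 3 real-class logits + 1 fake logit
p_classes, p_fake = semisup_d_probs(logits)
```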
17:15 - 17:30 : Local Affine Approximators of Deep Neural Nets for Improving Knowledge Transfer - Tatjana Chavdarova (on behalf of Suraj Srinivas)
- eg: "Parallel WaveNet" Oord et al 2017 - student network now deployed to phones...
- Previous work (eg: Hinton's distillation): the student matches the output values of the teacher
- In this work, the student also matches the gradients of the teacher - Sobolev training for neural networks (NIPS 2017)
17:30 - 17:45 : Co-trained Ensemble Models for Weakly Supervised Cyberbullying Detection - Elaheh Raisi
- Possibly relevant to media monitoring projects
17:45 - 18:15 : Invited Talk: What's so Hard About Natural Language Understanding? - Alan Ritter
- See /r/SiriFail
- Twitter : 0.5bn conversations every month
- Adversarial Learning for Neural Dialogue (also uses REINFORCE) - better human approval rate
- Distant supervision (eg: Resolve time expressions)