$3,000 – $3,900

[5 Day Training Course] Neural Networks and Deep Learning: Munich

Event Information

Date and Time

Location

Venue is being confirmed. Stay tuned!

Munich

Germany

Refund Policy

Refunds up to 30 days before event

Event description

Why this training?

This five-day hands-on course is designed for data scientists seeking a better understanding of the main technology trends driving Deep Learning.

Attendees will gain a clear understanding of Deep Learning technology, work through practical scenarios to build, train, and apply fully connected Deep Neural Networks, and learn strategies for configuring the key parameters of a neural network's architecture.

Participants will gain experience building and applying Deep Neural Networks with the most popular frameworks, such as Keras, TensorFlow, Theano, and scikit-learn.

After the course, you will be fully capable of implementing deep learning in your own applications.




Who should attend?

The Neural Networks and Deep Learning training was developed for data scientists seeking a better understanding of:

  • Main technology trends driving Deep Learning

  • How to build, train and apply fully connected deep Neural Networks

  • Key parameters in a neural network architecture optimization



Course objectives

Provide the information and, through labs, the experience necessary for the students to:

  • Gain a deep understanding of Deep Learning concepts

  • Use the main types of Neural Networks

  • Build Deep Neural Network applications




Program

Day 1

Introduction to Deep Learning

  • Agenda for the training.

  • General introduction for neural networks and deep learning.

Neural Networks

  • Components of an artificial neural network.

  • Connections and weights. Propagation function.

  • Choosing a cost function. Learning paradigms.

  • Types of neural networks.

Activation Functions

  • Identity function, unit step (binary step) function, sigmoid function, hyperbolic function, inverse trigonometric function, softmax function, rectified linear unit (ReLU), exponential linear unit (ELU), maxout.
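
For reference, a few of the activations listed above can be sketched in a handful of NumPy lines (the function names here are our own shorthand, not part of any course framework):

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs into (0, 1); common for binary outputs.
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Rectified linear unit: zero for negative inputs, identity otherwise.
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    # Exponential linear unit: smooth negative saturation at -alpha.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def softmax(x):
    # Converts a vector of scores into probabilities summing to 1.
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([-1.0, 0.0, 2.0])
print(relu(scores))           # negative scores are clipped to 0
print(softmax(scores).sum())  # the probabilities sum to 1
```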

Learning of Neural Networks

  • Training, test, and validation sets.

  • Selection of a validation dataset: holdout method and cross-validation.
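
The holdout method mentioned above amounts to a single random split of the data; a minimal sketch in plain Python (the 80/20 ratio and the `holdout_split` helper are illustrative choices, not course material):

```python
import random

def holdout_split(samples, test_fraction=0.2, seed=42):
    """Shuffle the samples and split them into a training and a test set."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

data = list(range(100))
train, test = holdout_split(data)
print(len(train), len(test))  # 80 20
```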

  • Instance space decomposition.

Supervised Learning

  • Labeled training data.

  • Determination of the type of training examples.

  • Gathering a training set.

  • Determination of the input feature representation of the learned function.

  • Running the learning algorithm on the gathered training set.

  • Evaluation of the accuracy of the learned function.

Unsupervised Learning

  • Approaches to unsupervised learning.

  • Clustering, anomaly detection, Autoencoders, Generative Adversarial Networks, self-organizing map.
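
Clustering, the first item above, can be illustrated by the assignment step of k-means in one dimension (the data and the `assign_clusters` name are invented for illustration):

```python
def assign_clusters(points, centroids):
    """Assignment step of k-means: each point joins its nearest centroid."""
    return [min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            for p in points]

# Two obvious groups around 1 and 8; no labels are needed.
points = [1.0, 1.2, 8.0, 8.5]
print(assign_clusters(points, centroids=[1.0, 8.0]))  # [0, 0, 1, 1]
```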

Reinforcement Learning

  • Markov decision process.

  • Algorithms for control learning.

  • Optimality criteria.

  • Brute force approach.

  • Value function approaches.

  • Monte Carlo methods.

  • Temporal difference methods.

  • End-to-end reinforcement learning.

Day 2

Perceptron

  • Initialization of the weights and the threshold.

  • Training of the algorithm.

  • Calculation of the actual output.

  • Multiclass perceptron.
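
The loop sketched above (initialize, compute the output, correct the weights by the error) can be shown with a toy perceptron that learns the AND gate; `train_perceptron` and `step` are our own illustrative names, not course code:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights in proportion to the error."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # threshold, folded in as a bias term
    for _ in range(epochs):
        for x, target in samples:
            error = target - step(w, b, x)      # -1, 0, or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def step(w, b, x):
    # Unit step activation: fire iff the weighted sum exceeds the threshold.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn the AND function, which is linearly separable.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
print([step(w, b, x) for x, _ in and_gate])  # [0, 0, 0, 1]
```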

Multilayer Perceptron

  • Activation function.

  • Layers of nonlinearly-activating nodes.

  • Learning of the perceptron.

Recurrent Neural Networks

  • Directed graph along a sequence.

  • Infinite and finite impulses.

  • Long short-term memory networks.

  • Fully Recurrent Neural Network.

  • Independently Recurrent Neural Network.

  • Recursive Neural Network.

  • Hopfield network.

  • Boltzmann machine.

  • Restricted Boltzmann machine.

  • Recurrent Multilayer Perceptrons network.

  • Multiple Timescales Recurrent Neural Network.

Autoencoders

  • Dimensionality reduction.

  • Denoising Autoencoder.

  • Sparse Autoencoder.

  • Variational Autoencoder models.

  • Contractive Autoencoder.

  • Training algorithm for an Autoencoder.

Generative Adversarial Networks

  • Generative and discriminative networks.

  • Error rate of the discriminative network.

  • Fully convolutional feedforward Generative Adversarial Network.

Convolutional Neural Networks

  • Convolutional layers, pooling layers, fully connected layers, and normalization layers.

  • Number of filters.

  • Filter shape.

  • Hierarchical coordinate frames.
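
Filter shape interacts with stride and padding to determine each layer's spatial output size; a small helper sketching the standard formula (the function name is our own):

```python
def conv_output_size(input_size, filter_size, stride=1, padding=0):
    """Spatial size of a convolutional layer's output along one dimension."""
    return (input_size - filter_size + 2 * padding) // stride + 1

# A 28x28 image through a 3x3 filter, stride 1, no padding -> 26x26.
print(conv_output_size(28, 3))            # 26
# The 26x26 result through 2x2 pooling with stride 2 -> 13x13.
print(conv_output_size(26, 2, stride=2))  # 13
```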

Day 3

Deep Learning in Computer Vision

  • Machine vision.

  • Applications of computer vision techniques.

  • Image acquisition.

  • Pre-processing of images.

  • Feature extraction.

  • Detection and segmentation.

  • High-level processing.

  • Decision making.

Image Classification

  • Image processing.

  • Supervised and unsupervised classification.

Object Detection

  • Sliding windows.

  • Viola-Jones detector.

  • Region-based Convolutional Neural Networks.

  • Single shot detector models.

Object Tracking and Action Recognition

  • Computer vision architectures for video analysis.

  • Optical flow estimation.

  • Visual object tracking.

  • Action recognition.

Image Segmentation and Synthesis

  • Semantic image segmentation.

  • Image synthesis problems.

Day 4

Deep Learning in Natural Language Processing

  • Challenges in natural-language processing.

  • Statistical natural-language processing.

Text Classification

  • Linear classifiers and deep learning techniques.

  • News flow classification.

  • Sentiment analysis.

  • Spam filtering.
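
Before any of these classifiers can run, text must be turned into a fixed-length vector; a minimal bag-of-words sketch (the vocabulary and sample messages are invented for illustration):

```python
from collections import Counter

def bag_of_words(text, vocabulary):
    """Map a document to a fixed-length vector of word counts."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

vocabulary = ["free", "winner", "meeting", "report"]
spam = "FREE entry winner free prize"
ham = "quarterly report and meeting notes"
print(bag_of_words(spam, vocabulary))  # [2, 1, 0, 0]
print(bag_of_words(ham, vocabulary))   # [0, 0, 1, 1]
```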

Language Modeling and Sequence Tagging

  • Techniques for predicting the next word.

  • Methods for predicting a sequence of tags for a sequence of words.

Machine Translation

  • Word sense disambiguation.

  • Rule-based machine translation (MT).

  • Transfer-based MT.

  • Interlingual MT.

  • Dictionary-based MT.

  • Statistical MT.

  • Example-based MT.

  • Hybrid MT.

  • Neural MT.

Dialog Systems

  • Architectures for dialog systems.

  • Input recognizer.

  • Natural language understanding.

  • Dialog manager.

  • Task manager.

  • Output generator.

  • Output renderer.

  • Types of systems and current frameworks.

Day 5

Meta-optimization

  • Meta-learning approach to neural network optimization.

  • Optimization strategy.

  • Optimization method benchmarks.

  • Combination of optimization methods.

  • Evaluation of methods.

Neural Network Ensemble

  • Basic principles of Neural Network Ensemble.

  • Individual neural network generation methods.

  • Conclusion generation methods.
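
One common conclusion-generation method is a simple majority vote over the individual networks; a toy sketch with stand-in "models" (all names are illustrative, not course code):

```python
from collections import Counter

def ensemble_predict(models, x):
    """Combine the individual predictions by majority vote."""
    votes = [model(x) for model in models]
    return Counter(votes).most_common(1)[0][0]

# Three toy "models": each is just a threshold rule on a number.
models = [lambda x: int(x > 2), lambda x: int(x > 4), lambda x: int(x > 3)]
print(ensemble_predict(models, 5))  # all three vote 1 -> 1
print(ensemble_predict(models, 3))  # votes are 1, 0, 0 -> 0
```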



Prerequisites

Altoros recommends that all students have:

  • Programming: basic R and Python programming skills, with the ability to work effectively with data structures

  • Experience with the RStudio and Jupyter Notebook applications

  • Basic experience with git

  • A basic understanding of matrix and vector operations and notation

  • A basic knowledge of statistics

  • Basic command-line operations

A workstation with the following capabilities:

  • A web browser (Chrome/Firefox)

  • An Internet connection

  • A firewall allowing outgoing connections on TCP ports 80 and 443

The following developer utilities should be installed:

  • Anaconda

  • RStudio

  • Jupyter Notebook

Payment info:

  • If you would like an invoice for your company to pay for this training, please email training@altoros.com and provide the following information:

    • The name of your company/division to be invoiced;

    • The name of the person the invoice should be addressed to;

    • The mailing address;

    • The purchase order # to put on the invoice (if required by your company).

Tickets are limited, so hurry up and reserve your spot NOW!

! Please note that our classes are contingent upon having 7 attendees. If we don't sell enough tickets, we will cancel the training and refund your money one week prior to the training. Thanks for your understanding.




