Newton Howard, Head of the Neuroscience Computing Lab at the Nuffield Department of Surgical Sciences, is actively involved in organising this year's Extreme Learning Machines (ELM) conference.

The ELM conference provides an annual forum for academics, researchers and engineers to share and exchange R&D experience on both theoretical studies and practical applications of biological learning and the ELM technique.  

ELM2016 will focus on Big Data, Hierarchical Machine Learning and Biological Learning.

ELM is an exciting field of AI: it represents a suite of (machine, and possibly biological) learning techniques in which the hidden neurons need not be tuned. ELM theory shows that very effective learning algorithms can be derived from randomly generated hidden neurons (with almost any nonlinear piecewise continuous activation function), independently of the training data and the application environment. Growing evidence from neuroscience suggests that similar principles operate in biological learning systems. ELM theory argues that "random hidden neurons" capture an essential aspect of biological learning mechanisms, and supports the intuition that the efficiency of biological learning need not rely on the computing power of individual neurons. It thus hints at possible reasons why the brain can be more intelligent and effective than current computers. In practice, ELM offers significant advantages over conventional neural-network learning algorithms: fast learning speed, ease of implementation, and minimal need for human intervention. ELM also shows promise as a viable alternative technique for large-scale computing and artificial intelligence.
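The closed-form idea behind ELM can be sketched in a few lines of NumPy: input weights and biases are drawn at random and never tuned, and only the output weights are solved for by least squares. This is a minimal illustrative sketch, not code from any ELM toolbox; the function names and parameters are assumptions made for this example.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, rng=None):
    """Train a minimal ELM regressor (illustrative sketch).

    Hidden-layer weights W and biases b are random and untuned;
    only the output weights beta are computed, in closed form.
    """
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Apply the same random hidden layer, then the learned output weights."""
    return np.tanh(X @ W + b) @ beta
```

Because no iterative tuning of the hidden layer is involved, training reduces to a single pseudoinverse computation, which is the source of ELM's fast learning speed.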

Topics at ELM2016 include:

  • Universal approximation, classification and convergence
  • Robustness and stability analysis
  • Biological learning mechanism and neuroscience
  • Machine learning science and data science

  • Real‐time learning, reasoning and cognition
  • Sequential/incremental learning and kernel learning
  • Clustering and feature extraction/selection/learning
  • Random projection, dimensionality reduction, and matrix factorization
  • Closed form and non‐closed form solutions
  • Hierarchical solutions, and combination of deep learning and ELM
  • No‐Prop, Random Kitchen Sink, FastFood, QuickNet, RVFL, Echo State Networks
  • Parallel and distributed computing / cloud computing

  • Time series prediction, smart grid and control engineering
  • Pattern recognition
  • Social media and video applications
  • Biometrics and bioinformatics
  • Security and compression
  • Human computer interface and brain computer interface
  • Cognitive science/computation
  • Sentic computing, natural language processing and speech processing
  • Big data analytics

The ELM2016 conference is being organised and sponsored by Nanyang Technological University (Singapore), the University of Oxford, and Tsinghua University (China).

For more information and to register, please visit:

For information about the first call for papers, please visit: