Please note: This distinguished lecture will take place in DC 1302.
Erol Gelenbe, Professor
Institute of Theoretical and Applied Informatics
Polish Academy of Sciences

The Random Neural Network (RNN) is a mathematical model that has the “learning” ability required of a neural network, since it is a universal approximator for continuous and bounded functions. In neural network terminology, it is a “recurrent” model in the sense that it can, in general, incorporate feedback loops, and yet it still has a well-defined unique solution despite its non-linear computational structure. In essence, the RNN is a continuous-time, discrete-state-space, multi-dimensional Markov chain whose states are the n-vectors k of natural numbers, where each component represents the instantaneous “excitation level” or “discrete internal voltage” of one of the n neurons.
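For readers who have not seen the model, here is a brief sketch in the standard RNN notation (the symbols below are conventional and are not fixed by this abstract): neuron i fires at rate r_i whenever k_i > 0; its spike reaches neuron j as an excitatory signal with probability p^+_{ij}, reaches j as an inhibitory signal with probability p^-_{ij}, or leaves the network with probability d_i; external excitatory and inhibitory spikes arrive at neuron i as independent Poisson streams with rates Λ_i and λ_i. Writing e_i for the i-th unit vector, typical transitions of the chain are
\[
k \to k + e_i \ \text{at rate } \Lambda_i, \qquad
k \to k - e_i \ \text{at rate } (\lambda_i + r_i d_i)\,\mathbf{1}\{k_i > 0\}, \qquad
k \to k - e_i + e_j \ \text{at rate } r_i\, p^{+}_{ij}\,\mathbf{1}\{k_i > 0\},
\]
while an inhibitory spike from i to j removes one unit from both k_i and k_j (or only from k_i when k_j = 0).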
In this presentation we shall first define the RNN model and derive its Chapman-Kolmogorov (differential-difference) equations that characterize the underlying Markov chain. We will show that, under certain conditions, it has a unique stationary solution that is obtained from an “exact non-linear mean-field equation”. Furthermore, similar to certain queueing networks (Jackson, BCMP) that have linear mean-field equations, the RNN has a Product Form Solution, so that its stationary probability distribution is the product of the marginal distributions associated with each individual neuron. The analytical structure we have described leads to an O(n^3) gradient-based deep learning algorithm, and to the use of other optimization techniques such as FISTA. Based on these results, we will illustrate the use of the RNN for very diverse applications, such as patented anomaly detection from Magnetic Resonance Images, color texture learning and generation, reinforcement-learning-based packet network routing, and the detection of Botnets and other cyberattacks.
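As a concrete illustration of the “exact non-linear mean-field equation” and the product form mentioned above, here is a minimal Python sketch in the standard RNN formulation, where w^+_{ij} and w^-_{ij} are the excitatory and inhibitory spike rates from neuron i to neuron j; the function names and the small random example are illustrative and are not part of the talk.

```python
import numpy as np

def rnn_fixed_point(W_plus, W_minus, Lambda, lam, r=None, tol=1e-10, max_iter=10_000):
    """Iterate the RNN mean-field equations q_i = lambda^+_i / (r_i + lambda^-_i).

    W_plus[i, j]  : excitatory spike rate from neuron i to neuron j
    W_minus[i, j] : inhibitory spike rate from neuron i to neuron j
    Lambda[i]     : external excitatory Poisson arrival rate at neuron i
    lam[i]        : external inhibitory Poisson arrival rate at neuron i
    r[i]          : total firing rate of neuron i; defaults to the row sums
                    of W_plus + W_minus (i.e. no spikes leave the network)
    Returns q, where q[i] is the stationary probability that neuron i is excited.
    """
    n = len(Lambda)
    r = (W_plus + W_minus).sum(axis=1) if r is None else np.asarray(r, dtype=float)
    q = np.zeros(n)
    for _ in range(max_iter):
        lam_plus = Lambda + q @ W_plus    # total excitatory arrival rate at each neuron
        lam_minus = lam + q @ W_minus     # total inhibitory arrival rate at each neuron
        # Clip below 1 so the iteration stays defined even in unstable cases.
        q_new = np.minimum(lam_plus / (r + lam_minus), 1.0 - 1e-12)
        if np.max(np.abs(q_new - q)) < tol:
            break
        q = q_new
    return q_new

def product_form_probability(q, k):
    """Stationary probability of the joint excitation state k, valid when all q_i < 1."""
    q = np.asarray(q, dtype=float)
    k = np.asarray(k, dtype=int)
    return float(np.prod((1.0 - q) * q ** k))

# Tiny illustrative example (the numbers carry no special meaning).
rng = np.random.default_rng(0)
W_plus, W_minus = rng.uniform(0.0, 1.0, (2, 3, 3))
q = rnn_fixed_point(W_plus, W_minus, Lambda=np.array([1.0, 0.5, 0.2]), lam=np.zeros(3))
print(q, product_form_probability(q, [2, 0, 1]))
```

The fixed-point iteration reflects the non-linear mean-field equation, and the product over neurons reflects the Product Form Solution; both are meaningful only under the stability condition that every q_i stays below 1.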
Bio: Erol Gelenbe, FIEEE, FACM, FIFIP, graduated from METU (Turkey), received his PhD from the Tandon School of NYU, and holds the Doctor of Mathematical Sciences degree from the Sorbonne. He has served as a faculty member or Department Head at the University of Michigan, the University of Liège (Belgium), the University of Paris, INRIA, École Polytechnique, Duke University, the University of Central Florida, and Imperial College.
His many prizes, honors and awards include the Parlar Foundation Science Award (Turkey, 1994), the Grand Prix France Télécom (1996), the UK Oliver Lodge Medal for Innovation, the Imperial College Rector’s Research Award, the ACM SIGMETRICS Life-Time Achievement Award, the Mustafa Prize (2017), and Honorary Degrees from the University of Roma II, Boğaziçi University (Istanbul), and the University of Liège. He was elected to Academia Europaea, the French National Academy of Technologies, the Science Academies of Hungary, Poland and Turkey, the Royal Academy of Sciences, Arts and Letters of Belgium, and the Science Academy of the Organization of Islamic States. He was awarded Chevalier de la Légion d’Honneur and Commandeur de l’Ordre national du Mérite by the President of France, Commendatore al Merito della Repubblica and Grande Ufficiale della Stella d’Italia by the President of Italy, Commandeur de l’Ordre de la Couronne by the King of Belgium, and the Cross of Officer of Merit by the President of Poland.
Erol has graduated over 90 PhD students, including 25 women. Four of his former PhD students were elected to the national academies of Canada and France. Erol was recently elected to the Indian National Science Academy, which stated that Erol “is a pioneering researcher in Computer Systems and Networks. Using Markovian and semi-Markov methods he obtained several seminal analytical results regarding the page fault rates in large classes of memory management algorithms, he derived the stability and optimal control of the ALOHA communication system, and the load dependent optimal values of checkpoints for databases. He invented new modeling and analysis methods, including the G-Network model. He invented the spiking random neural network and its deep learning, auto-associative and reinforcement algorithms. His technological contributions include a patented optimal architecture for many-to-many communications, patented reinforcement learning routing for edge networks and the Internet, and the industrial simulation tool Flexsim.”