Oscar Cordón was the founder and leader of the Virtual Learning Center (2001-05) and Vice President of Digital University (2015-19) at the University of Granada (UGR). He was one of the founding researchers of the European Centre for Soft Computing (2006-11) and remained under contract there as a Distinguished Affiliated Researcher until December 2015. He is currently a Professor at the UGR. For about 30 years he has been an internationally recognized contributor to research and development programs on the fundamentals and real-world applications of computational intelligence. He has published about 400 peer-reviewed scientific publications, including a research book on genetic fuzzy systems (with ~1500 citations in Google Scholar) and 123 JCR-SCI-indexed journal papers (75 in Q1 and 44 in D1), advised 22 Ph.D. dissertations, and coordinated 41 research projects and contracts (with an overall amount of more than 10M€). As of January 2023, his publications had received 6004 citations (h-index = 41), placing him among the 1% most-cited researchers in the world (source: Web of Science), with 16309 citations and an h-index of 62 in Google Scholar. He is also included in the top 2% of the most-cited researchers in the world in the area of artificial intelligence (source: ‘Ranking of the World Scientists: World’s Top 2% Scientists’, Stanford University-Elsevier). In addition, he holds a granted international patent on an intelligent system for forensic identification, commercialized in Mexico and South Africa.
He received the UGR Young Researcher Career Award (2004), the IEEE Computational Intelligence Society (CIS) Outstanding Early Career Award (2011, the first such award conferred), the IFSA Award for Outstanding Applications of Fuzzy Technology (2011), the National Award on Computer Science ARITMEL from the Spanish Computer Science Scientific Society (2014), elevation to IEEE Fellow (2018), the IFSA Fellowship (2019), the Recognition for Scientific Career and Promotion of Artificial Intelligence from the Spanish Association for Artificial Intelligence (2020), the IX ICT Spanish University of Universities (CRUE TIC) IT Professional Career Award (2022), and the Granada Ciudad de la Ciencia y la Innovación Award for Knowledge Transfer (2022), among other recognitions. He was a member of the High-Level Expert Group that developed the Spanish R&D Strategy for Artificial Intelligence for the Spanish Ministry of Science, Innovation and Universities (2018-19). He is, or has been, an Associate Editor of 19 international journals, and was recognized as an Outstanding Associate Editor of IEEE Transactions on Fuzzy Systems (2008) and of IEEE Transactions on Evolutionary Computation (2019 and 2021). Since 2004, he has held many different representative positions in EUSFLAT and the IEEE Computational Intelligence Society.
His current research lines are on artificial intelligence for forensic identification (with the UGR Physical Anthropology lab and several international forensic labs and security forces) and agent-based modeling and social network analysis for marketing (with R0D Brand Consultants in projects for CAPSA, Mercedes, Jaguar-Land Rover, El Corte Inglés, Telefónica, Samsung, Coca Cola Europe, Cola Cao, WiZink, …).
Artificial Intelligence for Skeleton-based Biological Profiling and Forensic Human Identification
Skeleton-based forensic identification methods carried out by anthropologists, odontologists, and pathologists represent the first step in every human identification (ID) process and the victim’s last chance for identification when DNA or fingerprints cannot be applied. They include methods such as biological profiling (BP), comparative radiography (CR), craniofacial superimposition (CFS), and the comparison of dental records. BP involves the study of skeletal remains to find characteristic traits (age, sex, stature, and ancestry) that help determine the identity of the individual. It plays a crucial role in narrowing the range of potential matches during the ID process, prior to corroboration by any ID technique. Meanwhile, CFS aims to overlay a skull with ante-mortem (AM) images of a candidate in order to determine whether they correspond to the same person.
However, practitioners still follow an observational paradigm based on subjective methods introduced many decades ago; namely, the oral description and written documentation of findings, and the manual, visual comparison of AM and post-mortem (PM) data. Designing systematic, automatic, and trustworthy methods to support the forensic anthropologist in applying BP, CFS, CR, and odontogram comparison, avoiding subjective, error-prone, and time-consuming manual procedures, is mandatory to enhance forensic ID. The use of artificial intelligence technologies, in particular computational intelligence methods (evolutionary algorithms, fuzzy sets, and deep learning), computer vision (image segmentation and processing, as well as 3D-2D image registration), and explainable machine learning, is a natural way to achieve this aim. This plenary talk is devoted to presenting some hybrid artificial intelligence systems for CFS and for skeleton-based age-at-death and sex assessment, developed in collaboration with the University of Granada’s Physical Anthropology Lab within an eighteen-year-long research project. Some of those systems are protected by international patents and exploited by Panacea Cooperative Research in the Skeleton-ID software, which is under commercialization in different countries.
Michał Woźniak is a professor of computer science at the Department of Systems and Computer Networks, Wroclaw University of Science and Technology, Poland. His research focuses on machine learning, compound classification methods, classifier ensembles, data stream mining, and imbalanced data processing. Prof. Woźniak has been involved in research projects related to the abovementioned topics and has been a consultant on several commercial projects for well-known Polish companies and public administration. He has published over 300 papers and three books. Prof. Woźniak has received numerous prestigious awards for his scientific achievements, such as the IBM Smarter Planet Faculty Innovation Award (twice) and the IEEE Outstanding Leadership Award, as well as several best paper awards from prestigious conferences.
Continual learning – how to make this task effective
Lifelong Machine Learning (LLML) can overcome the limitations of statistical learning algorithms, which need many training examples and are suited only to isolated, single-task learning. Key features that need to be developed within such systems to benefit from previously learned knowledge include feature modeling, knowledge retention from past learning tasks, knowledge transfer to future learning tasks, updates of previous knowledge, and user feedback.
Moreover, the concept of a task, which appears in many formal definitions of lifelong ML models, seems hard to pin down in many real-life setups, because it is often difficult to distinguish when a particular task finishes and the subsequent one starts. One of the main challenges is the stability-plasticity dilemma, where learning systems have to trade off learning new information against forgetting the old. This is most visible in the catastrophic forgetting phenomenon, defined as the complete loss of previously learned information by a neural network exposed to new information.
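To make catastrophic forgetting concrete, the sketch below (a synthetic illustration of the phenomenon, not an example from the talk) trains a logistic-regression classifier on a task A, then fine-tunes it on a task B whose features interfere with A's. Plain sequential training degrades task-A accuracy, while a simple rehearsal buffer that replays stored task-A examples during task-B training preserves it.

```python
# Toy demonstration of catastrophic forgetting and rehearsal (synthetic data).
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, X, y, steps=800, lr=0.5):
    """Full-batch gradient descent on the logistic loss."""
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w = w - lr * grad
    return w

def accuracy(w, X, y):
    return np.mean((X @ w > 0) == (y > 0.5))

n = 2000
# Task A: the label depends only on feature 0.
XA = rng.normal(size=(n, 2))
yA = (XA[:, 0] > 0).astype(float)

# Task B: the label depends on feature 1, and feature 0 is anti-correlated
# with it, so plain fine-tuning on B actively erodes the task-A solution.
x1 = rng.normal(size=n)
XB = np.column_stack([-x1 + 0.1 * rng.normal(size=n), x1])
yB = (x1 > 0).astype(float)

w = train(np.zeros(2), XA, yA)
acc_before = accuracy(w, XA, yA)

# Naive sequential fine-tuning on task B: task A is forgotten.
w_naive = train(w.copy(), XB, yB, steps=2000)

# Rehearsal: replay a small buffer of stored task-A samples alongside B.
buf = rng.choice(n, size=n // 4, replace=False)
X_mix = np.vstack([XB, XA[buf]])
y_mix = np.concatenate([yB, yA[buf]])
w_replay = train(w.copy(), X_mix, y_mix, steps=2000)

print(f"task A accuracy: before B = {acc_before:.2f}, "
      f"after naive B = {accuracy(w_naive, XA, yA):.2f}, "
      f"after B with replay = {accuracy(w_replay, XA, yA):.2f}")
```

Rehearsal is only one family of remedies; regularization-based approaches (e.g. elastic weight consolidation) and architectural ones address the same stability-plasticity trade-off without storing raw past data.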
The talk will focus on the main approaches to Lifelong Machine Learning from the classifier learning perspective. It will also discuss the open challenges and limitations in this domain.
Prof. Hujun Yin is a Professor of Artificial Intelligence at the University of Manchester. He is also the head of Business Engagement in AI and Data for the Faculty of Science and Engineering. His research areas include AI, machine learning, deep learning, signal/image processing, pattern recognition, time series modelling, bio-/neuro-informatics, and interdisciplinary applications. He has supervised over 30 PhD students and published over 200 peer-reviewed articles. Prof. Yin has received over £5 million in funding from UK research councils (EPSRC, BBSRC), Innovate UK, and industry across 30 projects. Many of his projects involve industries and local SMEs in developing cutting-edge AI solutions to real-world problems, from recycling automation and precision agriculture to medical diagnosis. He has served, or is serving, as an Associate Editor for IEEE Transactions on Neural Networks, IEEE Transactions on Cybernetics, IEEE Transactions on Emerging Topics in Computational Intelligence, and the International Journal of Neural Systems. He has also served as the General Chair or Programme Chair for a number of international conferences in AI, machine learning, and data analytics. He is a member of the EPSRC Peer Review College, a senior member of the IEEE, and a Turing Fellow of the Alan Turing Institute.
A Manifold View of Deep Learning
AI has abruptly landed in the public domain, creating both excitement and fear. Long before its seemingly sudden emergence, researchers had been working on advanced learning methods and on effective and efficient ways of handling and interpreting large amounts of data of increasing complexity, dimensionality, and volume. Whether in biology, the social sciences, engineering, robotics, or computer vision, data is being sampled and accumulated at unprecedented speed and scale. Systematic and automated ways of representing and modelling data for classification or recognition have become a great challenge. While deep learning has become the mainstream methodology for many data-driven machine learning tasks, especially in vision, with abundant deep network architectures being developed, it remains largely unclear how such networks arrive at their decisions. Making sense of deep learning in data and feature spaces through the manifold concept can help elucidate the underlying relationships that the learning uncovers and reveal its possible shortfalls or instabilities. The manifold concept plays an important role in data representations, not only because of the manifold hypothesis, but also because it implicitly underlies the formulation of learning tasks. How well a deep network or its feature maps capture the intrinsic properties of the data determines the capability and performance of the network. Crude, prolonged training may not always guarantee a good representation; instead, organised feature maps can help optimise and explain the outcomes of the network. Examples and case studies will be used to illustrate the manifold concept across a wide range of data-driven methods and learning techniques.
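As a minimal illustration of the manifold hypothesis (a synthetic sketch, not an example from the talk), the snippet below generates data with only two intrinsic degrees of freedom, embeds it in a 50-dimensional space, and uses PCA to show that almost all of the variance is captured by two components. Real data manifolds are nonlinear, which is precisely where deep feature maps come in; a linear embedding is used here only to keep the check transparent.

```python
# Synthetic manifold-hypothesis check: 2 intrinsic dimensions hidden in 50.
import numpy as np

rng = np.random.default_rng(7)

# Two latent degrees of freedom, linearly embedded in 50-D plus small noise.
latent = rng.normal(size=(1000, 2))
embedding = rng.normal(size=(2, 50))
X = latent @ embedding + 0.01 * rng.normal(size=(1000, 50))

# PCA via the singular values of the centred data matrix.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
var_ratio = s**2 / np.sum(s**2)

print("variance captured by the top 2 components:", var_ratio[:2].sum())
```

PCA only recovers linear structure; for curved manifolds one would turn to nonlinear methods (kernel PCA, autoencoders, or the self-organised feature maps the talk alludes to), but the underlying question, how many intrinsic degrees of freedom the data really has, stays the same.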