4th ACM International Conference on AI in Finance (ICAIF-23)
Accepted workshops are outlined on this page. The workshops will be held on the first day of the conference, November 27, 2023.
Accepted Workshops
| Workshop Title | Timing | Workshop Info |
|---|---|---|
| NLP and Network Analysis in Financial Applications | Morning | See Details |
| Machine Learning for Investor Modelling and Recommender Systems | Morning | See Details |
| Transfer Learning and its Applications in Finance | Morning | See Details |
| AI in Africa for Sustainable Economic Development – Opportunities and Challenges of Generative AI (LLMs) | Morning (Virtual) | See Details |
| Synthetic Data for AI in Finance | Afternoon | See Details |
| AI Safety and Robustness in Finance: Will AI Make or Break the Next Generation Financial Systems? | Afternoon | See Details |
| Explainable AI in Finance | Afternoon | See Details |
| Women in AI and Finance Workshop + Reception & Networking | Evening | See Details |
AI Safety and Robustness in Finance: Will AI Make or Break the Next Generation Financial Systems?
Overview
On April 23, 2013, at 1:07 pm, the Associated Press Twitter account tweeted: “Breaking: Two Explosions in the White House and Barack Obama is injured”. The account, which had around 2 million followers on Twitter, had been hacked. Within seconds of the tweet’s release, virtually all U.S. markets plunged on the false news and went into a spiral that required intervention. The S&P 500 dropped 14 points, to as low as 1,563.03, in about five seconds, and the Dow Jones Industrial Average temporarily fell 143.5 points, around 0.98 percent. Reuters data showed the tweet wiped out $136.5 billion of the S&P 500 index’s value within minutes. The attack was one of the earlier signs of the dangers of automation without safety measures in financial services, and of how feedback loops can produce large impacts in the blink of an eye. In May 2023, a similar event occurred with an AI-generated image of a fake explosion at the Pentagon: U.S. stock markets went into a downward spiral before circuit breakers kicked in.
Artificial intelligence solutions have a rapidly growing list of use cases in financial services, ranging from customer service solutions and personal financial assistants to sentiment-based trading systems and AI-based wealth management advisory. Compared to 2013, AI systems are built with more security measures and guardrails. However, AI safety, robustness, and ethics techniques are in their infancy compared to the AI systems themselves, and they are progressing much more slowly. With the announcement of GPT-4, AI safety gained renewed interest in 2023. GPT-4’s advanced capabilities gave rise to a new list of applications, along with concerns about how much damage AI can do if and when things go wrong. Recent progress in LLMs and generative AI systems has also fueled a significant increase in the use of AI for criminal purposes: AI-enabled financial crime has reached unprecedented levels, from voice-synthesized financial scams to customized spear-phishing attacks. ChatGPT and other LLMs make it possible to generate massive numbers of social media posts and blogs to manipulate markets, gaming autonomous AI models into crashing or making incorrect decisions. AI-generated malware and cyber threats pose significant risks to financial firms.
Furthermore, generative AI systems frequently exhibit “emergent behavior”: completely unforeseen behaviors and capabilities. The complexity of current AI solutions makes it nearly impossible for development teams to fully predict the resulting system characteristics and design guardrails accordingly. Such capabilities pose dangers to the financial system and the broader society if no safety measures are taken. Even though financial services are among the most advanced industries in AI ethics practices, broader research on AI ethics beyond fairness and explainability requires novel paradigms.
In 2023, over 1,000 researchers and practitioners signed a petition to temporarily “Pause AI Research”. The petition called for a halt to research on “all AI systems more powerful than GPT-4” until shared safety protocols for AI are developed and implemented. While the community debates the ramifications of pausing or not pausing AI research, it is clear that research on AI safety, ethics, and robustness is needed more than ever, as it affects both current and future AI systems.
This workshop aims to tackle the emerging challenges that rapidly developing AI solutions pose to the financial sector and to society. Topics of focus include:
- Use of GPT-4 and LLMs in financial services
- Emerging use of LLMs in financial crime (from financial scams to market manipulation and spear-phishing attacks)
- Systemic risks the LLM capabilities pose to the financial markets
- Upcoming security challenges in AI in finance
- AI safety protocols for financial services applications
- AI robustness techniques and solutions
- Emergent behavior phenomena in AI
- Benefits and drawbacks of a potential “AI Pause”
- Role of Model Governance in regulating LLMs
- Advanced AI monitoring and regulation capabilities
- Ethical AI capabilities beyond “credit fairness”
Robustness, safety, and ethical behavior of AI systems have become primary concerns, and they will likely play a significant role in the potential success of AI as well as the resulting progress (or failure) of an AI-guided society. The workshop will explore current next-generation LLM and AI applications in finance, analyze potential threats posed by criminal organizations through advanced AI use, and discuss the development of industry- and application-wide safety protocols, end-to-end AI robustness techniques, model monitoring and regulation tools, advanced AI ethics solutions, and other critical research areas.
The workshop will bring together industry researchers, practitioners, academics, and regulators to discuss emerging trends, challenges, novel solution approaches, and the latest safety, ethics, and robustness tools and technologies, with the goal of advancing the state of the art in safety, robustness, and ethical practice in the AI-in-finance community.
Organizers
- Senthil Kumar, Head of Emerging Research, AI Foundations, Capital One
- Naftali Cohen, Senior Data Scientist at Schonfeld and Lecturer at Columbia University
- Eren Kurshan, Head of Research and Methodology, Morgan Stanley
- Ani Calinescu, Professor, Oxford University
- Terrence Bohan, Vice President, FINRA
- Gideon Mann, Head of ML Product and Research, Bloomberg
- Clark Barrett, Professor and Co-Director, Stanford Center for AI Safety
- Paul Burchard, Managing Director, Goldman Sachs
- Christopher Policastro, Data Scientist, Bank of New York Mellon
- Yu Yu, Director of Data Science at BlackRock
Machine Learning for Investor Modelling and Recommender Systems
Overview
Recommender systems are an emerging tool in financial services. There is a growing need for recommender systems designed for investors that incorporate client demographics, preferences, and behaviors, and that are supported by proper policy and regulation to protect financial agents from inappropriate recommendations.
Current state-of-the-art recommender systems use vast amounts of user-item interaction data to infer users’ preferences. While it is essential to consider individual investors’ preferences, this should not be the model’s only objective. Financial products differ from other products and online services: a financial recommender system must consider the risk and return of a recommended asset or portfolio relative to the client’s existing portfolio, risk tolerance, and financial preferences, as well as predictions of the asset’s future behavior. Further, some recommender systems are targeted not only at individual investors but also at institutional users such as financial advisors, who require new tools to oversee a broad array of clients. Recommender systems for financial products therefore represent a unique challenge compared to other recommendation domains.
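To make this concrete, here is a minimal, hypothetical Python sketch: it scores candidate assets by the change in a simple mean-variance utility when the client’s existing portfolio is tilted toward each asset, rather than by predicted preference alone. The return forecasts, covariance matrix, and risk-aversion parameter are illustrative assumptions, not a production model.

```python
# Minimal sketch (hypothetical inputs): rank candidate assets by the utility
# gain of tilting the client's existing portfolio toward them, so that risk,
# return, and current holdings all enter the recommendation score.
import numpy as np

rng = np.random.default_rng(1)
n_assets = 6
mu = rng.uniform(0.02, 0.10, n_assets)                       # assumed return forecasts
A = rng.normal(size=(n_assets, n_assets))
cov = 0.02 * (A @ A.T / n_assets + 0.5 * np.eye(n_assets))   # assumed asset covariance

def utility(w, risk_aversion):
    """Mean-variance utility of portfolio weights w."""
    return w @ mu - 0.5 * risk_aversion * (w @ cov @ w)

def score_candidates(w_current, risk_aversion, step=0.05):
    """Score each asset by the utility change from shifting `step` weight into it."""
    base = utility(w_current, risk_aversion)
    scores = {}
    for i in range(n_assets):
        tilted = w_current * (1 - step)
        tilted[i] += step                                     # weights still sum to 1
        scores[i] = utility(tilted, risk_aversion) - base
    return sorted(scores.items(), key=lambda kv: -kv[1])

# A client concentrated in assets 0 and 1, with moderate risk aversion.
w_client = np.array([0.6, 0.4, 0.0, 0.0, 0.0, 0.0])
for asset, gain in score_candidates(w_client, risk_aversion=4.0)[:3]:
    print(f"candidate asset {asset}: utility gain {gain:+.4f}")
```

Under these assumptions, the same asset can be a good recommendation for one client and a poor one for another, purely because of differences in existing holdings and risk aversion.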
Investor demographics and trading behaviours can directly influence the guidance recommender systems provide, so an accurate understanding of investors through modelling is paramount. Recommender systems that incorporate investor modelling can target different end-users, such as clients, financial advisors, dealerships, and regulators. Incorporating investor modelling into recommender systems designed for clients, households, advisors, and dealerships is pivotal to the future of financial decision-making.
Currently, there is a gap between industry and academia in applying ML to investor modelling and recommender systems. This workshop will bring together academic and industry participants working on state-of-the-art research in designing recommender systems that incorporate investor preferences, demographics, historical and predicted trading behaviors, and financial details. It builds on previous workshops on machine learning for investor modelling, where we identified recommender systems as an emerging area of quantitative behavioral finance, and it reaches more broadly to bring together researchers and industry participants working on new methodologies in AI and recommender systems to share their recent results in this fast-moving field.
Organizers
- Dr. Igor Halperin, AI researcher and a Group Data Science leader, Fidelity Investments
- Dr. Svitlana Vyetrenko, Executive Director, J.P. Morgan AI Research
- Mr. Thomas J. De Luca, Senior Researcher, Investor Behavior, Investment Strategy Group, Vanguard
- Prof. Alberto Rossi, Professor of Finance, McDonough School of Business, Georgetown University
- Prof. Yongjae Lee, Assistant Professor in the Department of Industrial Engineering, Ulsan National Institute of Science and Technology (UNIST)
- Prof. John R.J. Thompson (Lead Organizer), Assistant Professor, University of British Columbia
Explainable AI in Finance
Overview
Explainable AI (XAI) is an increasingly critical component of operations within the financial industry, driven by the growing sophistication of state-of-the-art AI models and the demand that these models be deployed in a safe and understandable manner.
The financial setting brings unique challenges to XAI due to the consequential nature of the decisions taken on a daily basis. There are two encompassing dimensions to this: macro-financial stability and consumer protection. Financial markets transfer enormous amounts of assets every day. AI-powered automation of a substantial fraction of these transactions, especially by big players in key markets, poses a risk to financial stability if the underlying mechanisms driving market-moving decisions are not well understood; in the worst case, this could trigger a crisis-level meltdown.
At the same time, and just as important as macro-stability, is consumer protection. Automation within the financial sector is tightly regulated: in the US consumer credit space, the Equal Credit Opportunity Act (ECOA), as implemented by Regulation B, demands that explanations be provided to consumers for any adverse action by a creditor; in the EU, consumers have the right to demand meaningful information for automated decisions under the General Data Protection Regulation (GDPR). Safe and effective usage of AI within finance is thus contingent on a strong understanding of theoretical and applied XAI.
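As a concrete illustration of the adverse-action requirement above, the following minimal Python sketch maps per-feature contributions of a simple credit-decline model to ranked “principal reason” codes. The model, data, and feature names are entirely hypothetical, and real reason-code generation and regulatory compliance are far more involved than this toy example.

```python
# Minimal sketch (hypothetical model and features): turn per-feature contributions
# of a linear decline model into ranked reason codes for an adverse-action notice.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
features = ["utilization", "delinquencies", "income", "account_age"]   # hypothetical names
X = rng.normal(size=(1000, 4))
y = (X @ np.array([1.2, 1.0, -0.8, -0.5]) + 0.3 * rng.normal(size=1000) > 0).astype(int)  # 1 = decline
model = LogisticRegression().fit(X, y)

def reason_codes(x, top_k=2):
    """Rank features by how much they push this applicant toward 'decline' vs. the average applicant."""
    contrib = model.coef_[0] * (x - X.mean(axis=0))
    order = np.argsort(contrib)[::-1]                # most decline-driving first
    return [features[i] for i in order[:top_k] if contrib[i] > 0]

applicant = X[np.argmax(model.decision_function(X))]  # a clearly declined applicant
print("Adverse action; principal reasons:", reason_codes(applicant))
```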
Currently, there is no industry-wide consensus on which XAI techniques are appropriate for the different parts of the financial industry, or indeed on whether the current state of the art is sufficient to satisfy the needs of all stakeholders. This workshop aims to bring together academic researchers, industry practitioners, and financial experts to discuss the key opportunities and focus areas within XAI, both in general and in facing the unique challenges of the financial sector.
Organizers
- Sanghamitra Dutta, PhD, Assistant Professor in Electrical and Computer Engineering at University of Maryland College Park USA
- Andreas Joseph, PhD, Senior Research Economist, Advanced Analytics Division, Bank of England, and Research Fellow at the Data Analytics for Finance and Macro (DAFM) Research Centre at King’s College London
- Jundong Li, PhD, Assistant Professor in Electrical and Computer Engineering at University of Virginia USA
- Saumitra Mishra, PhD, Vice President/AI Research Lead at J.P. Morgan. Member of XAI Center of Excellence at J.P. Morgan.
- Francesca Toni, PhD, Professor in Computational Logic at the Department of Computing, Imperial College London and Royal Academy of Engineering / J.P. Morgan Research Chair in Argumentation-based Interactive Explainable AI
- Adrian Weller, PhD, Director of Research in Machine Learning at the University of Cambridge and Head of Safe and Ethical AI at The Alan Turing Institute
AI in Africa for Sustainable Economic Development – Opportunities and Challenges of Generative AI (LLMs)
Overview
The global AI landscape has been drastically changed by the disruption caused by ChatGPT. This development has the potential to advance AI in Africa and change the overall economic outlook of the continent. However, African countries that hope to adopt it will have to develop innovative solutions to overcome, among other issues, the limited availability and high costs associated with the use of different large language models (LLMs).
The application of AI in the finance sector has grown considerably over the years, prompting significant investment in the area. These research efforts have been led predominantly by the developed world, whose solutions might not fit well given the complexity of African nations. The AIA initiative is backed by a group of AI researchers whose focus is to extend these AI capabilities to the finance sector and foster collaboration in Africa.
The main theme of the 2023 workshop is the opportunities that generative AI (LLMs) offers for the advancement of AI in Africa. Breakout sessions led by experts will discuss various aspects of generative AI applications, and a panel discussion will focus on the good, the bad, and the ugly of generative AI adoption in Africa.
Organizers
- Allan Anzagira, Senior AI Research Associate, J.P. Morgan AI Research
- Toyin Aguda, Senior AI Research Associate, J.P. Morgan AI Research
- Babatunde Sawyerr, Senior Lecturer, University of Lagos
- Kayode Olayele, Post-Doctoral Fellow, Data Science for Social Impact Lab, University of Pretoria
- Charese Smiley, AI Research Lead, J.P. Morgan AI Research
- Samuel Assefa, Head of AI, U.S. Bank
- Saheed Obitayo, Senior AI Research Associate, J.P. Morgan AI Research
- Samuel Mensah, Senior AI Research Associate, J.P. Morgan AI Research
Synthetic Data for AI in Finance
Overview
Synthetic data generation has emerged as a popular research area in both academic and industry research labs. The financial industry in particular has demonstrated strong interest due to the highly regulated nature of the business and the sensitivity of individual financial information. The hope is that synthetic data will enable internal and external collaborations through the sharing of realistic but privacy-preserving data, which is currently impossible due to legal requirements and internal policies. Such collaborations open up possibilities for improved customer experiences and protections (e.g., against fraud) at financial institutions.
Many questions surrounding synthetic data remain, however: (i) privacy guarantees and their robustness to attacks such as membership inference; (ii) fairness implications of utilizing synthetic data; and (iii) how to assess the quality, utility, and diversity of synthetic data. Each must be interpreted in light of the specific technical, legal, and practical challenges of working with sensitive financial and healthcare information about individuals.
The goal of this workshop is to bring together researchers from academia with practitioners and regulators from these industries to understand the evolving landscape, and to serve as a venue for cross-pollination between academic research and practical experience with the challenges of using synthetic data in industry. Our main goals are to develop an understanding of the most important open problems, survey current methods and their limitations, and establish a series of cross-disciplinary good practices.
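As one illustration of the quality/utility question (iii), a common check is “train on synthetic, test on real” (TSTR). The minimal Python sketch below uses toy stand-in generators for both the “real” and the “synthetic” data, so the numbers are illustrative only; a real study would plug in an actual synthesizer and domain-relevant models.

```python
# Minimal sketch (toy data): TSTR utility check -- train a model on synthetic
# records, evaluate on held-out real records, and compare to a train-on-real baseline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def make_data(n, noise):
    """Stand-in generator; `noise` degrades the label signal."""
    X = rng.normal(size=(n, 5))
    logits = X @ np.array([1.5, -1.0, 0.5, 0.0, 0.0]) + noise * rng.normal(size=n)
    return X, (logits > 0).astype(int)

X_real, y_real = make_data(5000, noise=0.3)   # plays the role of real records
X_syn, y_syn = make_data(5000, noise=0.8)     # plays the role of (degraded) synthetic records
X_tr, X_te, y_tr, y_te = train_test_split(X_real, y_real, test_size=0.3, random_state=0)

auc_real = roc_auc_score(y_te, LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
auc_tstr = roc_auc_score(y_te, LogisticRegression().fit(X_syn, y_syn).predict_proba(X_te)[:, 1])
print(f"train-on-real AUC: {auc_real:.3f}   TSTR AUC: {auc_tstr:.3f}")
# A large gap suggests the synthetic data loses predictive utility for this task.
```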
Organizers
- Rachel Cummings, Associate Professor of Industrial Engineering and Operations Research at Columbia University
- Giulia Fanti, Assistant Professor of Electrical and Computer Engineering at Carnegie Mellon University
- Guang Cheng, Professor of Statistics and Data Science and Director of the Trustworthy AI Lab at UCLA
- Robert E. Tillman, Research Director at Optum Labs, the R&D arm of UnitedHealth Group
- Vamsi K. Potluru, Research Director at JP Morgan AI Research
Transfer Learning and its Applications in Finance
Overview
In 1976, S. Bozinovski and A. Fulgosi published one of the earliest papers on transfer learning in neural networks, “The influence of pattern similarity and transfer learning upon training of a base perceptron b2” (originally in Croatian), in the Proceedings of Informatica. They provided a mathematical model for this particular transfer learning problem and explained both “positive” and “negative” transfer from a geometric point of view. Since then, transfer learning has been widely adopted in cognitive science and computer science, and the recent rise of large language models relies to a great extent on transfer learning techniques. Transfer learning has also been gaining popularity in the field of finance. Meanwhile, researchers have continued to study “positive” and “negative” transfer and have proposed various theoretical insights and quantitative measurements. In this workshop, we will present these current studies on transfer learning and introduce some of its applications in finance.
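As a minimal, self-contained illustration of measuring positive versus negative transfer (a toy stand-in, not the geometric analysis of the original paper), the Python sketch below fits a ridge regression that shrinks toward source-task coefficients and compares its test error against training from scratch on a small target task. All data and parameters are synthetic assumptions.

```python
# Minimal sketch (synthetic tasks): transfer is "positive" when shrinking toward
# the source-task solution lowers target test error, and "negative" when it raises it.
import numpy as np

rng = np.random.default_rng(0)
d, n_target = 20, 40                      # few target samples: the regime where transfer matters

def ridge_transfer(X, y, w_src, lam=5.0):
    """Solve argmin_w ||y - Xw||^2 + lam * ||w - w_src||^2 in closed form."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y + lam * w_src)

def make_task(w_true, n):
    X = rng.normal(size=(n, d))
    return X, X @ w_true + 0.1 * rng.normal(size=n)

w_target = rng.normal(size=d)
X_tr, y_tr = make_task(w_target, n_target)
X_te, y_te = make_task(w_target, 2000)
mse = lambda w: np.mean((X_te @ w - y_te) ** 2)
w_scratch = ridge_transfer(X_tr, y_tr, np.zeros(d))              # no transfer (plain ridge)

for name, w_src in [("similar source", w_target + 0.2 * rng.normal(size=d)),
                    ("unrelated source", rng.normal(size=d))]:
    gain = mse(w_scratch) - mse(ridge_transfer(X_tr, y_tr, w_src))
    print(f"{name}: test-MSE gain from transfer = {gain:+.4f}")   # >0 positive, <0 negative
```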
Organizers
- Haoyang Cao, Centre de Mathématiques Appliquées (CMAP), École Polytechnique
- Haotian Gu
- Xin Guo, University of California, Berkeley
NLP and Network Analysis in Financial Applications
Overview
Applications of NLP and network science in finance have received tremendous attention over the last decade. An increasing number of areas in applied finance are successfully leveraging and blending tools from NLP, network analysis, and graph machine learning for tasks ranging from asset pricing, portfolio construction, and risk management to understanding large-scale supply chain networks, market crashes, and fraud detection. We will engage in in-depth discussions on critical subjects, including information retrieval and extraction techniques tailored for financial texts, trend prediction methodologies utilizing text data, the potential use of large language models in financial analysis, and progress in generative NLP for finance applications.
This workshop is a continuation of two past iterations at ICAIF (’21 and ’22) and aims to illustrate the broad interplay between these techniques and analysis tools in the context of financial applications, showcasing a suite of problems of interest to both researchers and practitioners.
In addition to attracting high-quality research contributions, the workshop aims to mobilize researchers working in related areas to form a community. It will also provide a platform to exchange ideas and foster further interdisciplinary research collaborations.
Organizers
- Leman Akoglu, Dean’s Associate Professor of Information Systems, Carnegie Mellon University
- Ivan Brugere, AI Research Scientist, J.P. Morgan
- Mihai Cucuringu, Associate Professor, University of Oxford and Turing Fellow
- Xiaowen Dong, Associate Professor, University of Oxford
- Saurabh Nagrecha, Applied ML Manager, Google
- Stefan Zohren, Associate Professor, University of Oxford, Turing Fellow and Principal Quant at Man Group
- Lukasz Szpruch, Professor, University of Edinburgh
- Mark Klaisoongnoen, PhD Candidate, EPCC at the University of Edinburgh
- Claire Barale, PhD Candidate, University of Edinburgh
Women in AI and Finance
Overview
Our objective at the Women in AI and Finance workshop is to cultivate a diverse and inclusive community, equipping individuals to navigate the ever-evolving intersection of AI and Finance.
In today’s increasingly AI-driven world, where applications span various sectors, the financial industry emerges as one of the most transformative domains. However, despite the progressive nature of this field, women often face unique challenges that can hinder their career growth.
Our workshop is dedicated to empowering early-career professionals by offering an engaging platform for learning and networking. We place a strong emphasis on mentorship and fostering a supportive, collaborative network. We bring forth insights from successful women leaders in the finance industry. Attendees will have the opportunity to participate in open discussions about career challenges and opportunities, learn from others’ experiences with mentorship, and establish meaningful professional connections during dedicated networking sessions.
Organizers
- Zhen Zeng, Research Lead, J.P. Morgan AI Research
- Tingting (Rachel) Chung, Clinical Associate Professor, Raymond A. Mason School of Business
- Rachneet Kaur, Research Scientist, J.P. Morgan AI Research
- Guiling (Grace) Wang, Distinguished Professor and Associate Dean for Research, New Jersey Institute of Technology