Agenda
Panel Discussion: Empowering Women and Diversity in Tech
Fireside Chat: Anticipating AI Strategies and Holistic Outlooks For 2026/27
- AI Action Plan – What opportunities will open for AI Diplomacy?
- Guiding the financial services industry to strengthen AI risk management and resilience
- Strategies for enhancing fraud detection and prevention
- How will AI data centres affect the financial services industry? Will data management become more efficient?
Speaker, NatWest
THEME 2: Societal Impacts of AI
Networking Break
Mastering The Arts of Data and Models
- Increasing data privacy risk amidst a global AI race
- How should organisations proactively maintain the data commitments and obligations of AI within their models?
- To what extent have we successfully integrated models into business products?
Fireside Chat: AI Agents in Financial Services: A look into the future
- How can we gather our data to drive true innovation?
- Payment and consumer behaviour – what are we seeing?
Research: How Can We Minimize GenAI Model Hallucinations?
While large language models (LLMs) have been a major enabler in the rise of AI – and a major disruptor across industries and use cases – their adoption has been more limited in financial services, and for good reasons. In a highly regulated environment that demands accountability and precision, LLMs lack refinement for enterprise use cases. With an emphasis on huge models with large and diverse datasets, there is a strong potential for hallucinations that affect both customer outcomes and portfolio risk. Detecting and managing hallucinations is difficult because LLM algorithms are not interpretable and users lack the knowledge to challenge them.
Responsible AI practices emphasize deep knowledge of the data utilized in AI development, and FICO has applied this practice to the development of focused language models (FLMs). A focus on very specific data orchestration for training ensures that appropriately high-quality and highly relevant data is chosen; later, this same focused data is used to build both domain and task FLMs from scratch, ensuring that models are correctly focused on the task at hand.
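As a hedged illustration of what such focused data orchestration might look like in practice (this is not FICO's actual pipeline), the sketch below filters a candidate corpus down to domain-relevant documents before any FLM training begins. The DOMAIN_TERMS vocabulary, the relevance heuristic, and the curate threshold are all invented for this example.

```python
# Illustrative sketch only (not FICO's pipeline): curating a focused,
# domain-relevant training corpus before a domain FLM is built.
# The relevance scoring here is a toy keyword heuristic; a production
# system would apply far richer domain criteria and provenance checks.
DOMAIN_TERMS = {"credit", "lending", "affordability", "collateral", "apr"}

def relevance(doc: str) -> float:
    """Fraction of tokens that hit the domain vocabulary."""
    tokens = doc.lower().split()
    return sum(t.strip(".,") in DOMAIN_TERMS for t in tokens) / max(len(tokens), 1)

def curate(corpus: list[str], min_relevance: float = 0.15) -> list[str]:
    """Keep only documents focused enough for the domain FLM's training set."""
    return [doc for doc in corpus if relevance(doc) >= min_relevance]

corpus = [
    "The APR and affordability assessment determine the lending decision.",
    "Celebrity gossip roundup for the week.",
]
print(curate(corpus))  # only the credit-domain document survives
```

In a real curation pipeline, the relevance test would draw on domain expertise, data quality and provenance checks rather than a keyword list; the point is simply that the training data is chosen deliberately and transparently.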
The FLM approach is distinctly different from commercially available LLMs and SLMs, which offer no control over the data used to build the model; this control is crucial for preventing hallucinations and harm. A focused language model enables GenAI to be used responsibly because:
- It affords complete transparency and control over the appropriate, high-quality data on which a core domain-focused language model is built.
- On top of industry domain-focused language models, task-specific focused language models with tight vocabulary and training contexts can be developed for specific enterprise tasks.
- Further, due to the transparency and control of the data, the resulting FLM can be accompanied by a trust score with every response, allowing risk-based operationalization of Generative AI; trust scores measure how well responses align with the FLM’s domain and/or task knowledge anchors (truths), as sketched after this abstract.
How can focused language models (FLMs) overcome the enterprise challenges of LLMs and be a game changer for a wide variety of decisions? In this presentation, FICO will present its groundbreaking research into this concept and illustrate how it can benefit both lenders and consumers.
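To make the trust-score idea concrete, here is a minimal, hypothetical sketch (not FICO's method or API): a generated response is scored against a small set of domain knowledge anchors, and the score gates how the response is used. The embed, cosine, trust_score and route functions, the thresholds, and the lending anchors are all assumptions made for illustration.

```python
# Illustrative sketch (not FICO's implementation): scoring a generated
# response against domain "knowledge anchors" and routing on the result.
# Embeddings here are toy bag-of-words vectors; a real system would use
# the FLM's own representations.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words embedding; stands in for a real model embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def trust_score(response: str, anchors: list[str]) -> float:
    """Highest similarity between the response and any knowledge anchor."""
    r = embed(response)
    return max(cosine(r, embed(a)) for a in anchors)

def route(response: str, anchors: list[str], threshold: float = 0.6) -> str:
    """Risk-based operationalization: accept, send to review, or reject."""
    score = trust_score(response, anchors)
    if score >= threshold:
        return f"accept (trust={score:.2f})"
    if score >= threshold / 2:
        return f"send to human review (trust={score:.2f})"
    return f"reject (trust={score:.2f})"

# Example usage with hypothetical lending-domain anchors.
anchors = [
    "affordability checks must precede any credit limit increase",
    "adverse action notices are required when an application is declined",
]
print(route("an adverse action notice is required when the application is declined", anchors))
```

The thresholds stand in for whatever risk appetite an institution sets; the design point is that every generated response carries a score that can gate automation rather than being trusted blindly.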
Living In A Data Sovereign World
- Steps to overcome data sovereignty requirements and regulatory turmoil
- How should financial institutions adapt to foreign data policies and strategise ROI initiatives?
- How are we embedding tools and frameworks into our business and strategies?