Artificial Intelligence (AI) Working Group
Cutting-edge AI content for IR professionals
Artificial Intelligence (AI) has been used in investment management for decades, but its ability to transform productivity and performance, in the financial sector and more widely across the economy, leapt forward with the arrival of ChatGPT in late 2022 and the proliferation of AI tools that followed.
In response to these rapid developments, the IR Society convened an AI Working Group, comprising experts from within our membership, to ensure that we provide our members with up-to-date best practice and actionable insights relating to their use of AI.
AI Guidelines
The working group has created a set of guidelines...
AI Guideline Documents
AI Resources
Access all of the Society's AI-related resources through the navigation cards below
AI FAQs
AI (Artificial Intelligence) refers to any system that performs tasks we associate with human intelligence—recognizing patterns, understanding language, making predictions—by learning from data rather than being explicitly hard‑coded for every rule. “AI” ranges from simple classifiers to massive multimodal models and agents that call external tools.
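To make the "learned, not hard-coded" distinction concrete, here is a minimal sketch, assuming scikit-learn is installed; the spam-flagging task, feature names, and data are invented purely for illustration. The classifier infers the rule from labelled examples instead of a programmer writing it by hand.

```python
# A minimal contrast between hard-coded rules and learning from data.
# Assumes scikit-learn is installed; the data below is invented.
from sklearn.tree import DecisionTreeClassifier

# Toy features: [message_length, exclamation_marks] -> spam (1) or not (0)
X = [[20, 0], [35, 1], [120, 6], [90, 5], [15, 0], [110, 7]]
y = [0, 0, 1, 1, 0, 1]

# The "rule" separating the classes is learned from the examples,
# not written explicitly by us.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([[100, 4], [25, 0]]))  # expected: [1 0]
```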
Generative AI refers to models that learn the statistical structure of data and then sample new content from that distribution—text, code, images, audio, video. They power everything from chatbots and design tools to data augmentation and simulation, often guided by prompts or control signals.
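As a toy illustration of that learn-then-sample recipe, the sketch below fits the statistical structure of a short string (character bigram counts) and samples new text from it. The corpus string and sampling parameters are illustrative assumptions; production generative models follow the same pattern at vastly greater scale.

```python
# Toy generative model: learn bigram statistics of some text, then
# sample new text from that learned distribution.
import random
from collections import defaultdict

corpus = "annual report investor relations annual results"
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1  # learn: how often does character b follow a?

def sample(start: str, length: int, seed: int = 0) -> str:
    random.seed(seed)
    out = start
    for _ in range(length):
        nxt = counts[out[-1]]
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out += random.choices(chars, weights=weights)[0]  # sample next char
    return out

print(sample("a", 40))  # plausible-looking gibberish drawn from the stats
```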
Hallucination is when a model confidently generates content that looks plausible but is factually wrong or entirely made up—an artifact of predicting the “next likely token” rather than verifying truth. Techniques like retrieval grounding and stricter prompting help reduce (not eliminate) it.
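A common grounding pattern is sketched below: retrieved passages are placed in the prompt and the model is instructed to answer only from them, and to say so when the answer is absent. Here `call_llm` is a hypothetical placeholder for whatever chat API you use, and the prompt wording is illustrative rather than a guaranteed fix.

```python
# A sketch of retrieval grounding to reduce (not eliminate) hallucination.
# `call_llm` is a hypothetical stand-in for your LLM provider's API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM provider here")

def grounded_answer(question: str, passages: list[str]) -> str:
    context = "\n\n".join(f"[{i+1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer using ONLY the numbered passages below. "
        "Cite passage numbers, and reply 'not in the sources' "
        "if the answer is not present.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```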
A Large Language Model is a neural network with billions (or trillions) of parameters trained on massive text corpora to predict the next token. With scale and instruction tuning, LLMs can follow prompts, reason over long context, write code, and act as general-purpose text (and often multimodal) interfaces.
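The sketch below makes "predict the next token" literal, assuming the Hugging Face transformers library (with PyTorch) and the small public gpt2 checkpoint are available: it scores every vocabulary token as a possible continuation of a prompt and prints the five most likely.

```python
# Next-token prediction with a small LLM. Assumes the Hugging Face
# transformers library, PyTorch, and the public "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Investor relations teams use AI to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The last position holds the model's scores for the *next* token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r:>12}  p={prob:.3f}")
```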
A neural network is a stack of interconnected “neurons” (simple math units) whose connection weights are learned from data. Forward passes transform inputs through layers; backpropagation adjusts weights to reduce error. Depth and width let networks approximate extremely complex functions, at the cost of large datasets and compute. Despite the biological metaphor, today’s nets are engineered math, not mini-brains.
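For the mechanics, here is a toy two-layer network trained on XOR using only NumPy; the layer sizes, learning rate, and step count are arbitrary illustrative choices. The forward pass and the hand-written backpropagation correspond directly to the description above.

```python
# Toy two-layer network trained on XOR with hand-written backpropagation.
# Assumes only NumPy; sizes and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # layer 1 weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # layer 2 weights

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.1
for step in range(5000):
    # Forward pass: transform inputs through the layers.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backpropagation: push the prediction error back through the
    # layers to get gradients, then nudge the weights downhill.
    grad_p = p - y                          # dLoss/dlogits (BCE + sigmoid)
    grad_W2 = h.T @ grad_p
    grad_b2 = grad_p.sum(0)
    grad_h = (grad_p @ W2.T) * (1 - h**2)   # tanh derivative
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(0)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```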