This page collects papers, books, articles, videos, and other resources suggested by our community for potential future discussions. We welcome suggestions that align with our focus on impactful and seminal machine learning content.
Current Suggestions
- The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity (discussing November 2025)
- AI as Normal Technology (discussed October 2025)
- On the Theoretical Limitations of Embedding-Based Retrieval (discussed September 2025)
- Energy-Based Transformers are Scalable Learners and Thinkers
- Angles Don’t Lie: Unlocking Training-Efficient RL Through the Model’s Own Signals
- Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation
- Text-to-LoRA: Instant Transformer Adaption
- Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples
- Petri: An open-source auditing tool to accelerate AI safety research
- Holistic Agent Leaderboard: The Missing Infrastructure for AI Agent Evaluation
- Alignment Faking in Large Language Models
- MolmoAct: Action Reasoning Models that can Reason in Space
 
How to Suggest Papers
Have a paper, book chapter, news article, lecture, or other resource you think would spark great discussion? You can:
- Contact the organizer directly
- Submit a suggestion via GitHub issue
- Bring it up during our monthly meetings
 
We look for content that is impactful or thought-provoking, or that offers practical insights into how ML/AI works in practice, regardless of format or technical complexity.
Previously Suggested Papers
The following were suggested by community members during our earlier meeting phases:
- Solving olympiad geometry without human demonstrations
- Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
- TOFU: A Task of Fictitious Unlearning for LLMs
- Quantifying the impact of uninformative features on the performance of supervised classification and dimensionality reduction algorithms
- Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
- A Mulching Proposal
- Evaluating and Mitigating Discrimination in Language Model Decisions
- Dive into Deep Learning: Coding Session #4 Attention Mechanism I (MLT Artificial Intelligence)
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
- MiniLLM: Large Language Models on Consumer GPUs
- The TinyLlama project
- On the Opportunities and Risks of Foundation Models
- Challenges in Deploying Machine Learning: a Survey of Case Studies
- Machine Learning and the Future of Bayesian Computation