From ML Miracles to AI Trash: Taming the Bullshit in LLMs
I’ve been preaching the gospel of Large Language Models for years: I got the ChatGPT API into production in March 2023 and spoke about LLMs at EuroPython 2023. But lately I’m overwhelmed by pointless AI-generated content, the era of Bullshit AI, where powerful tools are misused to churn out meaningless noise.
- timeslot: Monday 7th April 2025, 14:00-15:00, Room D
- tags: AI
I was hooked on Large Language Models (LLMs) before they were trendy: using them in my Genomics and Genetics research (with the popular DNA benchmark https://github.com/ML-Bioinfo-CEITEC/genomic_benchmarks), pushing the ChatGPT API into production in a media monitoring app in March 2023, raving about their potential at the EuroPython 2023 and ML Prague 2024 conferences, and teaching both university students and corporate non-techies how to use them. But lately I’ve noticed something rotten: AI that once felt magical is now churning out spam, clogging screens, and, worst of all, creeping into projects I’ve prototyped myself. I call it Bullshit AI.
Why does a tool so easy to use and brimming with potential turn into a productivity sinkhole? In this talk, I’ll dive into the mess with real-world examples of AI gone wrong. Then, I’ll flip the script—showing how to wield LLMs right. We’ll explore:
- Pair programming with AI that doesn’t suck (Cursor is my new rubber duck)
- Running local, open models on your own hardware (a minimal sketch follows this list)
- Fine-tuning for niche languages or tasks: when it’s worth it, and when it’s not
- The latest LLM developments: thinking models, tool use, and agent frameworks
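
To give a flavour of the “local, open models” point above, here is a minimal sketch of running a small open model on your own machine with the Hugging Face transformers library; the model name is only an example, and any small instruct model you have downloaded locally will do.

```python
# Minimal sketch (assumptions: the `transformers` package is installed and the
# example model below is available locally or can be downloaded from the Hub).
from transformers import pipeline

generate = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

prompt = "In one sentence, why run an LLM on your own hardware?"
result = generate(prompt, max_new_tokens=60, do_sample=False)

# The pipeline returns a list of dicts; "generated_text" holds the prompt
# plus the model's continuation.
print(result[0]["generated_text"])
```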
Expect a mix of war stories, practical tips, and a plea to stop the bullshit before it eats the world.
Biostatistician by training, time-series forecaster at Google, and “gradient boosting guy” at Simple Finance. Currently entangling knots on protein backbones at Masaryk University and taming llamas and other LLM creatures at Mediaboard.
