Artificial Intelligence is changing industries fast. This shift is powered by large language models, smart agent systems, and closer human–AI teamwork. From voice assistants that catch subtle meaning to AI teams solving big tasks, we’re moving from single models to connected, smart systems.
In today’s whirlwind of rapid change, it’s easy to feel lost: unsure where to learn, who to follow, or how to truly understand what’s unfolding around you. The real challenge? Finding the right people, the ones quietly shaping the future in niche, cutting-edge fields. And finding them all in one place? That’s almost unheard of.
Until now.
We’re hosting a one-stop experience designed to cut through the noise. A space where industry experts hold deep-dive hack sessions, offering clarity, direction, and real-world insights you won’t find in scattered blogs or endless videos.
You show up. We make it worth your time and your investment. Simple as that.
DataHack Summit 2025 brings together global experts shaping this future. The sessions mix deep technical ideas with real-world problem statements. Whether you’re an engineer, AI professional, product leader, or just an AI enthusiast, these sessions will show you where the field is heading.
This conference is more than just talks and keynotes. It’s a mix of bold ideas, real connections, and hands-on learning. With speakers from industry giants like Microsoft, NVIDIA, AG2, and more, you’ll get inside views on AI models, safe autonomous systems, and how humans and AI can create together.
Here are ten hack sessions that offer fresh thinking and insightful takeaways you can put to use right away.
Voice is quickly overtaking screens as the easiest way to use tech. This session explores how new voice agents blend generative AI, speech tech, and real-time synthesis to make smooth interactions. Speakers share real use cases, from bots handling tough questions to health assistants helping patients. You’ll learn how system design affects the speed and natural flow of the agents. By the end, you’ll know how to build voice apps that feel more human.
You’ll also see how AI models and speech tools come together to power modern voice agents. The session shows how smart system design keeps answers fast and accurate. You’ll hear stories from customer support and healthcare, where voice AI is making a difference. Finally, you’ll get tips for training agents to sound more real, helping you create better, more human conversations.
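The pipeline described above can be sketched as three swappable stages. This is a minimal illustration, not any speaker's actual stack: the `transcribe`, `generate_reply`, and `synthesize` functions are hypothetical stand-ins for real speech-to-text, LLM, and text-to-speech services.

```python
def transcribe(audio: bytes) -> str:
    # Hypothetical stand-in: a real system would call an STT model here.
    return audio.decode("utf-8")

def generate_reply(text: str) -> str:
    # Hypothetical stand-in: a real system would call an LLM here.
    return f"You said: {text}"

def synthesize(text: str) -> bytes:
    # Hypothetical stand-in: a real system would stream TTS audio here.
    return text.encode("utf-8")

def voice_turn(audio_in: bytes) -> bytes:
    """One turn of a voice agent: STT -> LLM -> TTS.

    Keeping each stage independent and swappable is part of what keeps
    latency low and lets you upgrade one model without touching the rest."""
    transcript = transcribe(audio_in)
    reply = generate_reply(transcript)
    return synthesize(reply)

print(voice_turn(b"hello").decode("utf-8"))  # prints "You said: hello"
```

In production, each stage would stream partial results to the next rather than waiting for a full turn, which is the design choice the session's latency discussion centers on.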
Speakers
Attention is what helps large language models focus on the right words at the right time. This session breaks it down with simple diagrams that show how query, key, and value matrices work. You’ll see visual examples, like gravity models, that help make these complex ideas easy to grasp. The session explains why attention is key to coherence and keeping context. It’s great for anyone who wants to really understand one of AI’s most useful tools.
By the end, you’ll know how attention helps generate fluent, relevant text, and how queries, keys, and values shape what the model focuses on. Visual comparisons make the math easier to understand. You’ll build a mental model that helps you debug and fine-tune LLMs more easily.
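The query/key/value mechanic the session visualizes boils down to a few lines of linear algebra. Here is a minimal NumPy sketch of scaled dot-product attention; the toy vectors are made up for illustration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V.

    Each query row is scored against every key; the softmax turns those
    scores into weights deciding how much of each value to blend in."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V, weights

# Three random 4-dimensional token vectors, used as Q, K, and V at once
# (self-attention). Purely illustrative data.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(X, X, X)
print(w.sum(axis=-1))  # each row of attention weights sums to 1.0
```

The weight matrix `w` is exactly the thing the session's diagrams visualize: row *i* shows how much token *i* "attends" to every other token.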
Speaker
Luis Serrano – Founder and Chief Education Officer, Serrano Academy
Think of LLMs not as lone workers, but as a team: each agent has its own task and skill. This session shows how multi-agent systems divide tasks, improve flows, and solve problems that one model can’t. You’ll learn why single LLMs struggle with complex jobs and how tools like CrewAI help agents plan, act, and check results. Real use cases show how these systems adapt when goals change. You’ll leave ready to build teams of AI agents that can grow and work together.
You’ll also learn the difference between single models and agentic systems, and how to split jobs across specialized agents. The session shows you how to use CrewAI to manage these agents and shares examples that prove how scalable and flexible this setup is.
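The division-of-labor idea can be sketched without any framework. This is not CrewAI's actual API, just a plain-Python illustration of the pattern the session covers: specialized agents, each with one skill, chained so each one's output feeds the next.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A minimal agent: a role name plus one skill it can run."""
    role: str
    skill: Callable[[str], str]

class Crew:
    """Holds specialized agents and runs a plan by chaining their outputs."""
    def __init__(self, agents):
        self.agents = {a.role: a for a in agents}

    def kickoff(self, plan, inputs: str) -> str:
        # Run the plan in order, feeding each agent's output to the next.
        result = inputs
        for role in plan:
            result = self.agents[role].skill(result)
        return result

# Hypothetical three-agent pipeline; the lambdas stand in for LLM calls.
crew = Crew([
    Agent("researcher", lambda topic: f"notes on {topic}"),
    Agent("writer", lambda notes: f"draft based on {notes}"),
    Agent("reviewer", lambda draft: f"approved: {draft}"),
])
print(crew.kickoff(["researcher", "writer", "reviewer"], "voice agents"))
# prints "approved: draft based on notes on voice agents"
```

Frameworks like CrewAI add planning, tool use, and result checking on top, but the core win is the same: each agent's job is small enough that a model can do it reliably.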
Speaker
Alessandro Romano – Senior Data Scientist, Kuehne+Nagel
Trusting AI blindly can lead to big mistakes, especially when the stakes are high. This session looks at the key question: “how sure is sure enough?” You’ll see simple neural nets make bold mistakes. Then, you’ll learn two ways to measure how confident models are: Bayesian and ensemble methods. Finally, the session shows how to use these tools in larger models like LLMs. You’ll walk away with tools to make your AI more honest and safer.
The talk explains why AI confidence matters, and walks through two strong ways to measure it. You’ll see how to use them on both small and big models. It also shares ideas for putting these tools into real systems so your AI works better and safer in the real world.
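The ensemble idea is easy to demonstrate: train several models independently and treat their disagreement as a confidence signal. A minimal sketch, using toy linear "models" as hypothetical stand-ins for independently trained networks:

```python
import numpy as np

def ensemble_predict(models, x):
    """Average an ensemble's predictions; use their spread as uncertainty.

    High spread means the members disagree, i.e. the prediction should
    not be trusted blindly."""
    preds = np.array([m(x) for m in models])
    return preds.mean(), preds.std()

# Hypothetical ensemble: linear models with slightly different weights,
# as if each were trained on a different random seed or data split.
rng = np.random.default_rng(1)
models = [lambda x, w=w: w * x for w in rng.normal(2.0, 0.1, size=5)]

mean_in, std_in = ensemble_predict(models, 1.0)      # near familiar inputs
mean_out, std_out = ensemble_predict(models, 100.0)  # far extrapolation
print(f"x=1:   {mean_in:.2f} ± {std_in:.2f}")
print(f"x=100: {mean_out:.2f} ± {std_out:.2f}")
```

Notice how the spread grows as the input moves away from familiar territory; that widening error bar is exactly the honesty signal the talk argues production AI needs.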
Speaker
Joshua Starmer – Founder and CEO, StatQuest
Smart agents can act, think, and sense, but they also need rules. This session shows how to add fairness, openness, and control into agentic AI. You’ll dive into real issues like goals going wrong and weird behaviors that weren’t planned. Case studies show how to track agents’ lives and enforce safety rules. By the end, you’ll know how to build AI that not only works but also follows values we care about.
You’ll also learn the core ethical ideas that guide agentic systems and how to avoid common problems like system drift or misbehavior. The session gives you tools to monitor agents in real-time and build systems that stay fair and clear.
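One concrete shape the "enforce safety rules" idea can take is a guard that checks every proposed action against a rule list before letting it run, logging each decision for later audit. A hypothetical sketch, not any production system:

```python
def guarded_step(action, rules, log):
    """Run an agent's action only if every safety rule passes; log why."""
    violations = [name for name, rule in rules if not rule(action)]
    if violations:
        log.append(("blocked", action["name"], violations))
        return None
    log.append(("allowed", action["name"], []))
    return action["run"]()

# Hypothetical rules: cap per-action cost, forbid irreversible operations.
rules = [
    ("budget", lambda a: a.get("cost", 0) <= 10),
    ("reversible", lambda a: not a.get("irreversible", False)),
]

log = []
guarded_step({"name": "send_report", "cost": 1,
              "run": lambda: "sent"}, rules, log)
guarded_step({"name": "delete_db", "cost": 0, "irreversible": True,
              "run": lambda: "deleted"}, rules, log)
print(log)  # the report is allowed; the irreversible delete is blocked
```

The audit log is as important as the block itself: it is what lets you trace an agent's "life" afterwards and explain why it did (or didn't do) something.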
Speaker
Praveen Kumar GS – Senior Director, Samsung Research
Even smart agents need help from people when things get tricky. This session shows how combining human planning with AI action can boost success. You’ll look at system designs that mix human thinking with agent speed, using tools like LangGraph and Pydantic-AI. Real examples show how human checks catch errors before they grow. You’ll leave with solid plans for building these hybrid systems.
The session dives into how to mix human decisions into AI systems, and gives an overview of tools like LangGraph and Pydantic-AI. You’ll see how human guidance reduces agent mistakes, and learn good ways to build systems that balance both sides.
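The human-in-the-loop pattern itself is simple: the agent proposes, a human approves or rejects, and only approved plans execute. A minimal sketch under that assumption; the `approve` callback is a hypothetical stand-in for a real review UI such as a LangGraph interrupt:

```python
def run_with_human_checkpoint(plan_step, execute_step, approve):
    """Agent proposes a plan; a human gate decides whether it runs."""
    proposal = plan_step()
    if approve(proposal):
        return execute_step(proposal)
    return f"rejected: {proposal}"

# Hypothetical scenario: the agent drafts an email, the human gates sending.
draft = lambda: "email to all customers"
send = lambda p: f"sent {p}"

# A cautious reviewer policy: never approve anything sent to everyone.
result = run_with_human_checkpoint(
    draft, send, approve=lambda p: "all customers" not in p
)
print(result)  # prints "rejected: email to all customers"
```

The design point the session makes is where to place that checkpoint: before the cheap, reversible steps it is overhead, but before the expensive or irreversible ones it is what catches errors before they grow.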
Speaker
Deepak Sharma – Senior Machine Learning Engineer, DeepMind
AI isn’t here to replace creators; it can assist them in expanding their creative horizons. This session shows ReAct-style setups where people start ideas and AI builds first versions. You’ll learn how to use open tools like Stable Diffusion, LoRAs, and IPAdapters to make better content. The session also shows how to add feedback loops that use scoring to refine your results. Whether you’re in design, film, or ads, you’ll learn how to mix your ideas with AI’s power.
You’ll build ReAct workflows for teamwork between humans and AI, use tools like Stable Diffusion and LoRAs, and learn how to score and improve results. This session helps you scale your creative work using smart AI pipelines.
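The generate-score-refine loop at the heart of those workflows can be sketched abstractly. Everything here is hypothetical: the lambdas stand in for an image generator and an aesthetic scorer, and the threshold is arbitrary.

```python
def refine(generate, score, revise, rounds=3):
    """Feedback loop: generate a draft, score it, revise until good enough."""
    draft = generate()
    history = [(draft, score(draft))]
    for _ in range(rounds):
        if history[-1][1] >= 0.9:   # good enough, stop early
            break
        draft = revise(draft)
        history.append((draft, score(draft)))
    return history

# Hypothetical creative pipeline: each revision adds detail to the prompt,
# and the toy scorer (standing in for an aesthetic model) rewards length.
generate = lambda: "a city at night"
revise = lambda d: d + ", more detail"
score = lambda d: min(1.0, len(d) / 40)

history = refine(generate, score, revise)
print(history[-1])  # the final draft and its score
```

Keeping the full `history` rather than just the winner matters in creative work: it lets the human pick an earlier draft the scorer undervalued, which is exactly the human-starts, AI-iterates split the session describes.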
Speaker
Dhruv Nair – Machine Learning Engineer, Hugging Face
Modern cities need transport systems that can react and adapt fast to different situations. This session shows how multi-agent AI can adjust routes and schedules in real time. You’ll see how Microsoft’s setup lets agents spot issues, offer new routes, and keep passengers informed. You’ll also learn design tips for systems that stay strong under pressure. If you care about smart city travel, this session has real-world plans you can use.
You’ll learn how to build agents that keep watch over transit, how to react to problems as they happen, and how to alert people right away. It also covers Microsoft tools that make these systems scalable and reliable.
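The watch-detect-reroute-notify loop can be illustrated with a toy version. This is a made-up sketch of the pattern, not Microsoft's system: each route has a watcher that swaps in a backup when its line is disrupted and pushes an alert.

```python
def monitor_and_reroute(routes, disruptions, notify):
    """One monitoring pass: detect disrupted lines, activate backups, alert."""
    for name, route in routes.items():
        if route["line"] in disruptions:
            route["active"] = route["backup"]
            notify(f"{name}: {route['line']} disrupted, using {route['backup']}")

# Hypothetical network: two routes, each with a pre-planned backup.
alerts = []
routes = {
    "airport": {"line": "blue", "backup": "bus-7", "active": "blue"},
    "downtown": {"line": "red", "backup": "bus-2", "active": "red"},
}
monitor_and_reroute(routes, disruptions={"blue"}, notify=alerts.append)
print(routes["airport"]["active"], alerts)
```

The resilience tip hiding in even this toy version: backups are chosen ahead of time, so the real-time path only has to detect and switch, never to plan from scratch under pressure.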
Speaker
Manpreet Singh – Data & Applied Scientist II, Microsoft
For agents to work together, they need a common language to talk. The Agent-to-Agent (A2A) Protocol sets the rules, helping agents share goals, updates, and choices. This session covers the protocol basics and shows how agents team up, solve problems, and raise flags when needed. You’ll also get tips on how to use this setup in your systems.
You’ll learn what the A2A Protocol is, how it makes agent teamwork easier, and how to use it to keep your systems simple and future-ready. The talk includes examples of workflows and clear steps for smooth integration.
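The "common language" idea is essentially a shared message envelope both sides can parse. The sketch below is an illustrative schema, not the actual A2A specification: a structured message carries sender, receiver, kind, and payload, and a dispatcher routes each kind to a handler.

```python
import json

def make_message(sender, receiver, kind, body):
    """A structured envelope both agents can parse."""
    return json.dumps({"from": sender, "to": receiver,
                       "kind": kind, "body": body})

def handle(message, handlers):
    """Dispatch an incoming message to the handler for its kind."""
    msg = json.loads(message)
    return handlers[msg["kind"]](msg)

# Hypothetical exchange: a planner delegates a subtask, a worker reports back.
handlers = {
    "task": lambda m: make_message(m["to"], m["from"], "status",
                                   {"done": True, "task": m["body"]["task"]}),
}
request = make_message("planner", "worker", "task", {"task": "fetch timetable"})
reply = handle(request, handlers)
print(json.loads(reply)["body"])  # prints {'done': True, 'task': 'fetch timetable'}
```

Because every message self-describes its kind, new message types (status updates, escalation flags) can be added later without breaking existing agents, which is what "future-ready" means here in practice.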
Speaker
Avinash Pathak – Senior AI Engineer, NVIDIA
Discover how Agentic Knowledge Augmented Generation (KAG) takes RAG to the next level! While RAG improves models by retrieving documents, KAG uses intelligent agents to build and navigate knowledge graphs for deeper insights. In this session, explore techniques to transform unstructured text into structured knowledge graphs and integrate them with graph databases for richer model inputs.
See firsthand the limitations of traditional RAG and how KAG bridges those gaps. The workshop covers end-to-end workflows, including graph construction, database linking, and agent-based reasoning—all powered by LangGraph. Learn to build AI systems that don’t just retrieve information but truly understand context by leveraging dynamic knowledge graphs.
By the end, you’ll know how to deploy multi-agent systems that analyze interconnected data sources, unlocking smarter, more interpretable AI solutions.
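The core data structure behind KAG is the knowledge graph of (subject, relation, object) triples. A minimal sketch of why it beats flat retrieval: multi-hop questions become simple traversals. The triples below are invented for illustration.

```python
def build_graph(triples):
    """Store (subject, relation, object) triples as an adjacency map."""
    graph = {}
    for s, r, o in triples:
        graph.setdefault(s, []).append((r, o))
    return graph

def multi_hop(graph, start, relations):
    """Follow a chain of relations from a start node.

    This is the kind of traversal a KAG agent runs, versus the single
    flat document lookup a plain RAG retriever performs."""
    nodes = {start}
    for rel in relations:
        nodes = {o for n in nodes for (r, o) in graph.get(n, []) if r == rel}
    return nodes

# Hypothetical triples, as if extracted from unstructured medical text.
triples = [
    ("aspirin", "treats", "inflammation"),
    ("aspirin", "interacts_with", "warfarin"),
    ("warfarin", "prescribed_for", "clotting"),
]
g = build_graph(triples)
answer = multi_hop(g, "aspirin", ["interacts_with", "prescribed_for"])
print(answer)  # prints {'clotting'}
```

That two-hop question ("what condition is treated by a drug aspirin interacts with?") is exactly the kind a similarity-based retriever tends to miss, because no single document contains the full chain.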
Speaker
Arun Prakash Asokan – Associate Director, Data Science, Novartis
DataHack Summit 2025 is more than just an AI conference; it’s where the future of AI takes shape. These sessions don’t just teach; they explore what’s next. With speakers from top companies and real-world use cases across industries, you’ll get a front-row look at how AI is solving complex problems today and where it’s heading tomorrow.
By joining these sessions in person, you’ll leave with more than just notes. You’ll shift how you think about human–AI collaboration, how models can evolve, and how smart systems can be both useful and responsible. This Summit isn’t about watching trends; it’s about being part of them. If you want practical tools, fresh ideas, and a peek at what’s coming next, this is where you’ll find it.