
PyData Berlin 2025

Data science, open source and agentic AI in focus

10.09.2025 – approx. 13 min. reading time

PyData Berlin 2025 was three days packed with talks, tutorials, and tech community spirit.

From 1 to 3 September, data scientists, data engineers, and open-source enthusiasts gathered at the bcc Berlin Congress Centre to exchange ideas about the latest data analysis and AI developments. With around 500 participants, the conference was the perfect size: large enough to offer a variety of perspectives, yet small enough to facilitate genuine conversations.

Particularly impressive: many open-source initiatives presented new or improved tools in the core areas of data processing, visualisation, and app development.

Agentic AI as a leitmotif

A common theme across many presentations: agentic AI use cases are ubiquitous.

The fundamental challenge remains, however: how can LLMs be kept under control?

Numerous talks and discussions made it clear that customised evaluations and metrics are often the key to success.

Our contribution: LiteLLM in Production

For scieneers, Alina Dallmann gave a talk sharing our experiences running LiteLLM in production environments.

The title, "One API to Rule Them All? LiteLLM in Production", refers to using LiteLLM to build a unified interface for an array of LLM providers.


Alina giving a presentation on LiteLLM in Production

Many AI applications today work with several different language models: one from OpenAI today, one from Anthropic tomorrow, and a self-hosted model the day after.

The requirements are clear:

  • Models should be easily interchangeable and updatable.
  • It should be possible to switch between different providers with minimal effort.

This is precisely the area in which LiteLLM excels. It provides a unified API for many models, either as a Python library or as a proxy server with an OpenAI-compatible endpoint.
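As an illustration of the library mode (our own sketch, not from the talk slides — the `ask` helper and the model names are assumptions, and valid provider API keys such as `OPENAI_API_KEY` are assumed to be set in the environment):

```python
def build_messages(prompt: str) -> list:
    """Build a single-turn chat message list in the OpenAI format."""
    return [{"role": "user", "content": prompt}]


def ask(model: str, prompt: str) -> str:
    """Send a prompt to any LiteLLM-supported model and return the reply text."""
    # Imported inside the function so build_messages stays usable
    # even without litellm installed.
    from litellm import completion

    response = completion(model=model, messages=build_messages(prompt))
    # LiteLLM normalises all provider responses to the OpenAI schema.
    return response.choices[0].message.content


if __name__ == "__main__":
    # Switching providers is just a different model string:
    print(ask("gpt-4o-mini", "Say hello."))
    print(ask("anthropic/claude-3-5-haiku-20241022", "Say hello."))
```

The same call signature works regardless of which provider sits behind the model string, which is exactly the interchangeability described above.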


Our lessons learned from PyData 2025

  • Quick start: LiteLLM is easy to deploy using Docker and YAML configuration.

  • Cost control: Budgets can be managed centrally with automatic tracking of model costs.

  • Limits of abstraction: Very specific requirements, such as temporary budget increases for individual users, require workarounds.

  • Production readiness: Not all new features are immediately usable without restrictions, as they may still exhibit unexpected behaviour.
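To illustrate the quick-start and cost-control points above, a proxy deployment can be sketched roughly as follows; the model names, environment variables, and budget values are placeholders of ours, not from the talk:

```yaml
# config.yaml for the LiteLLM proxy (illustrative values)
model_list:
  - model_name: gpt-4o-mini
    litellm_params:
      model: openai/gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude
    litellm_params:
      model: anthropic/claude-3-5-haiku-20241022
      api_key: os.environ/ANTHROPIC_API_KEY

litellm_settings:
  max_budget: 100        # global budget in USD
  budget_duration: 30d   # budget resets every 30 days
```

The proxy is then typically started with Docker, pointing the container at this file via its `--config` flag, and exposes an OpenAI-compatible endpoint that applications can use without provider-specific code.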

Authors

Alina Dallmann, Data Scieneer at scieneers GmbH
alina.dallmann@scieneers.de

Dr. Richard Naab, Data Scieneer at scieneers GmbH
richard.naab@scieneers.de

© Copyright scieneers