Agenda

RBCN26 Online

Workshops

3 Mar

Start your RoboCon week with live, instructor-led workshops focused on hands-on learning.

In small, interactive groups, you’ll dive into real-world Robot Framework exercises and get direct guidance from expert facilitators. Sessions are either 4 h or 8 h long and sold separately from the main conference tickets.

Main Conference

4-5 Mar

Two full days of inspiring talks from speakers around the globe, covering the entire Robot Framework universe. After each live-streamed talk, our host Joe Colantonio will lead a live Q&A with the speaker.

This isn’t a typical online conference — it’s in Gather.Town, a virtual world where you can walk around, meet peers, and spark spontaneous conversations.

Tutorials

6 Mar

Our live, two-hour tutorials give you a practical, step-by-step walkthrough of a specific topic. They are interactive and online, so you can follow along, ask questions, and make use of shared resources (often a GH repo or similar).

Watch Parties

4-5 Mar

Gather locally in Watch Parties and enjoy the RoboCon talks together. Depending on the setup, this might include a hands-on workshop, a tutorial, or just casual drinks and food. Some parties may run in the morning or evening, before or after the main program.

Community Time

5 Mar

Join us in Gather.Town after the first conference day for some relaxed community fun. Meet with your avatar, chat about Robot Framework (or anything else), and enjoy games and activities together.

Start time and program TBA.

Community Day

6 Mar

This is your day to set the agenda. Community Day is a free “unconference,” where attendees propose and vote on topics at the start. It’s a vibrant, hands-on space for sharing ideas, learning, and getting direct help from experts in the ecosystem.

Workshops

Integrating AI & Robot Framework
03 Mar 08:00 am
David Fogl

Learn how to integrate AI models like ChatGPT and Gemini with Robot Framework. Build Python libraries, generate test data and documentation, and explore generative AI for smarter test automation. Hands-on, practical, and focused on real-world testing workflows.

Workshop Goal:
This workshop will show you how to integrate generative AI (OpenAI and Gemini) with Robot Framework. Together we will create a simple Python library, connect it with Robot Framework, and explore how LLMs can boost your testing workflows.

Key Objectives:

  • Learn how to connect Robot Framework with modern AI models
  • Generate test documentation and keyword descriptions automatically
  • Create dynamic test data on the fly
  • Experiment with working files as context for AI
  • Understand how to extend Robot Framework with custom Python libraries
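
The last objective, extending Robot Framework with a custom Python library, can be sketched in a few lines. This is an illustrative example only (the class and keyword names are ours, not the workshop's); the LLM client is injected as a plain callable so the sketch runs without OpenAI or Gemini credentials:

```python
# Minimal sketch of a Robot Framework keyword library that delegates text
# generation to a pluggable LLM client. The client is injected so the library
# can be exercised without real OpenAI/Gemini credentials; in practice it
# would wrap the vendor SDK call.

class AILibrary:
    """Each public method becomes a Robot Framework keyword."""

    ROBOT_LIBRARY_SCOPE = "GLOBAL"

    def __init__(self, client=None):
        # client: callable taking a prompt string and returning a string.
        self._client = client or (lambda prompt: "")

    def generate_test_data(self, description: str) -> str:
        """Ask the model for test data matching a plain-text description."""
        return self._client(f"Generate realistic test data: {description}")

    def summarize_keyword(self, keyword_source: str) -> str:
        """Produce a short documentation string for a keyword."""
        return self._client(f"Document this keyword briefly:\n{keyword_source}")
```

In a suite this would be imported with `Library    AILibrary.py`, making `Generate Test Data` and `Summarize Keyword` available as keywords.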

Knowledge Level:
Basic Python coding skills and prior experience using Robot Framework. You should be comfortable writing simple functions in Python and running Robot Framework tests. This workshop is aimed at intermediate automation testers and developers who want to explore AI-powered testing.

Workshop Agenda (09:00–17:00):

  1. 09:00 – 09:30: Introduction – AI in testing, OpenAI and Gemini models, integration options
  2. 09:30 – 11:00: Building a Python Library – Live coding session to create a simple AI-powered library
  3. 11:00 – 12:00: Integrating with Robot Framework – Connecting the library to Robot Framework and running tests
  4. 12:00 – 13:00: Lunch Break
  5. 13:00 – 14:30: Working with Generative AI – Generating test data, documentation, and test cases dynamically
  6. 14:30 – 16:00: Hands-on Exercises – Extending the examples and experimenting with custom ideas
  7. 16:00 – 17:00: Wrap-up & Q&A – Best practices, limitations, and discussion

Preparation and Technical Requirements:

  • Installed Python 3.10+
  • Installed Robot Framework
  • Basic IDE or editor (PyCharm, VSCode, etc.)
  • API keys for OpenAI and Gemini (will be provided by us for the workshop day)

Handling and Testing Cloud-Native Applications
03 Mar 08:00 am
Nils Balkow-Tychsen

In this hands-on workshop, you'll learn how to deploy a cloud-native application to a Kubernetes cluster and make it accessible—all while gaining practical experience with tools like kubectl, Helm, and Terraform.

Once the application is up and running, we'll take it a step further: turning our manual steps into automated tests using Robot Framework. You'll get to know two powerful libraries—KubeLibrary for interacting with Kubernetes, and TerraformLibrary for managing infrastructure as code.

Workshop Description

In this workshop, you'll learn how to effectively manage and test cloud-native applications with a focus on the layers beneath the application itself—deployment, runtime, and configuration.

We'll explore essential tools in the cloud-native toolbox, including kubectl, Helm, and Terraform, to understand how applications are deployed and run on Kubernetes.

Once we've established a working application in the cluster and a good grasp of its environment, we'll shift our focus to automation. You'll learn how to use Robot Framework, along with the KubeLibrary and TerraformLibrary, to turn your manual deployment and validation steps into robust, repeatable test automation.
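
As a rough illustration of the kind of validation step that gets automated here, the following standard-library sketch checks pod readiness from the JSON that `kubectl get pods -o json` prints. The field names follow the Kubernetes API; wrapping this into a Robot Framework keyword (or using KubeLibrary's built-in keywords instead) is left out:

```python
import json

# Given the JSON output of `kubectl get pods -o json`, decide whether every
# pod of the deployed application is Running with all containers ready.
# Returns the names of pods that fail the check.

def unready_pods(kubectl_json: str) -> list:
    pods = json.loads(kubectl_json)["items"]
    bad = []
    for pod in pods:
        phase = pod["status"].get("phase")
        statuses = pod["status"].get("containerStatuses", [])
        # A pod passes only if it is Running and every container reports ready.
        all_ready = bool(statuses) and all(c.get("ready") for c in statuses)
        if phase != "Running" or not all_ready:
            bad.append(pod["metadata"]["name"])
    return bad
```

An automated test would assert that this list is empty after deployment, turning the manual `kubectl` inspection into a repeatable check.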

Knowledge Level

A keen interest in cloud-native applications, Kubernetes, and Terraform is required. Existing knowledge of Robot Framework will be helpful.

Workshop Agenda

The first half of the workshop will be about understanding Kubernetes, using kubectl and Helm. We'll deploy a cloud-native application and analyse its configuration. In the second half we'll start automating using the KubeLibrary, as well as Terraform via the TerraformLibrary.

Preparation and Technical Requirements

You can bring your own Kubernetes cluster, but we can also work with a locally installed cluster via KinD (Kubernetes in Docker), https://kind.sigs.k8s.io/. Other than that, you'll need the following tools installed:

  • kubectl
  • opentofu or terraform CLI
  • Robot Framework
  • Helm

Beyond Static: Demystifying Dynamic and Hybrid Libraries in Robot Framework
03 Mar 08:00 am
Henrik Schütte
Michael Hallik

Unlock the power of dynamic and hybrid libraries in Robot Framework!

Have you ever struggled with maintaining multiple test libraries or exposing modular Python code cleanly to Robot Framework? The Robot Framework library API provides the solution!

This workshop teaches you how the dynamic library API works and shows you when and how to use it for cleaner, more scalable test automation.

You'll build a unified test library from modular Python components, learn how keywords are exposed at runtime and gain practical skills for structuring maintainable RF test frameworks.

Description

Unlock the power of Robot Framework’s lesser-known dynamic and hybrid library APIs.

As automated testing scales across large, integrated systems, test frameworks must strike a balance between clean usability for testers and modular maintainability for developers. In this workshop, we’ll explore how Robot Framework’s dynamic library API enables just that.

Imagine testing dozens of RESTful services: users, orders, payments and many, many more. You also need to deal with authentication & authorization. With static libraries, you'd need a tangle of imports and exposed implementation details in every test suite. Instead, we’ll show how to build a single, unified RF library that dynamically loads service-specific Python modules at runtime. This allows test authors to use concise, readable test cases like Create New User or Verify Payment Received through one import, regardless of where the logic lives under the hood.

You’ll learn when and why to choose dynamic (or hybrid) libraries over static ones, how to implement them cleanly, and how this approach empowers cross-functional teams to scale their test automation without creating a mess.

This workshop is designed for test engineers and automation developers who want to move beyond simple static libraries and build flexible, scalable, and maintainable keyword libraries. Through a hands-on example based on REST API testing, you’ll learn how dynamic libraries work under the hood, when and why to use them, and how to structure Python-based libraries that expose clean, unified interfaces to test authors.
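
The mechanism itself is small: Robot Framework's dynamic library API requires only `get_keyword_names` and `run_keyword`. The sketch below (with made-up stand-in service classes, not the workshop's code) shows how one library can gather keywords from several modules at runtime:

```python
# Sketch of a dynamic library that unifies keywords from several Python
# modules behind one import. The two service classes stand in for real
# per-service modules (users, orders, payments, ...).

class UsersService:                      # imagine: users_service.py
    def create_new_user(self, name):
        return f"created user {name}"

class PaymentsService:                   # imagine: payments_service.py
    def verify_payment_received(self, order_id):
        return f"payment for {order_id} verified"

class UnifiedApiLibrary:
    """One Robot Framework import; keywords are gathered at runtime."""

    def __init__(self):
        services = [UsersService(), PaymentsService()]
        # Collect every public method of every service as a keyword.
        self._keywords = {
            name: getattr(svc, name)
            for svc in services
            for name in dir(svc) if not name.startswith("_")
        }

    def get_keyword_names(self):
        # Robot Framework calls this to discover available keywords.
        return list(self._keywords)

    def run_keyword(self, name, args):
        # Robot Framework calls this to execute a keyword by name.
        return self._keywords[name](*args)
```

With this in place, a test case can call `Create New User    alice` after a single `Library` import, no matter which module implements the logic.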

Key Takeaways

  • Understand the differences between static, dynamic, and hybrid libraries in Robot Framework.
  • Learn when and why to use dynamic or hybrid libraries in real-world test automation projects.
  • See how Robot Framework discovers and executes keywords at runtime through the dynamic and hybrid API.
  • Build maintainable dynamic and hybrid libraries that pull together multiple Python modules under one import.
  • Write clean, high-level test cases while keeping the complexity hidden in well-structured code.

Knowledge Level

  • Robot Framework: Intermediate (understanding of test case syntax, keyword usage, and test suite structure)
  • Python: Intermediate (modules, classes, methods, basic Python syntax)
  • HTTP/REST APIs: Basic understanding of REST concepts and HTTP verbs (GET, POST, PUT, DELETE)
  • Python Requests Library: Preferable, but not required (we will walk through usage in examples)

Agenda

1. Introduction

  • 1.1 Why RF scripting isn't always enough
  • 1.2 Introduction to the RF library APIs: Static vs. Dynamic vs. Hybrid
  • 1.3 Taking a peek under the hood

2. Use Case: REST API Testing Framework

  • 2.1 Overview of the demo application (multi-service REST API)

3. Static Library

  • 3.1 Build the library
  • 3.2 Using the Library in Tests
  • 3.3 Limitations of the static approach

...

Talks

RoboCon Online Opening & Robot Framework Updates
Miikka Solmela
René Rohner
Pekka Klärck

Kick off RoboCon Online with Miikka and René sharing Foundation updates and community highlights, followed by Pekka Klärck presenting the latest developments and roadmap for Robot Framework.

Case Study: AI-Enhanced Test Automation Solution for a Major Bank Using Robot Framework
Yibo Wang
Hazem Khaled
Matthias Puschendorf

For a major German bank, we automated SAP testing using Robot Framework. The solution verifies data mapping, interfaces, data initialization, and regulatory reports. Test cases run in CI/CD pipelines, with results synced to Jira/Xray. As part of QA, the automation validates test artifacts and generates reports via Jira Structure. In addition, Generative AI supports both test automation and QA, including a pull request analyzer that aligns PRs with Jira stories, ensuring traceable, maintainable, and auditable testing across SAP environments.

For a major bank in Germany, we built a comprehensive test automation solution for SAP landscapes using Robot Framework — designed to enhance consistency, traceability, and speed in quality assurance within a highly regulated banking environment.

The solution automates end-to-end SAP workflows, including:

  • Data mapping validation across applications
  • Inbound and outbound interface checks
  • Verification of data initialization
  • Validation of regulatory reports

Existing test cases are automated with Robot Framework and integrated into a CI/CD pipeline. Test results are automatically synchronized with Jira/Xray test executions via the Jira API, ensuring full traceability across requirements, tests, and defects.

The automation also performs formal validation of test artefacts in Jira (test cases, test plans, and test executions) and generates comprehensive reports using Jira Structure.
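
For a feel of what the Jira/Xray synchronization involves, here is a hedged sketch of building the JSON payload for Xray's execution-import REST endpoint. The field names follow Xray's documented JSON result format; the mapping from Robot Framework statuses is our assumption, and the actual HTTP call is omitted:

```python
# Build the payload that Xray's "import execution" endpoint accepts.
# Xray Cloud expects "PASSED"/"FAILED"; Server/DC uses "PASS"/"FAIL",
# so adjust the mapping for your deployment.

def build_xray_payload(execution_key, results):
    """results: iterable of (jira_test_key, rf_status) pairs, where
    rf_status is Robot Framework's 'PASS' or 'FAIL'."""
    status_map = {"PASS": "PASSED", "FAIL": "FAILED"}
    return {
        "testExecutionKey": execution_key,
        "tests": [
            {"testKey": key, "status": status_map.get(status, "TODO")}
            for key, status in results
        ],
    }
```

The resulting dict would then be POSTed to the Xray import endpoint after each CI/CD run, keeping Jira test executions in sync with pipeline results.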

We also leverage Generative AI to support and enhance the test automation process. A key differentiator is our AI-driven quality assurance: a business-centric Pull Request Analyzer verifies whether pull requests align with corresponding Jira user stories, ensuring functional accuracy and completeness.

AI-based checks further validate both manual and automated test scripts — evaluating their structure, completeness, and clarity against internal quality standards and coding guidelines. This ensures high-quality documentation, easier maintenance, and more reliable test implementation.

In this talk, we will share our journey of integrating Robot Framework with SAP testing, CI/CD, Jira/Xray, and AI-based quality assurance. Attendees will learn how to scale Robot Framework in enterprise SAP environments and how AI can elevate both automation and documentation quality.

Back In To Queue With MQLibrary
Elout van Leeuwen
Niels Janssen

IBM MQ is the backbone of asynchronous communication in today’s complex microservice landscapes, trusted by governments and enterprises for mission-critical reliability. Yet the Robot Framework ecosystem lacked native IBM MQ support, forcing testers into fragile workarounds far from production reality. That’s why we built MQLibrary: a PyMQI-powered wrapper enabling seamless, production-like MQ interaction in automated tests. In this talk, we’ll share our journey, challenges, and how MQLibrary takes test automation to the next level.

Hi, our names are Niels Janssen and Elout van Leeuwen. We’re test automation engineers, and for the past year we have worked for the Employee Insurance Agency (UWV) in the Netherlands. UWV is known for its microservice landscape. These microservices mostly interact with the help of IBM MQ (message queues). A message queue is essentially a mailbox: a simple example could be that one application puts something in the mailbox, while another application can take things out of that mailbox.

The advantage of using IBM MQ is that both services do not need to be up and running at the same time to communicate (asynchronous communication). Instead, we use the message queue: one application can put a message on the queue and go offline, while the other application can get the message from the queue whenever it comes online. There are many more use cases and variations for message queues; this is just a simple explanation to convey the concept.
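
The mailbox analogy can be reduced to a few lines with Python's in-process `queue` module. This is purely conceptual: IBM MQ offers the same put/get model, but durable and across machines (and MQLibrary exposes it to Robot Framework via pymqi):

```python
import queue

# The mailbox: application A puts a message and can "go offline";
# application B picks it up whenever it comes online.

mailbox = queue.Queue()

def producer():
    mailbox.put("invoice-42 processed")   # application A drops a message

def consumer():
    return mailbox.get()                  # application B reads it later

producer()        # A runs and may shut down now
msg = consumer()  # B reads whenever it is ready
```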

IBM MQ is still one of the most widely used message queue middleware products within government organizations. While working with message queues at my current client, we noticed a lack of support for automating message queues with Robot Framework. Because of this shortcoming, we saw workarounds to simulate queues, for example by forcing services to use a Windows directory as the “queue” and placing “messages” into that folder. Other services were configured to read from this directory instead of a real queue, and tests/assertions were done on the files within these directories.

But of course this is not ideal, because in testing we should always strive towards a test environment that is as close to the production environment as possible, as discussed in the TMAP literature. In our search for a suitable solution we stumbled upon a Python package called ‘pymqi’, which makes automating IBM MQ possible, but it was not yet properly integrated into the Robot Framework ecosystem. This is why we created MQLibrary. MQLibrary acts as a wrapper for pymqi and makes interacting with queues possible directly from within Robot Framework.

AI-Powered Bug Classification and Creation from Robot Framework Test Reports
Rwan Al-Halwan
Mohamed Sedky

Discover how AI and Large Language Models (LLMs) can revolutionize software quality assurance by transforming Robot Framework test reports into actionable bug insights. This talk introduces an automated pipeline that classifies, summarizes, and creates bug tickets directly from Robot Framework results — integrating seamlessly with tools like TFS and Jira. Attendees will learn how to bridge testing and defect management intelligently.

Modern QA teams generate thousands of Robot Framework test logs and reports, but extracting meaningful insights from them — especially identifying and documenting bugs — remains a manual and time-consuming process.

This session presents a novel AI-driven Bug Classification and Creation framework, leveraging Large Language Models (LLMs) to automatically interpret Robot Framework outputs and turn them into structured bug reports.

Key topics covered:

  • Parsing and enriching Robot Framework test results with metadata (suite, test, logs, screenshots).
  • Using LLMs to analyze failure patterns and generate human-readable bug summaries.
  • Intelligent bug classification: functional vs. performance vs. environment issues.
  • Automated bug creation: seamlessly pushing reports to TFS, Jira, or any modern ALM tool via APIs.
  • Integration patterns and architecture design for hybrid setups (on-prem or cloud).
  • Real-world demo: converting a Robot Framework test log into a detailed, ready-to-triage bug ticket.

Takeaways:

  • Learn how to connect Robot Framework’s structured outputs with LLM reasoning.
  • See practical steps to automate defect triage and documentation.
  • Understand how this approach reduces human effort, increases accuracy, and accelerates release cycles.

This talk is ideal for QA engineers, automation leads, and AI enthusiasts seeking to bridge the gap between test automation and intelligent defect management.

RFSwarm Update
Dave Amies
Arkadiusz Kuczyński

An update on what's been happening with RFSwarm since Robocon 2024, and where we are headed with RFSwarm.

What's new with RFSwarm

  • New features that have been added
  • Contributions to RFSwarm by NiceProject, introducing Arkadiusz, who will give a short talk about his contributions and the benefits of contributing to Robot Framework ecosystem projects
  • RFSwarm tutorial videos
  • RFSwarm LinkedIn group

Where we are headed with RFSwarm

  • Planned features
  • More tutorial videos

Robot Framework SchemathesisLibrary, what it is for and why I did it?
Tatu Aalto

I built a new library to ease testing REST interfaces that have an OpenAPI schema. This talk points out how the library works and how good the tools already in the Robot Framework ecosystem are at helping library developers. I also want to highlight my personal motivation for building yet another library for users, and for me to maintain as a developer.

At RoboCon 2025, many talks pointed me in the direction of the Schemathesis project. After some reading and trying Schemathesis out on some dummy projects, I thought it looked really interesting: Schemathesis promises to automatically generate thousands of test cases from an OpenAPI or GraphQL schema and to find edge cases that break your API. It also fit me nicely, because API testing is a blind spot of mine. Although I am familiar with APIs and have done some API testing in the past, I am not very proficient with OpenAPI schemas.

When I started creating SchemathesisLibrary, I set out a few goals for myself. First, I should learn how to build a REST service with modern Python tools and how using OpenAPI schemas enables automatic test case generation. Secondly, the project should give me a better background when talking at work about building REST services and why creating an OpenAPI schema is a good idea.

Did I achieve all my goals? To be honest, only partially. But along the way I built the SchemathesisLibrary and discovered features of Schemathesis, Robot Framework, DataDriver and many other things. So although I did not reach all my goals, I found new paths to explore and learn from. In conclusion, the project can be considered a success from my perspective, and I hope it is also useful for the community.

From 7 Tools to One: How Robot Framework United Automation Across a Complex Enterprise
Haziz CISSE

As head of QA, I introduced Robot Framework as a single automation platform to replace six tools used across multiple departments. Without any API available, I integrated it with ALM-QC and built a full ecosystem for data generation, functional automation, and end-to-end testing. Over 80 users have now been trained and have automated hundreds of tests. We built a one-click installer that sets up Robot Framework and all libraries to ease installation for users. Beyond the technology, this initiative created a unified QA culture and made automation accessible to everyone. Now some people even want to use it for RPA purposes.

Automating Map Operations and Testing in QGIS with Robot Framework
Michal Pilarski

Automation in Geographic Information Systems (GIS) is vital for reliability and efficiency in spatial data processing and testing. This paper introduces a framework for automating open-source QGIS UI operations using Robot Framework, PyAutoGUI, and PyWinAuto. It enables automated map interactions - creating, editing, and validating spatial features - through reusable, readable test/task keywords. The approach streamlines testing, reduces manual effort, and improves reliability in geospatial workflows. Please check: QGISLibrary (https://pypi.org/project/QGISLibrary/)

Automation in Geographic Information Systems (GIS) is increasingly essential for ensuring consistency, reliability, and efficiency in spatial data processing and map-based software testing. This paper presents a comprehensive approach to automating user interface (UI) operations and tests within the leading open-source GIS, QGIS (Quantum GIS), using Robot Framework. The focus is placed on automating map interactions, such as creating, editing, and validating spatial features - including points, lines, and polygons - directly within the QGIS Desktop graphical environment.

The proposed automation framework integrates QGIS locators (Qt5, Qt6) as UI objects, the PyWinAuto and PyAutoGUI Python libraries to automate UI operations, and Robot Framework to design, execute, and report tests. Combining these technologies makes it possible to automate workflows that typically require extensive manual effort, such as digitizing vector layers, snapping features, setting symbology, and performing topological validation. Through Robot Framework’s structured and modular test design, each QGIS UI action - like drawing geometries (for example, drawing a river as a line on the map canvas) - can be expressed as a reusable, human-readable keyword. These keywords abstract low-level operations, enabling QGIS analysts or geographers to build complex automated scenarios without deep programming expertise.

Overall, this work contributes to the field of geospatial software engineering by providing a replicable strategy for automating tests of spatial UI workflows, especially for plugins in open-source GIS platforms. It highlights how Robot Framework streamlines quality assurance processes, accelerates development cycles, and enhances the reliability of spatial data operations. The result is a powerful, flexible testing solution that empowers GIS professionals and developers to ensure that map creation, editing, and analysis tools function correctly across diverse environments and datasets - without the need for repetitive manual validation.

RoboView
Alena
Marc David Sutjipto

Test automation with Robot Framework has become an integral part of many projects. Over time, these test automations grow, and keeping track of the numerous created keywords and file structures becomes increasingly unwieldy. To counter this, the RoboView tool has been developed with the goal of improving keyword management and providing deeper insights into one's projects to support refactoring.

Since keywords are the fundamental building blocks of tests, RoboView specifically concentrates on them. The objective of this approach is to provide users with a clear and organized display. Both tabular representations and visual views in the form of graphs are utilized, allowing users to quickly gain an overview and then conduct more detailed investigations at a granular level.

The tool will be offered as a VSCode extension, as the Robot Framework community predominantly uses extensions in this format. This approach enables us to reach the majority of users for RoboView. Additionally, providing it as a VSCode extension allows for straightforward installation and usage of our tool.

RoboMonX == Robot Framework Test Status Monitoring for Xray
Ivaylo Brüssow
Andrej Nod

RoboMonX is shaking up test automation and how it's documented: with a real-time connection between Robot Framework & Xray for Jira, test results are sent incrementally. This gives you instant transparency, early error detection, and more efficient decisions in the development process. You have only one place to look, and no more annoying switching between tools.

In modern software development, the integration of test automation into test management is a key success factor for quality assurance acceptance. In this talk we will present RoboMonX: a novel solution for the dynamic linking of test results from the Robot Framework with the test management tool Xray for Jira.

In contrast to conventional approaches, which transfer the results only at the end of the test execution, RoboMonX enables an incremental, event-driven update of the test plan in Xray. Each test case is submitted to Xray immediately after execution, providing a real-time view of test progress and results in the test management system.

RoboMonX addresses the limitations of traditional integration approaches and offers significant benefits in terms of transparency, responsiveness and efficiency of the test process. The early detection of deviations and the continuous availability of up-to-date test results provide support for informed decision making in the development process.
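
The event-driven mechanism such integrations build on is Robot Framework's listener API: the framework calls `end_test` the moment each test finishes. Below is a minimal sketch in that spirit, not RoboMonX's actual code, with the network transport injected so the example runs standalone:

```python
# A listener that pushes each test result the moment the test ends,
# instead of waiting for the whole run to finish.

class IncrementalReporter:
    ROBOT_LISTENER_API_VERSION = 3

    def __init__(self, transport=None):
        # transport: callable receiving one result dict, e.g. a function
        # that POSTs it to a test-management REST endpoint.
        self.transport = transport or (lambda payload: None)
        self.sent = []

    def end_test(self, data, result):
        # Robot Framework invokes this right after each test finishes.
        payload = {"name": result.name, "status": result.status,
                   "message": result.message}
        self.sent.append(payload)
        self.transport(payload)   # pushed immediately, not at run end
```

Attached with `robot --listener IncrementalReporter.py tests/`, every finished test would trigger one transmission, which is the basis for the real-time status view described above.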

Problem definition: Inefficient test reporting and delayed feedback of automated test case results, making it difficult to respond to defects in a timely manner.

Our approach: Development of a customized integration solution between Robot Framework and Xray using event-driven mechanisms for real-time transmission of test results.

Results: Increased transparency and dynamic visualization of test progress in Xray. Early detection of inefficiencies and potential risks in the test process. Improved decision making through timely availability of test results. Potential for increased efficiency and quality in software development.

Target Audience: The talk is aimed at professionals in the field of software development and quality assurance: test automation experts, developers, QA leads, and product owners dealing with current test management challenges and solutions.

During the talk we will present RoboMonX and the results achieved but also discuss the potential for future developments.

“Yes we can do that with Robot Framework!” - The Art of Convincing Leaders to Use Robot Framework
Rohith Ram Prabakaran

In today’s fast-paced tech landscape, we work with a wide range of tools and technologies. However, convincing non-technical stakeholders, such as business users and leadership teams, to adopt a particular tool or framework can often be challenging.

In this talk, we will explore ideas, proven techniques and practical strategies to effectively communicate and convince people to use Robot Framework, build stakeholder confidence, and drive organizational adoption.

As a Technical Pre-Sales Professional and Advisory Automation Solution Architect, I often work with global clients on automation proposals and consulting engagements. Provided the right fit, it’s relatively easy to convince technical teams to use Robot Framework, but the real challenge lies in influencing leadership and business stakeholders — who often make final decisions based on factors like cost, support, and ecosystem dependencies.

In this talk, we’ll walk through a complete process for understanding an organization’s automation landscape and effectively positioning Robot Framework as the right choice — both technically and strategically.

We’ll explore key unique selling points (USPs) of the framework, including:

  • Ease of use
  • Flexibility and adaptability
  • Support for various tech stacks
  • Extensive library ecosystem
  • Availability of ready-made libraries
  • Room for customization
  • Vast community support
  • Favorable comparison with licensed and low-code tools in the market

This session aims to equip Robot Framework enthusiasts and practitioners with practical insights on how to make a compelling pitch for using the framework.

KeyTA 2.0: The easiest way to use Robot Framework
Marduk Bolanos

KeyTA is a web app that allows anybody to get started using Robot Framework. It does this by providing a simple user interface that combines the strengths of a REPL, a spreadsheet and a web browser. As a result, it augments both the Robot Framework DSL and the execution engine with new features: auto-looping over lists, execution of individual keywords, test execution starting from any step, and many more. This talk will provide a live demo using the Browser library showcasing the advantages of using KeyTA for web automation.

KeyTA is a simple web interface designed with the goal of allowing anybody to quickly get started using Robot Framework. It is optimized for user comfort and thus aims to provide a fast feedback loop. In particular, individual keywords can be directly executed and test cases can be resumed from the step that failed.

KeyTA was born out of the necessity to enable domain experts with no programming knowledge to leverage Robot Framework to automate processes and tests. They are used to working with graphical user interfaces (e.g. Excel, SAP) and they want to stay in this familiar environment when automating tasks they usually perform by hand.

KeyTA is being developed at NRW.Bank, the state development bank of the federal state of North Rhine-Westphalia in Germany, and a member of the Robot Framework Foundation. The core of the application was released by the bank as open-source software and imbus continues its development on GitHub.

This talk will provide a live demo that should serve as an introduction for new users. A short test case will be created from scratch using the Browser library. Along the way several features of KeyTA will be illustrated and the advantages of using it for web automation will become apparent.

From Flaky Chaos to Clear Signals: PyCharm's UI Test Observatory
🔗
Denis Mashutin

The PyCharm QA team stopped chasing green and switched to an "observability over stability" approach. This talk will share our workflows for monitoring trends and tell the story of creating a 100% vibe-coded, stateless solution that builds real-time views from API requests, highlights similar failures, and draws attention to regressions.

Like many teams, we used to treat UI tests as something that must be green. For months after introducing them, the PyCharm QA team fought flakiness, managed mutes across environments, and tried to keep up with monorepo changes from hundreds of developers. We shifted to monitoring trends instead of day-to-day statuses and chose a bird’s-eye view of the system over inspecting single failures in a specific build or environment.

This talk shares our approach and the lightweight tool that enables it. The TestKeeper Service is a 100% vibe-coded solution with no FTE spent. Its stateless architecture builds views in real time from API requests, with no deployment or database maintenance. Instead of showing which tests failed, our service focuses on trends, highlights similar failures, and draws attention to cases where we should reproduce the failure manually.

Attendees will learn the following:

  • The workflows we developed to enable the observability approach and complement the tool: recognising typical patterns of trends, standard steps to reproduce the issue, and distinguishing problems in the product from defects in tests
  • Real cases from PyCharm: how we manage to spot and catch regressions against the background noise of flakiness
  • Guardrails that we use to balance extending the coverage and fixing defects in tests, in addition to our overall approach to developing new tests
  • How a stateless, zero-FTE, API-based service can deliver a significant impact, and how to apply a similar design in your context
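
The TestKeeper Service itself is an internal tool, but the "highlight similar failures" idea can be sketched in plain Python. The names and normalization rules below are illustrative assumptions, not the actual service API:

```python
from collections import defaultdict
import re

def normalize(message: str) -> str:
    """Collapse volatile details (hex ids, numbers) so that similar
    failure messages map to the same signature. Illustrative rules only."""
    message = re.sub(r"0x[0-9a-fA-F]+", "<hex>", message)
    message = re.sub(r"\d+", "<n>", message)
    return message.strip()

def group_failures(messages):
    """Group raw failure messages by signature and sort groups by size,
    so recurring patterns stand out against one-off noise."""
    groups = defaultdict(list)
    for msg in messages:
        groups[normalize(msg)].append(msg)
    return sorted(groups.items(), key=lambda kv: -len(kv[1]))
```

A stateless service in this spirit would fetch recent results through the CI server's API on each request and render such groupings on the fly, with no database of its own.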

The main goal of the talk is to provide evidence that observability over stability is a valid direction for developing a testing framework, especially for UI tests and complex systems. I want to show colleagues a better alternative to spending engineering hours on fixing flaky tests, and how a vibe-coded internal tool became a game changer in PyCharm's quality assurance infrastructure.

Bringing Robot Framework to the Factory Floor: Production Testing for Embedded Systems
🔗
Paweł Wiśniewski

Robot Framework is a powerful tool in development and QA—but its usefulness doesn't stop there. In this talk, I’ll demonstrate how we apply Robot Framework in a production environment to validate embedded hardware during manufacturing.

You'll see how we use Robot Framework to automate hardware validation during manufacturing—from the moment an assembled PCB arrives, through functional testing, to the final checks before shipping the product.

Speed up test automation: 5 levels of caching
🔗
Sander van Beek

The key to fast tests is to do fewer things. Reusing previously done work is a great way of doing fewer things without changing what your tests do. Learn about 5 levels of caching to speed up your test runs.

“Let me quickly fix that test before I log off for the day". Before you know it, it's 20:00, you're still running tests, you're really hungry for some inexplicable reason, you see the tests doing the same thing over and over again, you're ready to throw your laptop out of the window, if it would only open but even the window is being difficult (your phone is blowing up), the doorbell rings, and aaargh!

Bad test performance is a universal annoyance. "Quickly" running some tests can take forever. But it can also be really hard to figure out how to speed things up. The result? Blankly staring at your screen, getting distracted, and annoyance slowly building up until you ~rage quit~ give up for the day.

To rid myself of this frustration, I make my tests faster. Fundamentally, there are only 2 core principles to speeding up your tests without impacting their contents:

  • Do things simultaneously — Maximize CPU usage
  • Do fewer things — Reduce CPU time

Caching is a way of doing fewer things. In Robot Framework, there are 5 levels of caching:

  1. Test variable: store a value and reset it when the test finishes.
  2. Suite variable: store a value and reset it when the test suite finishes.
  3. Global variable: store a value and reset it when the test run finishes.
  4. Pabot variable: store a value, share it with parallel test runners, and reset it when all tests finish.
  5. Cache file: store a value, share it with parallel test runners and with future test runs, and reset it when the expiration time has passed.
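
Levels 1 to 4 map onto Robot Framework's built-in Set Test Variable, Set Suite Variable, and Set Global Variable keywords plus PabotLib's parallel value keywords. Level 5 has no built-in keyword; a minimal file-based cache could be written as a small Python keyword library along these lines (a simplified sketch; real parallel runners would also need file locking):

```python
import json
import time
from pathlib import Path

CACHE_FILE = Path("cache.json")  # shared by runners on the same host

def get_cached(key, compute, ttl_seconds=3600):
    """Return the cached value for `key` if it is younger than
    `ttl_seconds`; otherwise call `compute`, persist the result for
    future test runs, and return it."""
    cache = {}
    if CACHE_FILE.exists():
        cache = json.loads(CACHE_FILE.read_text())
    entry = cache.get(key)
    if entry and time.time() - entry["stored"] < ttl_seconds:
        return entry["value"]
    value = compute()
    cache[key] = {"value": value, "stored": time.time()}
    CACHE_FILE.write_text(json.dumps(cache))
    return value
```

Imported as a library, a function like this becomes a regular keyword, so an expensive setup step (say, fetching an auth token) runs once and is reused across runs until the TTL expires.
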

Moving away from global resource files by utilizing AI: a case study
🔗
Silken Kleer

Do you use an "import everything" file throughout your codebase? Do you encounter maintenance headaches as a result? Do you have good intentions of addressing this but are having trouble making it a priority? This talk moves beyond the theory on why these files are an anti-pattern, and provides strategies and insights from a real-world example of eliminating these global resource files. By leveraging AI, we can reduce the grunt work involved and make this previously overwhelming refactoring challenge much more achievable.

Drawing from a real refactoring project, this talk provides concrete techniques for breaking up global resource files with the assistance of AI.

Topics covered:
  • How we got here: Background on why codebases often implement this practice.
  • Motivation: Why reducing reliance on global resource files is desired.
  • AI memory simulation: Track keyword and variable definitions to aid in import redistribution.
  • IDE integration: Combine diagnostic tools with AI to guide refactoring.
  • Context management: Handle AI limitations when working across many files.
  • Import cleanup: Detect and address unnecessary imports AI may introduce.
  • Practical validation: Balance thoroughness with practicality when checking AI output.

Statistics on the codebase size and complexity will be provided, helping participants assess how these approaches will scale to their own projects.

Most importantly, participants will be inspired to tackle similar work in their own codebases.

Robot Framework RPA and AI Agents: A Powerful Combination
🔗
Joshua Gorospe

The field of automation is constantly evolving. There is currently a common misconception that LLMs and AI agents are only useful for the vibe-coding trend that has captured the attention of the tech industry since the start of 2025 and sparked many discussions across social media platforms. There are also 60+ agent projects and 4000+ MCP projects tracked on pulse.com today. My talk will demonstrate the powerful combination of Robot Framework's ecosystem with locally running AI agents and LLMs.

This presentation will give a high-level walkthrough and demonstration of how Robot Framework RPA can be combined with local AI agents, MCP, and various LLM types to enhance their capabilities. It will cover the following main topics:

  • Brief introduction to the open source AI Agent, LLM, and MCP ecosystem landscape.
  • Overview of Codename Goose (https://block.github.io/goose/), an open source AI Agent framework project developed by Block (Jack Dorsey's company).
  • How Ollama (https://ollama.com/) can be used to set up a locally running, private LLM instance on your own hardware.
  • Walkthrough/demonstration of the basic design of using Robot Framework RPA to automate sequential tasks with Codename Goose on a local LLM that can run on anyone's hardware.
  • Overview of the Codename Goose Docker container and of situations where some models are too big and demanding for personal hardware.
  • Walkthrough of the basic design and building blocks of using Robot Framework RPA to automate parallel tasks with parallel running Codename Goose Docker containers connected to a cloud AI product such as Google Gemini.

The public GitHub repo containing all of the automation demonstrations mentioned above: https://github.com/jg8481/Robot-Framework-AI-Agent-Datadriver
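
As a flavor of the local-LLM building block: Ollama serves a REST API on port 11434, and a helper like the hypothetical sketch below could be wrapped as a Robot Framework keyword. The model name and the injectable `send` hook are illustrative choices, not code from the talk's repo:

```python
import json
from urllib import request

def ollama_generate(prompt, model="llama3",
                    host="http://localhost:11434", send=None):
    """Ask a locally running Ollama instance to complete `prompt` and
    return the reply text. `send` is injectable so the function can be
    exercised without a live server."""
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode()
    if send is None:
        def send(url, data):
            # Default transport: POST to the local Ollama endpoint.
            req = request.Request(
                url, data=data,
                headers={"Content-Type": "application/json"})
            with request.urlopen(req) as resp:
                return resp.read()
    raw = send(f"{host}/api/generate", payload)
    return json.loads(raw)["response"]
```

Registered in a Python keyword library, a function like this could be called from tasks orchestrated by Robot Framework RPA.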

Database Library Update
🔗
Andre Mochinin

The Database Library has had multiple releases in the last two years with quite a lot of changes. The talk includes an overview and details on the most important ones.

What’s New in RobotDashboard: Smarter Insights, Improved Interfaces, Enhanced Usability
🔗
Tim de Groot

Over the past year, RobotDashboard has evolved from a simple visualization tool into a mature open-source project. In this session, I’ll share lessons learned from building and maintaining this tool, highlight new features like custom database integrations, built-in server capabilities, customizable layouts, additional pages, and an improved interface and performance. I will also demonstrate how these improvements enable truly data-driven insights such as spotting flaky tests, identifying long-running suites, and detecting regressions earlier.

Over the past year, RobotDashboard has evolved from a simple visualization tool into a mature open-source project that helps teams turn Robot Framework test results into actionable insights. In this session, I will share lessons learned from building, maintaining, and growing RobotDashboard. This includes challenges faced when supporting multiple Robot Framework versions, incorporating community feedback, and deciding which features to implement and prioritize. These experiences offer valuable insights into maintaining an open-source project, balancing user needs with technical constraints, and ensuring long-term usability and adoption.

I will also highlight the new features that make RobotDashboard more powerful and flexible than ever. These include custom database integrations, which let teams store test results in a way that fits their infrastructure; built-in server capabilities, enabling real-time access to both the database and the dashboard; customizable layouts, allowing teams to tailor the dashboard to their needs; and an improved interface, providing faster and more intuitive navigation of complex test results.

The session will also show how these enhancements translate into deeper, data-driven testing insights. Attendees will see how RobotDashboard can help spot flaky tests, identify long-running suites, detect regressions earlier, and analyze trends across multiple test runs. By combining historical data with the new features, teams can move from simply reporting test outcomes to understanding patterns and making better testing decisions.

Through practical demonstrations, real-world examples, and lessons learned from maintaining an open-source tool, this talk will provide attendees with both inspiration and actionable takeaways for improving their testing workflows. You will leave with a clear understanding of how to extract more value from your test results using RobotDashboard.

Tutorials

These tutorials are complimentary for all ticket holders and will take place between the Community Days, starting at 13:00 CET. You’re welcome to drop in and out as needed, but you’ll get the most value—and a complete learning experience—by staying for the full session.

AI-Aided Software Development – Becoming an AI-Ready Engineer
🔗
06 Mar 11:00 am
Ismo Aro

Turn your software development skills AI-ready. This two-hour tutorial takes you from idea to production, teaching how to use modern AI tools effectively in real-world software engineering.

Turn your software development skills AI-ready. This one-day hands-on training takes you from idea to production, teaching how to use modern AI tools effectively in real-world software engineering.

Why Join: Modern engineering is evolving fast — and AI is already part of the workflow. This training helps you become an AI-Ready Engineer: a developer who knows how to guide, supervise, and collaborate with AI tools to accelerate development safely and transparently.

What You’ll Learn

How to use AI-assisted coding tools like Cursor, GitHub Copilot, and Continue.dev in daily development
Safe and effective prompting, guard rail design, and collaborative working techniques that improve code quality, safety, and productivity
Building and deploying a fullstack application (frontend, backend, and CI/CD pipeline)
Integrating testing and documentation as part of an AI-driven workflow

How It Works: Participants will work hands-on throughout the day, developing a fully functional social media application called Yapster, including:

React + TypeScript frontend
Express + TypeScript backend
Automated testing with Robot Framework
CI/CD deployment to Azure

Duration: One full day, including two main sessions: (1) From Idea to Production, (2) From Backlog to Production.

Outcome: By the end of the training, participants will:

Understand how to integrate AI tools into existing workflows
Know how to maintain guard rails for safety and quality
Be ready to apply AI-assisted development practices in real projects.

Powered by NorthCode: Human creativity. AI acceleration.

  • Write acceptance tests in Robot Framework that clearly describe behavior
  • Guide an AI tool (e.g., GitHub Copilot) to implement a feature starting from tests
  • Commit tests and code to GitHub; let GitHub Actions run regression checks, verify new functionality, and deploy safely to production

Key Takeaways

  • AI tools need structure and feedback loops — human-readable tests provide both
  • Robot Framework bridges natural language and code, improving shared understanding
  • ATDD becomes a collaboration model for humans and machines
  • A concrete workflow combining Copilot, Robot Framework, and CI/CD for reliable, AI-augmented delivery

Who Should Attend

  • Developers and testers exploring AI-assisted development
  • QA engineers investing in living documentation
  • DevOps/platform engineers integrating AI into CI/CD
  • Anyone interested in practical human–AI collaboration in software delivery

Tutorial on Automation with Image Recognition Libraries - SikuliLibrary (and ImageHorizonLibrary)
🔗
06 Mar 01:00 pm
Hélio Guilherme

This tutorial is about using image recognition libraries to automate tasks or testing when it is costly or difficult to obtain object identifiers in the applications under test. We will use the SikuliLibrary and ImageHorizonLibrary libraries to automate applications whose internal components we know nothing about. This is what is called Black Box Testing. We will practice automating a login in a VMware/VirtualBox Windows system and then performing some actions.

Contents:

  • About Image Recognition Libraries - SikuliLibrary (and ImageHorizonLibrary)
  • Knowing the Java-based SikuliX IDE and its ability to run Robot Framework test cases.
  • SikuliLibrary: -- Installation -- Planning the Test Suites file structure -- Defining Test Cases and Resources -- Running Test Suites
  • Combining SikuliLibrary keywords with ImageHorizonLibrary
  • Practice automating a login in a VMware/VirtualBox Windows system and then performing some actions.

About: Image recognition libraries are used to automate tasks or testing when it is costly or difficult to obtain object identifiers in the applications under test. These libraries use computer vision (OpenCV) to match reference images with a copy of the computer screen, and Optical Character Recognition (OCR) for text extraction. With these techniques and operating system actions like mouse movements and keyboard strokes, the system can replicate the actions of a human user.
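
To make the matching idea concrete, here is a deliberately simplified, pure-Python toy version of template matching. Real libraries such as SikuliX use OpenCV's matchTemplate with a similarity threshold rather than exact equality:

```python
def find_template(screen, template):
    """Return the (row, col) of the first exact occurrence of a small
    2D template inside a larger 2D screen grid, or None if absent.
    A toy model: production image matching is fuzzy, not exact."""
    th, tw = len(template), len(template[0])
    for r in range(len(screen) - th + 1):
        for c in range(len(screen[0]) - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None
```

Once the reference image's location is known, the library moves the mouse there and clicks, which is how a login button can be pressed without any object identifier.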

SikuliLibrary is a Robot Framework library that provides access to the SikuliX Java API. It uses Robot Framework Remote to interface Python functions with the SikuliX Java libraries, so the Java Runtime Environment must be installed on your system. -- diagram from the project: https://github.com/MarketSquare/robotframework-SikuliLibrary/blob/master/docs/img/architecture.png -- The usual workflow for a Test Case or Task is:

  • Import SikuliLibrary and start its server
  • Define the location for the reference images
  • Start the Application Under Test (AUT)
  • Interact with the AUT by actions of mouse, keyboard, matching of reference images on the screen, and Optical Character Recognition (OCR) for text extraction.
  • Complete the workflow by stopping the server.

SikuliX IDE:

  • Installation SikuliX IDE, which requires Java
  • Creating and Running a Test Case with SikuliX IDE

SikuliLibrary:

  • Installation
  • Planning the Test Suites file structure
  • Defining Test Cases and Resources
  • Running Test Suites

ImageHorizonLibrary is a Robot Framework library, based on pyautogui and other Python modules, and optionally opencv-python for adjusting the image recognition precision. This library does not have Optical Character Recognition (OCR) keywords. Similarly to SikuliLibrary, it uses reference images to interact with the AUT on the screen. We can say that the usual workflow is the same as the one with SikuliLibrary, except for the server and OCR parts.

Combining SikuliLibrary keywords with ImageHorizonLibrary: -- Installation of ImageHorizonLibrary -- Adjusting Test Suites to use SikuliLibrary and ImageHorizonLibrary simultaneously (conflicting keyword names) -- Running Test Suites

Practice automating a login in a VMware/VirtualBox Windows system and then performing some actions.

Watch Parties

This year, we’re introducing a special way to gather locally in Watch Parties and enjoy the RoboCon talks together. Your host will also arrange additional program to make the most of the day. Depending on the setup, this might include a hands-on workshop, a tutorial, or just casual drinks and food. Some parties may run in the morning or evening, before or after the main program.

We’ll publish the list of companies hosting Watch Parties closer to the event. If you’d like to host one, get in touch!

Community Day

This is your day to set the agenda. Community Day is a free “unconference,” where attendees propose and vote on topics at the start. It’s a vibrant, hands-on space for sharing ideas, learning, and getting direct help from experts in the ecosystem.

To cover time zones, we’ll host two sessions:

  • EMEA Community Day – 09:00 CET (~4h)

  • Americas Community Day – 5:00 PM ET (~4h)

Both take place in Gather.Town, our interactive online world where you join with your avatar, meet others, and keep discussions flowing in a fun, spontaneous way.