Begin your week with a deep dive. Full-day workshops are practical, hands-on training sessions led by renowned Robot Framework experts. This is your chance to master new skills and expand your knowledge in a focused, immersive setting.
Free “unconference” in a classroom setting, where attendees shape the agenda at the start of the day. It’s the perfect chance to meet people before the main conference, build on the workshops from the day before, and connect with top experts from around the globe.
Work on hands-on projects, join discussions, or simply get your questions answered.
Two full days filled with inspiring talks from speakers across the globe, spanning the entire Robot Framework universe. Mingle with fellow stars, ask questions after each presentation, and meet the speakers directly at the Speakers’ Corner.
Bio Rex auditorium in central Helsinki offers a spectacular setting for our talks, including spacious areas for connecting with fellow participants and speakers during breaks.
Join us on Thursday evening at Laulumiehet (10-minute walk from Bio Rex) for the RoboCon Dinner — a relaxed evening of good food, conversation, and celebration. The venue is reserved exclusively for us, with a buffet dinner (menu to be announced) and a drink included.
We’ll also host a short awards moment to recognize our Ambassadors and RFCP Partner of the Year, before continuing the evening in a laid-back, social atmosphere until 23:00. Tickets are sold separately.
When the final talk wraps up, the celebration begins. Join us at the official after-party, hosted by our event sponsor VALA, at their office in Helsinki.
This legendary gathering is the perfect chance to relax, connect with the community, share stories, and build new friendships in an informal atmosphere.
Admission is included with your conference ticket, and complimentary food and drinks will be provided.
Still in Helsinki after the conference? Join us for our special Saturday social event. Usually a sightseeing tour with a local guide, it’s a relaxed way to discover Helsinki and enjoy a few last memories with your fellow attendees before your journey home.
Details for this year's social event will be revealed shortly.
Workshops run from 09:00–16:00, with a complimentary lunch mid-way. Included in the Full Ticket or available separately. Location TBA.
Learn the internals of the Robot Framework Browser library in this advanced hands-on workshop.
Topics include architecture, scopes (browser, context, page), selectors, promises, and tracing.
You'll build and use custom keywords with JavaScript and Python, extend the library via plugins, and explore advanced features.
A full‑day, hands‑on workshop exploring the Robot Framework Browser library powered by Playwright—with a focus on deep architecture, keyword extensions in JavaScript and Python, and advanced automation techniques.
Join us to elevate your web automation skills through the modern, high-performance Browser library. This workshop gives you the knowledge and practical experience to both use and extend Browser like a pro. Whether you're automating complex workflows or building custom plugin keywords, you'll gain the expertise to architect future‑ready automation frameworks.
Day‑Long Agenda:
Architecture Deep Dive
Installation & Initialization
pip and rfbrowser init
Core Keyword Usage & Logging
Selector Strategies & Advanced Waiting
text, css, and xpath selectors
Promise To, Wait For, and Network‑idle handling for reliable waits
Extending Browser with JavaScript Plugins
Python Plugin‑API & Using Browser from Python
Advanced Keywords in Action
Real‑World Workshop Labs & Troubleshooting
robotframework‑browser‑advanced‑workshop repo
Deployment, CI/CD & Tips from the Core Team
Who Should Attend?
What You’ll Take Away:
Turn natural-language scenarios into executable Robot Framework tests and tasks with AI agents, the Model Context Protocol (MCP), and self-healing. In this hands-on workshop, we combine robotframework-aiagent with RF-MCP for contextual assistance and robotframework-selfhealing-agents for resilient execution. Practice AI-assisted test generation and execution, image/document/OCR workflows, and structured extraction. Connect it all in VS Code with RobotCode and GitHub Copilot. Build a multi-model pipeline to generate, run, and heal tests.
The gap between business intent and automated tests and RPA is closing fast. This workshop shows a practical end-to-end path to AI-powered Robot Framework development that translates human-readable scenarios into robust, maintainable test suites.
We start by wiring up robotframework-aiagent for natural-language test execution and AI-assisted test generation. You’ll integrate image analysis, document processing, OCR, and structured data extraction so agents can reason about UI states, PDFs, screenshots, and logs. Next, we introduce the Model Context Protocol (MCP) via RF-MCP to provide agents with rich project context (keywords, resources, env data) and enable semantic keyword matching, interactive step-by-step execution, and state-aware testing with intelligent suggestions and error recovery.
To make your suites resilient in the real world, we bring in robotframework-selfhealing-agents to automatically adapt locators, retry strategies, and flows—so flaky UI changes don’t break your pipeline. You’ll also configure multi-model workflows (OpenAI, Anthropic, Mistral, self-hosted cloud, and local models) and learn when to route tasks to specialized models for token efficiency and quality. In addition, we cover how to test AI applications with non-deterministic input and output using semantic assertions and tolerance windows. Finally, we connect everything inside VS Code with the RobotCode extension’s AI features and GitHub Copilot optimizations—so authoring, refactoring, and debugging are all AI-assisted.
Explore robotframework-platynui, a new cross-platform library for desktop UI automation with Robot Framework. This tutorial offers hands-on experience in setting up and utilizing PlatynUI to create robust, maintainable tests for desktop applications across Windows, macOS, and Linux.
Description
This practical session introduces robotframework-platynui, a cross-platform UI testing library for desktop applications. You'll learn to set up your environment, use the PlatynUI Spy tool, create effective locators, and build your own test cases. Advanced topics include structuring test suites, remote test execution, and solving common automation challenges.
Topics Covered
Target Audience
QA engineers, automation testers, and developers familiar with Robot Framework and Python, looking to expand into desktop UI testing.
Learning Outcomes
Format
Includes short lectures and guided exercises using a sample application. Participants should bring a laptop with Python installed and basic Robot Framework knowledge.
OpenApiTools v1.0 introduced a CLI tool to easily generate a Robot Framework library based on an OpenAPI specification: a library with keywords for all endpoints of the target API. These keywords automatically generate the required data, so there's no more need to specify all the request data for each request you want to make in your tests. Of course, there are options to override specific values and ways to control the generated values.
In this workshop we'll dive into OpenApiTools; we'll generate our custom library to test our API and we'll look into tweaking / tuning it to our liking.
In addition to working with the generated library, we'll dig into the other libraries that are part of OpenApiTools: OpenApiLibCore and OpenApiDriver. We'll look at their keywords, their use cases, and how they all fit together. And the session wouldn't be a deep dive if we didn't get into advanced usage of all these libraries by leveraging the mappings file (what it is, how to write it, and what you can do with it).
A demo project will be available for the exercises during the workshop, but bringing your actual repo / project / OpenAPI spec and using that for the exercises is encouraged.
Lessons Learned: Participants should be able to apply what they learned in the workshop to their own project, allowing them to generate a keyword library for their target API and set up any required constraint mappings. Participants will also learn how the generated library relates to OpenApiLibCore and OpenApiDriver and the keywords those libraries offer, allowing them to effectively use all the tools in OpenApiTools to validate their target API.
Preparation and Technical Requirements
You'll need an account on the hosting platform; this allows you to fork the repo and do the exercises.
The repo that will be shared for this workshop contains a (VS Code) devcontainer configuration based on Docker. In order to use this devcontainer you'll need:
The Dev Containers plugin, to install it directly from within VS Code.
The devcontainer contains everything needed to run the workshop exercises.
If for some reason you cannot run devcontainers, please contact me before the workshop so we can work out a setup that will work for your situation.
Sometimes, a simple sequential pipeline isn’t enough for advanced automation needs. Learn how to distribute and orchestrate Robot Framework workloads by applying the fundamentals of BPMN 2.0 process modeling. Discover how to use executable BPMN models to coordinate complex, end-to-end test and task automation scenarios with Operaton BPM, Robot Framework, and the surrounding open-source ecosystem.
Do you want to run distributed Robot Framework workloads—whether for long-running test cases, complex workflows spanning multiple applications, or integrated test and task automation? Maybe you need to combine manual and automated testing steps.
Join us for a hands-on workshop on using BPMN 2.0 process modeling to orchestrate both test and task automation. In this full-day session, you’ll learn how to model, execute, and integrate process-driven automation using open-source tools like Operaton BPM and Robot Framework.
Learn BPMN 2.0 modeling. Gain hands-on experience with freely available tools to design and model BPMN 2.0 processes. You’ll learn key concepts—tasks, events, gateways, and flows—and how to build clear, executable process diagrams for automation.
Execute BPMN process models. Discover how to deploy and execute BPMN models using open-source software. Learn the setup, configuration, and execution steps needed to run both test and task orchestration workflows efficiently.
Design BPMN-orchestrated end-to-end automation suites. Create process-driven automation that goes beyond simple test cases. Learn how BPMN can orchestrate complex, end-to-end workflows that combine system tasks, service calls, and Robot Framework tests.
Integrate Robot Framework with BPMN execution. Explore how to seamlessly connect Robot Framework tests with BPMN process execution. See how open-source tools can bridge business process automation and test automation for unified orchestration.
In this workshop you learn how to extend Robot Framework in various ways. We start from the more advanced parts of the library API and also cover various other topics such as the listener API, the parsing API, execution and result models, and so on.
In this workshop you will learn how to extend Robot Framework through its various Python interfaces. The first half of the workshop is dedicated to the more advanced parts of the library API, such as automatic argument type conversion and the dynamic library interface that is used, for example, by SeleniumLibrary. During the second half you will get familiar with other extension and integration possibilities, such as the listener API, the parsing API, how to modify tests dynamically before or during execution, and how to analyze results.
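As a small taste of the library API, a Robot Framework library can be a plain Python class whose methods become keywords; type hints on those methods are what drive the automatic argument type conversion mentioned above. This is a minimal, illustrative sketch (class and keyword names are invented for the example, not workshop material):

```python
# A minimal Robot Framework library: any plain Python class works.
# Type hints let Robot Framework convert string arguments from test
# data into real Python types automatically (int, float, bool, ...).
class CalculatorLibrary:
    ROBOT_LIBRARY_SCOPE = "GLOBAL"  # one instance shared by all tests

    def add_numbers(self, first: int, second: int) -> int:
        """Adds two integers; RF converts '1' and '2' to ints first."""
        return first + second

    def should_be_positive(self, value: float):
        """Fails the test if the given value is not positive."""
        if value <= 0:
            raise AssertionError(f"{value} is not positive")
```

In test data this would be imported with `Library    CalculatorLibrary.py`, and a call like `Add Numbers    1    2` would receive real integers thanks to the type hints.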
This workshop is for you if you already know the basics of using Robot Framework, including the basics of the library API, and want to take your skills to the next level. These skills make it easier to adapt the framework to your own needs in different contexts. In addition to knowing Robot Framework, participants are expected to know the basics of Python programming, such as functions, modules, and classes.
The workshop is 100% hands-on: no slides, learn by doing. In addition to learning from the person who designed these powerful APIs, you have a chance to ask hard questions about Robot Framework directly from its creator.
Join our advanced workshop and improve your usage of SeleniumLibrary. As intermediate and advanced users, we will explore topics like browser configuration, advanced debugging, extending the library, dealing with shadow DOM, and WebDriver BiDi. Together we will work through different scenarios and exercises for these topics.
Advanced tips & tricks and a peek into the future of SeleniumLibrary. We will cover a broad range of topics, with exercises and examples for each. We will try to include both Firefox and Chrome/Chromium where applicable. The topics include:
To participate you will need a laptop with local administrator / root privileges. We will be using the uv package manager. You should install a recent version of it, along with git, Chrome/Chromium, Firefox and their drivers. More details about prerequisites will be provided later.
Do your Robot Framework tests take too long to finish? Learn how to speed them up with Pabot! In this hands-on tutorial, you’ll discover how to use Pabot’s powerful parameters to control execution order, parallelism, and scheduling—so your test runs are faster and smarter. We’ll cover setup, configuration, and best practices for turning long test suites into efficient, scalable automation pipelines.
As your Robot Framework test suites grow, so does execution time. Slow feedback loops can hurt productivity and block continuous delivery. Pabot—an open-source parallel executor for Robot Framework—solves this problem by enabling safe, efficient parallel test execution.
In this interactive, hands-on workshop, we’ll dive deep into how to use Pabot effectively and intelligently.
What participants will learn:
We’ll also explore practical exercises demonstrating how changes in Pabot parameters affect runtime and execution behavior. Participants will learn how to tune performance dynamically for different environments (local vs. CI).
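To build intuition for why scheduling parameters matter, here is a hypothetical sketch in plain Python (not Pabot's actual implementation) of the classic longest-processing-time heuristic a parallel runner can use to balance suites across workers, given runtime estimates from a previous run:

```python
import heapq

def balance_suites(durations, processes):
    """Assign suites to workers so the slowest worker finishes early.

    durations: dict of suite name -> estimated runtime in seconds
    processes: number of parallel workers
    Returns (makespan, assignment) where assignment maps worker -> suites.
    """
    # Longest-processing-time-first: take suites in descending duration
    # order and always hand the next one to the least-loaded worker.
    workers = [(0.0, i) for i in range(processes)]  # (load, worker id)
    heapq.heapify(workers)
    assignment = {i: [] for i in range(processes)}
    for suite, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        load, wid = heapq.heappop(workers)
        assignment[wid].append(suite)
        heapq.heappush(workers, (load + secs, wid))
    makespan = max(load for load, _ in workers)
    return makespan, assignment

suites = {"login": 120, "checkout": 300, "search": 90, "admin": 240, "api": 60}
makespan, plan = balance_suites(suites, processes=2)  # makespan: 420 seconds
```

With two workers the 810 seconds of sequential runtime shrink to a 420-second wall-clock run; tuning the worker count and ordering is exactly the kind of lever the workshop exercises explore.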
By the end of the workshop, attendees will be able to:
Format:
This session suits Robot Framework users, test automation engineers, and CI/CD practitioners who want to take their automation to the next level by combining speed, reliability, and control.
Key takeaways:
In this hands-on workshop, you'll fuse AI (RAG) with Robot Framework to help with API tests. We will build a system from scratch that reads OpenAPI docs to automatically generate RF code snippets and create intelligent test data for positive, negative, and edge cases. And API testing is just one example: you can use the knowledge to build RAGs for any topic. This intermediate-level session is perfect for QA engineers and developers eager to innovate their testing process. Let's automate the automation!
Imagine an intelligent assistant that reads an OpenAPI specification and helps write your Robot Framework tests. In this intensive, hands-on workshop, you won't just imagine it—you'll build it from the ground up.
This workshop takes you on a practical journey to fuse the power of Retrieval-Augmented Generation (RAG) AI models with the reliability of Robot Framework. You will leave not just with theory, but with a working prototype and the skills to revolutionize your API testing workflow.
Your Step-by-Step Learning Journey:
Build the AI's Brain: The Knowledge Base You'll start by tackling the core problem: making technical documentation understandable to an AI. You will learn how to take a standard OpenAPI/Swagger file, intelligently break it down ("chunking"), and convert it into numerical representations ("embeddings"). You'll then store these in a vector database, creating a powerful, searchable knowledge base that forms the foundation of our system.
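To make the chunking step concrete, here is a minimal sketch in plain Python (function and field names are illustrative) that splits an OpenAPI spec into one text chunk per endpoint operation, which is what you would then embed and store in the vector database:

```python
def chunk_openapi(spec):
    """Split an OpenAPI spec dict into one text chunk per operation.

    Each chunk bundles everything the model needs to reason about a
    single endpoint: HTTP method, path, summary, and parameter names.
    """
    chunks = []
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            params = ", ".join(p["name"] for p in op.get("parameters", []))
            chunks.append(
                f"{method.upper()} {path}\n"
                f"summary: {op.get('summary', '')}\n"
                f"parameters: {params}"
            )
    return chunks

spec = {
    "paths": {
        "/pets": {
            "get": {"summary": "List pets",
                    "parameters": [{"name": "limit"}]},
            "post": {"summary": "Create a pet"},
        }
    }
}
chunks = chunk_openapi(spec)  # two chunks, one per operation
```

Chunking per operation (rather than per fixed character count) keeps each embedding semantically self-contained, which is what makes retrieval precise later on.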
Master Prompt Engineering for Test Generation With a knowledge base in place, you will learn the art of "Prompt Engineering"—crafting precise instructions for the AI. You'll move beyond simple questions to designing sophisticated prompts that command the model to query your knowledge base and generate clean, syntactically correct, and ready-to-use Robot Framework code, complete with keywords from the RequestsLibrary.
Execute, Debug, and Refine A generated test is useless until it runs. You will take the AI-generated .robot files and execute them against a real API. This is where your testing expertise comes in. You'll learn to analyze the results, debug any issues using Robot Framework's detailed logs, and add your own critical assertions to ensure the tests are not just running, but are truly validating the API's functionality.
Generate Intelligent and Diverse Test Data Finally, you will push the boundaries by using the AI to automate another tedious task: test data creation. You will command the model to generate a wide array of JSON payloads based on the API schema—covering positive scenarios, negative cases (e.g., missing fields, incorrect data types), and critical edge cases. You will then learn how to integrate this data into data-driven tests using Robot Framework's Test Template feature.
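The payload-variation idea can be sketched without any AI at all; the model simply automates at scale what a schema-driven generator like this does (a minimal sketch with illustrative names, not the workshop's actual code):

```python
import copy

def negative_payloads(valid, schema):
    """Derive negative-test payloads from one valid example.

    For each required field, produce one payload with the field missing
    and one with a wrong-typed value, mirroring what we ask the model
    to generate from the API schema in the workshop.
    """
    cases = []
    for field in schema["required"]:
        missing = copy.deepcopy(valid)
        del missing[field]
        cases.append(("missing_" + field, missing))

        wrong = copy.deepcopy(valid)
        wrong[field] = 12345 if isinstance(valid[field], str) else "oops"
        cases.append(("wrong_type_" + field, wrong))
    return cases

schema = {"required": ["name", "age"]}
valid = {"name": "Ada", "age": 36}
cases = negative_payloads(valid, schema)  # 4 payloads: 2 per required field
```

Each `(case_name, payload)` pair then becomes one row of a data-driven test driven by Robot Framework's Test Template.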
By the end of this workshop, you will have built a complete, end-to-end system that automates the most time-consuming parts of API test creation, freeing you to focus on high-level test strategy and exploratory testing.
Hardware requirements: RAM: 16 GB; free disk space: 15 GB (SSD preferred).
Participants will have to install the following before the workshop: VS Code + RobotCode, Python (3.9+), Robot Framework (7.2+), Ollama, and the Llama 3 8B model.
As we open Robocon 2026 here in Helsinki, we’ll start by celebrating what makes this gathering unique — people. In an era where AI can generate, automate, and optimize almost everything, human connection and collaboration are more important than ever. This session sets the tone for the conference by exploring how curiosity, openness, and shared learning continue to shape the Robot Framework community and the wider testing world. Together, we’ll reflect on one key question: as AI grows smarter, how do we stay connected, collaborative, and human?
RoboCon is more than just a yearly event - it is a spark for inspiration and development. It ignites ideas, encourages new perspectives, and creates space for both personal and professional growth. In this talk we share our story: how attending the conference lit that spark for us and how it led us to become contributors in the community. We'll reflect on the impact of being involved and why your voice matters! Everyone has something valuable to offer, and we'll explore how you can take part, no matter your background or experience.
Write plain English → Get executable Robot Framework tests
RF-MCP executes every step in real RF runtime before generating code. No hallucinated keywords - AI only uses keywords from your libraries.
Features:
For engineers wanting faster test creation without losing control over what gets generated.
Transform plain English test descriptions into working Robot Framework tests through actual execution, not simulation. RF-MCP executes every step in live Robot Framework runtime before generating code.
✅ Real Execution, No Hallucination
Unlike AI code generators, RF-MCP validates through actual execution:
🛠️ Comprehensive Tool Set
Planning & Orchestration:
analyze_scenario, recommend_libraries, manage_library_plugins
Execution:
execute_step, execute_flow, manage_session
Discovery:
find_keywords, get_keyword_info, get_session_state
Generation:
build_test_suite, run_test_suite
🔌 Debug Attach Bridge
Unique McpAttach library enables debugging live RF sessions. Connect to your IDE's debug session to reuse in-process variables and imports.
📚 Library Support
🎯 Key Features
Native RF Context: Persistent per-session Namespace + ExecutionContext with runner-first dispatch for correct argument parsing.
DOM Filtering: Three levels reduce AI token usage while preserving automation-relevant elements.
Semantic Matching: Understands intent - "click submit" maps to the right keyword.
Plugin System: Extend with custom libraries via entry points or manifest files.
Frontend Dashboard: Optional Django-based UI for monitoring sessions and activity.
📦 Installation
pip install rf-mcp # Core
pip install rf-mcp[web] # Browser/Selenium
pip install rf-mcp[mobile] # Appium
pip install rf-mcp[frontend] # Dashboard
🔧 Quick Setup
VS Code/Cline:
{
"servers": {
"robotmcp": {
"type": "stdio",
"command": "python",
"args": ["-m", "robotmcp.server"]
}
}
}
💡 Why RF-MCP?
Apache 2.0 licensed. Active development.
GitHub: github.com/manykarim/rf-mcp
Learn how I sped up the Robot Framework’s test suite to find ⅔ of the bugs in ¼ of the time!
We want to run our tests as often and early as possible, so we immediately notice when we break something. However, many teams can't test as often as they'd like because their tests take hours or even days to run.
Innovative testing methods can identify most errors with just a fraction of test execution time, thereby significantly accelerating our testing. I’ll show you how to use AI to find most bugs in a fraction of the test runtime. With this we can give feedback on new bugs much more frequently.
Learn how I used mutation testing to introduce hundreds of bugs into the Robot Framework’s own code and how I applied an AI-based testing approach to the Robot Framework’s test suite to find ⅔ of these bugs in ¼ of the time.
Running tests as often and as early as possible is the dream of many agile testers. Ideally, after every commit and on all branches, so that we immediately notice when we break something.
But what if my tests take hours or even days? For many, the dream of an accelerated testing process seems unattainable or at least impractical.
However, research shows a possible solution: One approach to providing quick feedback even with slow tests is to run a small subset that is fast enough. This is worthwhile if this subset finds a majority of the defects in a fraction of the time. For example, 80% of defects in 10% of the time it takes to execute all tests. We need innovative methods to accomplish this, but they also need to be practically feasible.
In this presentation, I introduce an approach that can be implemented with little effort in existing projects to uncover most defects with minimal testing effort and without changing anything about your tests!
The method uses large language models (AI) and clustering to create an effective smoke test suite. This can be used for arbitrary changes, to identify defects across the entire code base with minimal testing effort. Thus, providing quick feedback on new bugs.
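As a rough intuition for the clustering step, consider this toy sketch: each test is represented as a vector (in the talk, an embedding from a language model; here, hand-made toy vectors), similar tests are grouped, and one representative per group forms the smoke suite. This is an illustrative simplification, not the presented research method:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def smoke_suite(vectors, threshold=0.9):
    """Greedy clustering: a test joins an existing cluster if it is
    similar enough to that cluster's representative; otherwise it
    starts a new cluster. The representatives form the smoke suite."""
    reps = {}  # representative test name -> its vector
    for name, vec in vectors.items():
        if not any(cosine(vec, rv) >= threshold for rv in reps.values()):
            reps[name] = vec
    return list(reps)

tests = {
    "login_ok":      [1, 1, 0, 0],
    "login_fail":    [1, 1, 0, 0],   # near-duplicate of login_ok
    "search_basic":  [0, 0, 1, 1],
    "search_paging": [0, 0, 1, 1],   # near-duplicate of search_basic
}
suite = smoke_suite(tests)  # two representatives instead of four tests
```

Redundant tests collapse into one representative each, which is how a small subset can still catch most defects.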
I’ll present the fundamentals, explain how it works and show research results about the effectiveness of the technique.
Robot Framework is widely used for web/app testing—but what about aerospace? In this talk, I’ll show how we applied RF in satellite data-processing projects under strict traceability and standards. By linking DOORS requirements with test scripts and auto-generating deliverables, we ensure compliance, consistency, and full auditability. Learn how disciplined automation can thrive in regulated engineering domains and inspire new possibilities beyond conventional use cases. An operational project (Galileo) will be presented.
In aerospace systems, the cost of failure is extremely high, and testing must adhere to rigorous traceability, standards, and documentation. In this session, we will:
Present the challenges of test automation in aerospace domains compared to typical web/app environments.
Introduce a methodology where Robot Framework (RF) sits at the center of test automation, integrated tightly with DOORS (or similar requirements management tools) using custom conversion scripts.
Show how bidirectional sync between requirement definitions, test procedures, and implementation keeps everything consistent and auditable.
Demonstrate supporting tools such as RobotCode for streamlined script development and RobotMetrics for enriched reporting and dashboards.
Provide real-world case studies from missions where this setup proved scalable and robust. Specifically, the setup for the Galileo Control Center will be explained:
Discuss the lessons learned, pitfalls, and recommendations for applying such an approach in regulated engineering and safety-critical industries.
Attendees will leave with concrete ideas for adopting Robot Framework beyond conventional use, and how to build automation ecosystems that respect standards, traceability, and disciplined software engineering practices.
Accelerate performance testing with Locust Script Generation and Execution via Robot Framework — an automated, keyword-driven approach to create, parameterize, and run scalable load tests. Seamlessly handle dynamic correlation, and generate detailed performance reports — empowering QA teams to validate APIs and user journeys with zero manual scripting and maximum reusability.
This solution empowers QA teams to streamline performance testing by integrating Locust with Robot Framework. Through a keyword-driven design, testers can define performance scenarios in simple, readable formats while the tool automatically generates Locust scripts, manages parameterization, and executes scalable load tests.
It supports dynamic correlation where the script writer specifies which field values (e.g., tokens, IDs) to extract and from which task they should be reused. Once defined, these correlations are dynamically handled at runtime, ensuring consistent and accurate test flows without hardcoded data.
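Dynamic correlation of this kind can be illustrated with a small sketch in plain Python (class and field names are invented for the example): a value extracted from one task's response is stored under a key and substituted into later requests at runtime, so no token or ID is ever hardcoded:

```python
import re

class CorrelationStore:
    """Holds values extracted from earlier responses so that later
    requests can reference them as {{placeholder}} templates."""

    def __init__(self):
        self.values = {}

    def extract(self, name, response, field):
        # In a real run 'response' would be parsed JSON; a dict here.
        self.values[name] = response[field]

    def resolve(self, template):
        # Replace every {{name}} placeholder with its stored value.
        return re.sub(r"\{\{(\w+)\}\}",
                      lambda m: str(self.values[m.group(1)]),
                      template)

store = CorrelationStore()
login_response = {"token": "abc123", "user_id": 7}
store.extract("auth_token", login_response, "token")
url = store.resolve("/api/orders?token={{auth_token}}")
# url is "/api/orders?token=abc123"
```

The script writer only declares the extraction ("take `token` from the login task as `auth_token`"); the runtime does the substitution, which is what keeps test flows consistent without hardcoded data.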
By reducing manual effort in scripting while maintaining full control over test logic, this framework enhances reusability, maintainability, and CI/CD readiness. Built-in performance reports provide detailed insights into response times, concurrency, and bottlenecks — enabling QA teams to validate APIs and user journeys with efficiency and precision.
Studies reveal that over 90% of the top 1 million websites have detectable accessibility issues. Despite its importance, accessibility testing is often neglected, primarily because teams do not have the bandwidth, budget, domain knowledge, and technical skills to deliver an accessible product within the available timeframes. This is where an automated approach to accessibility can greatly help. This session will discuss how we addressed these challenges with Robot Framework and axe-core together and saved the day against all odds.
Despite its importance, accessibility testing is often neglected. Studies reveal that over 90% of the top 1 million websites have detectable accessibility issues. Users relying on assistive technologies frequently encounter barriers. Manual accessibility tests are slow, inconsistent, and require specialized knowledge. As projects become larger and release cycles shorten, teams struggle to test and resolve accessibility issues. Rework costs increase dramatically when accessibility defects are found late in the development lifecycle, and many teams lack the bandwidth, expertise, or budget to perform comprehensive accessibility testing manually. This gap leads to undetected accessibility issues, excluding users and creating rework late in the development process. However, automation offers a practical, scalable solution. This session will walk through accessibility automation with Robot Framework in real time on a live website, showing how accessibility can be implemented throughout the development lifecycle. The demonstration will cover three aspects:
Integrating Accessibility Tools during Development phase
Before even writing automated tests, accessibility issues can be detected early during development using the Axe-Core plugin in Visual Studio, as well as Storybook integration for component level accessibility checks. The session will show how developers can spot violations such as missing alt text, or incorrect ARIA attributes while coding. This helps to prevent future rework of the code.
Integrating Accessibility Tools in Testing
Afterwards, the session will explain how Robot Framework can automate accessibility testing by integrating tools like axe-core and Lighthouse. Accessibility checks are written directly into functional and regression test cases, which makes them a part of daily testing without additional manual effort.
Accessibility in CI/CD Pipelines (Process Level)
Next, a live code example of triggering accessibility tests in a CI/CD pipeline will be shown. This includes a demonstration on how detected issues are tracked and linked to development tasks, ensuring continuous validation and preventing regressions before deployment.
The session will also present how Robot Framework generates clear, visual reports, categorizing accessibility issues by severity and providing recommendations for fixes, which help developers, testers, and designers to work efficiently and maintain accessibility compliance throughout the software lifecycle.
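A severity-categorized report can be sketched as a simple post-processing step over the JSON results that axe-core produces (the `violations`, `impact`, `id`, and `nodes` fields follow axe-core's result format; the summary shape itself is illustrative):

```python
from collections import defaultdict

def summarize_violations(axe_results):
    """Group axe-core violations by severity ('impact') and count
    how many page elements ('nodes') each rule affected."""
    summary = defaultdict(list)
    for v in axe_results.get("violations", []):
        summary[v.get("impact", "unknown")].append(
            {"rule": v["id"], "elements": len(v.get("nodes", []))}
        )
    return dict(summary)

results = {
    "violations": [
        {"id": "image-alt", "impact": "critical",
         "nodes": [{}, {}]},          # two images missing alt text
        {"id": "color-contrast", "impact": "serious",
         "nodes": [{}]},              # one low-contrast element
    ]
}
report = summarize_violations(results)
```

Grouping by impact lets a report surface critical blockers first, which is what makes the results actionable for developers, testers, and designers alike.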
In conclusion, the true takeaway is that accessibility automation with Robot Framework is not just about finding violations or issues, it’s about creating a sustainable system where new technology actively supports diversity and usability.
This talk is about using image recognition libraries to automate tasks or testing when it is costly or difficult to obtain object identifiers in the applications under test. We will talk about the libraries SikuliLibrary and ImageHorizonLibrary to answer these questions: what, why, where, when, and how to use them?
Contents:
Image recognition libraries are used to automate tasks or testing when it is costly or difficult to obtain object identifiers in the applications under test. These libraries use computer vision (OpenCV) to match reference images against a copy of the computer screen, and also optical character recognition (OCR) for text extraction. With these techniques and operating system actions like mouse movements and keyboard strokes, the system can replicate the actions of a human user.
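The matching idea behind both libraries can be shown with a deliberately naive sketch: slide a small reference image over the screen image and look for the position where the pixels agree. Real libraries use OpenCV correlation with a similarity threshold rather than exact matching; this stdlib-only toy exists purely to show the principle:

```python
def find_template(screen, template):
    """Return (row, col) of the top-left corner where 'template'
    exactly matches inside 'screen', or None if it never matches.
    Both arguments are 2D lists of pixel values."""
    sh, sw = len(screen), len(screen[0])
    th, tw = len(template), len(template[0])
    # Slide the template over every possible position on the screen.
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None

screen = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 6, 0],
]
button = [[9, 8],
          [7, 6]]
pos = find_template(screen, button)  # the "button" sits at row 1, col 1
```

Once the position is known, the library moves the mouse to it and clicks, which is all a human-style interaction needs. The exact-match requirement is also why reference-image resolution matters so much in practice.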
SikuliLibrary is a Robot Framework library that provides access to the SikuliX (http://sikulix.com/) Java API. It uses Robot Framework Remote to interface Python functions with the SikuliX Java libraries, so it requires a Java Runtime Environment installed on your system (architecture diagram: https://github.com/MarketSquare/robotframework-SikuliLibrary/blob/master/docs/img/architecture.png). The usual workflow for a test case or task is:
SikuliLibrary defines 78 keywords, which can be grouped as: Configuration, Actions (Mouse, Keyboard), Assertions and Verifications, and Information.
SikuliLibrary is operating system agnostic, but care must be taken regarding the resolution of the reference images, which needs to be consistent for reproducible test or task executions.
ImageHorizonLibrary is a Robot Framework library based on pyautogui and other Python modules, optionally using opencv-python to adjust the image recognition precision. This library does not have optical character recognition (OCR) keywords. Similarly to SikuliLibrary, it uses reference images to interact with the AUT on the screen. The usual workflow is the same as with SikuliLibrary, except for the server and OCR parts.
ImageHorizonLibrary defines 34 keywords, organized into groups similar to SikuliLibrary's.
Like SikuliLibrary, ImageHorizonLibrary is operating system agnostic, and the same care must be taken with the resolution of the reference images. One advantage over SikuliLibrary is that it does not require a Java Runtime Environment.
The future development of SikuliLibrary depends on the progress of the original Java project SikuliX, whose maintainer has suspended development for now. However, Raimund Hocke, https://github.com/RaiMan, has prepared a pure-Python integration of SikuliX named sikulix4python, which would make the library easier to use and more universal. It is also possible to use ImageHorizonLibrary keywords alongside, since they complement SikuliLibrary, although the development of ImageHorizonLibrary is itself stale.
Like every open source tool, Robot Framework is sometimes hard to fit into common corporate tool evaluations. There is no company behind it offering enterprise support. And even if governance of the Robot Framework core may be achievable, the tool does not exist in isolation: it relies on ecosystem projects that are entirely free.
In this talk, Markus is going to show ways to mitigate the risks of adopting a free open source tool, how open source tool providers can improve the governance of their projects, and how companies can contribute in different ways for their own direct benefit.
In recent years, customers of imbus have pushed for open source solutions. RoboSAPiens and KeyTA are tools that have already been presented at past RoboCons; both were funded by a single customer. A newer approach is for companies to jointly fund specific ecosystem projects like PlatynUI, or to pay a developer to contribute Robot Framework features such as custom settings.
In this talk I would like to show how open source projects can deal with a few challenges in getting through companies' procurement processes, and that there are more ways than "sponsorship" and "spending free time" to support Robot Framework. This is meant as an inspirational talk showing additional options for community peers who want to convince their employer to invest more in open source and need a few suggestions on how to do it.
In the end, I would like to use the attention to promote a new open-source and governance workgroup that collects the expertise of the community and establishes suggestions for Robot Framework and ecosystem projects.
Medusa is a tool to easily parallelize execution of test suites.
Medusa uses suite metadata to start suites in parallel dynamically while avoiding resource usage conflicts. Suites can be assigned to sequentially executed stages and can be run multiple times with different variables, even in parallel.
This talk will give you an overview of how Medusa works, how you can use it and how my employer INSYS icom uses it to save time and code for daily testing of the industrial routers we produce.
If you have a lot of tests that take a non-negligible amount of time, you can benefit greatly from running them in parallel. In the case of INSYS icom's daily regression tests, parallelization allows us to run more than 50 hours of sequential tests in less than 5 hours.
One big problem we encountered for our use case is that many of our test suites require exclusive access to specific resources, for example a specific device that is being tested or a specific port that is used while testing. This makes it impossible to simply run everything in parallel since there would be countless resource usage conflicts.
At first we used pabot with a manually written ordering file specifying which suites run in parallel in which order, but with more than 1000 tests across many suites, this quickly gets unmanageable and still takes a lot more time than it needs to. We attempted to automate generating the ordering file but finally had to concede that dynamically avoiding resource conflicts is just not what pabot was designed to do. To close this gap, we designed our own tool.
Medusa was designed specifically around the idea of resource dependencies. A resource can be anything from a device on the network (specified, e.g., as a hostname or IP address) to a specific port that is bound in a suite, or even a limited physical resource such as a DSL connection that cannot be used multiple times in parallel.
Every suite can declare resource dependencies; Medusa then automatically determines at runtime which suites can be started in parallel, which maximises time efficiency while preventing conflicts.
In addition to dependencies, a suite is also assigned to a specific stage. Stages are simply groups of suites which run sequentially, while all the suites within a stage are executed in parallel as described above. This allows you to still control ordering where necessary.
Finally, Medusa allows you to run suites multiple times with different variables, even with different dependencies or stages, making it an extremely flexible tool that also helps reduce code duplication in cases where you want an entire suite to be used to test multiple targets or in multiple variations.
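The scheduling idea can be sketched in plain Python (a simplified greedy model of the behaviour described above, not Medusa's actual implementation; the function, suite names, and data shapes are invented for illustration):

```python
def schedule(suites):
    """suites: list of (name, stage, {resources}).
    Returns, per stage, batches of suites that can run in parallel
    without sharing any resource. Stages run sequentially; this
    greedy first-fit model stands in for the real tool's dynamic
    runtime decisions."""
    plan = []
    for stage in sorted({s[1] for s in suites}):
        waiting = [s for s in suites if s[1] == stage]
        while waiting:
            batch, busy = [], set()
            for name, _, resources in waiting:
                if not (resources & busy):  # no conflict within this batch
                    batch.append(name)
                    busy |= resources
            waiting = [s for s in waiting if s[0] not in batch]
            plan.append((stage, batch))
    return plan

suites = [
    ("router_a", 1, {"dev-a"}),
    ("router_b", 1, {"dev-b"}),
    ("dsl_a",    1, {"dev-a", "dsl"}),
    ("dsl_b",    1, {"dev-b", "dsl"}),
    ("cleanup",  2, {"dev-a", "dev-b"}),
]
for stage, batch in schedule(suites):
    print(stage, batch)
```

In this toy run the two router suites start together, the two DSL suites are serialized because they share the single DSL connection, and the cleanup suite waits for stage 2.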
Since everything is executed in separate processes, Medusa makes use of rebot to merge results of all suites at the end of execution. That way you still get seamlessly combined results even with massive parallelization.
To still allow full flexibility for using standard robot options, Medusa is designed like a wrapper that accepts nearly all of robot's options and simply forwards them to the processes running the single suites. This allows you to still use your own listeners, pre-run modifiers and more.
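In outline, such a wrapper builds one robot command per suite and a final rebot merge, forwarding the user's options untouched (an illustrative sketch, not Medusa's real code; the flags are standard robot/rebot options, while the paths and function names are invented):

```python
def robot_command(suite, output_dir, extra_options=()):
    """Command line to run one suite in its own process, writing a
    uniquely named output XML. Arbitrary robot options (listeners,
    pre-run modifiers, ...) are simply forwarded as given."""
    return ["robot", "--output", f"{output_dir}/{suite}.xml",
            *extra_options, f"suites/{suite}.robot"]

def rebot_merge_command(suites, output_dir):
    """Combine all per-suite output XMLs into one seamless report
    at the end of the parallel run."""
    return ["rebot", "--outputdir", output_dir,
            *(f"{output_dir}/{s}.xml" for s in suites)]

cmd = robot_command("login", "results", ["--listener", "MyListener.py"])
print(cmd)
```

Each command list would be handed to a separate subprocess; because options are forwarded verbatim, the wrapper stays transparent to anything standard robot supports.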
Medusa will be released as open source software ahead of RoboCon 2025 and I look forward to seeing how you will use it!
Automation, like pancakes, can be delightful… if you follow the right process. Join our live cooking session on stage where we'll bake two different recipes to get the same result: a delicious pancake. There is much to learn about automation from the process of making pancakes.
A psychologist and a musician enter a test automation conference. What happens next will make you "flip" out: they're gonna cook together.
Because there are different kinds of learners in the world, we will present key concepts of automation: Libraries, Resources, Tests/Tasks, Reports, single vs. parallel execution, Listeners, and more, using the art of making pancakes to demonstrate these concepts.
Kick off the second day with Robot Framework’s lead developer, Pekka Klärck, as he shares the latest core updates and upcoming plans for the framework. Get a glimpse into what’s new and what’s next in the world of Robot Framework.
Connecting Robot Framework to other systems often requires extra effort. What if you could do it visually, with workflows that react to events like a change on a website, an IoT sensor alert, or a new customer record in your database? This talk introduces n8n-nodes-robotframework, an n8n community node that runs Robot Framework inside visual workflows. Use it to connect Robot Framework’s capabilities with prebuilt n8n nodes such as AI analysis, messaging, and database updates.
Automation is rarely an isolated activity. Tests, bots, and scripts deliver the most value when they interact with other tools and services.
n8n-nodes-robotframework enables Robot Framework tasks to run inside n8n’s visual workflows, so you can connect testing and RPA with hundreds of n8n nodes for APIs, databases, messaging, and AI.
Example workflow:
All of this is configured visually in n8n.
The session includes a live demonstration of Robot Framework running inside an n8n workflow. It uses a custom-made n8n Docker image that includes Robot Framework and the Browser library for easy setup. Everything runs self-hosted, ensuring privacy and full control.
This talk explores AI-driven automation in Robot Framework through an intelligent Agent that enhances testing with capabilities like Agent.Do and Agent.Check. By leveraging large language models and visual understanding, the Agent interprets test intentions, interacts with GUI elements, and performs visual assertions. It also explores how this can lead toward more autonomous test execution, where the Agent can understand and carry out complete testing goals through another keyword dedicated to this purpose.
This session explores the integration of artificial intelligence into test automation through a novel AI Agent built on top of Robot Framework.
The agent introduces intent-level automation, allowing testers to describe what to test instead of how to test it.
With new intent-based keywords such as Agent.Do and Agent.Check, the framework interprets high-level testing goals, transforming them into concrete test actions and assertions in real time.
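To make the idea of intent-level keywords concrete, here is a deliberately naive rule-based sketch in plain Python (the actual Agent uses an LLM rather than pattern rules; the function, action names, and phrasings are hypothetical):

```python
import re

def interpret_intent(intent):
    """Map a natural-language test intent to a concrete action tuple,
    the way an Agent.Do-style keyword might after consulting an LLM.
    A couple of regex rules stand in for the model's reasoning."""
    rules = [
        (r"click (?:the )?(.+?) button",
         lambda m: ("click", m.group(1))),
        (r'type "(.+)" into (?:the )?(.+?) field',
         lambda m: ("type", m.group(2), m.group(1))),
        (r"check (?:that )?(.+) is visible",
         lambda m: ("assert_visible", m.group(1))),
    ]
    for pattern, build in rules:
        m = re.fullmatch(pattern, intent.strip().lower())
        if m:
            return build(m)
    raise ValueError(f"Cannot interpret intent: {intent!r}")

print(interpret_intent("Click the login button"))  # -> ('click', 'login')
print(interpret_intent('Type "alice" into the username field'))
```

The returned action tuple would then be executed against the GUI; the LLM's job in the real system is to produce this mapping for arbitrary phrasings, not just ones matching fixed patterns.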
At its core, the agent combines the reasoning capabilities of Large Language Models (LLMs) with visual understanding models.
It can interpret a tester’s intent, identify and interact with GUI elements, and verify expected outcomes visually without relying on locator-based definitions.
This enables a more resilient, self-adaptive testing approach suitable for rapidly evolving user interfaces.
LLM Client Layer:
A modular interface supporting multiple LLMs to interpret and execute test intents while staying fully compatible with Robot Framework logs.
VLM (Visual Language Model):
Merges a vision model with an OCR to extract visual context, semantics, and element coordinates from screenshots.
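One step of that pipeline can be sketched simply (an illustrative stand-in, assuming the OCR stage yields text plus bounding boxes; the function name and sample data are invented): locate the on-screen element whose text matches the intent and return its clickable centre point.

```python
def locate(ocr_results, target_text):
    """ocr_results: list of (text, x, y, width, height) boxes as an
    OCR engine might return for a screenshot. Returns the centre
    coordinates of the first box whose text contains the target,
    or None if no box matches."""
    for text, x, y, w, h in ocr_results:
        if target_text.lower() in text.lower():
            return (x + w // 2, y + h // 2)
    return None

boxes = [("Username", 40, 100, 120, 20),
         ("Password", 40, 140, 120, 20),
         ("Log in",   40, 190, 80, 30)]
print(locate(boxes, "log in"))  # -> (80, 205)
```

With coordinates derived from what is visibly on screen, the Agent can click and verify without any locator-based element definitions.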
The roadmap explores advancing the Agent toward higher levels of autonomy and adaptive decision-making in test execution.
The session includes a live demonstration of the current prototypes, emphasizing reproducibility, measurable gains, and practical AI-in-testing outcomes.
GenAI tools are changing the world of software development. Many enterprises report more code being generated with GenAI tools than not. As Robot Framework users, we are a part of the global software ecosystem, and the ecosystem is changing. What is our and the framework's future role in the world of increasing prompted development, and how does it affect our learning? I delve into how we learn programming, how learning and doing are starting to diverge in modern software development, and the implications of all of these.
I am Arttu Taipale, an Automation Developer working on both RPA and test automation projects at Knowit Solutions. I use Robot Framework and Python as my main tools in my daily work. For four years now, I have delivered Robot Framework training, and with the arrival of GenAI tools I have noticed a shift in the learning process. The main drive for learning software development is to create software, but we are shifting to a world where knowing the fundamentals is no longer a requirement for producing software. That doesn't mean knowledge of software is becoming redundant - quite the contrary. Currently we face a problem: we are trying to teach multiplication to students holding calculators. How will software development education proceed from here?
Large enterprises of the IT world are declaring an age of AI and code-by-prompt. The degree to which this will happen is yet unclear, but the change is imminent. Open source tools have long evolved according to the needs of their users, but under the assumption that code is written by people.
Welcome to my talk, where I discuss the future of learning in software development, as well as the implications LLMs might have for the long-term direction of software tool development.
Join a lively discussion on current and emerging topics shaping the Robot Framework ecosystem. Panelists TBA.
PlatynUI is an open-source Robot Framework library that makes desktop UI automation feel consistent on Windows, Linux, and macOS. The talk introduces what PlatynUI is, why it was created, and the ideas behind it—portability across desktops, readable tests, and habits that reduce flakiness. We’ll walk through a compact demo using the library’s tooling to explore and interact with applications and outline how to try PlatynUI in existing suites without disruption.
coming soon....
When I first started with Robot Framework, I had no idea what I was doing. Over the years, those messy experiments turned into a career built on curiosity, mistakes and learning. In this talk, I’ll share how that journey shaped me. From early tests to mentoring others and what I’ve learned about growth, persistence and how boredom and failure can lead to better work.
How do you go from knowing nothing about test automation to becoming a senior and later a lead with Robot Framework? And what can that journey teach about quality, learning and growth?
In this talk, I will share how one small step into Robot Framework grew into a five-year career shaped by curiosity, mistakes and persistence: from building my first unstructured UI tests to creating and improving larger automation suites, refactoring broken setups and mentoring others. Every phase has taught me something new, both technically and personally.
We will look at how understanding comes from doing, how getting bored is an important part of evolving, and how failures turn into better practices. New environments and challenges have a way of changing how we see quality, and I will share what that has looked like for me.
Attendees will leave with real examples, lessons learned and practical ideas to strengthen their own work with Robot Framework.
One phase, one lesson and one improvement at a time.
Join us for the RoboCon finale with prizes, appreciation, and a big thank-you to everyone who made this year’s event a success.
Join a lively discussion on current and emerging topics shaping the Robot Framework ecosystem. Panelists TBA.
Community Day is a free “unconference” in a classroom setting, where attendees shape the agenda at the start of the day. It’s the perfect chance to meet people before the main conference, build on the workshops from the day before, and connect with top experts from around the globe.
Whether you want to work on hands-on projects, join deep discussions, or simply get your questions answered, this is the place.
Free for all ticket holders, with limited spots; enrollment required. Starts at 09:00, location TBA.