RoboCon Online

Agenda & Talks

Workshops

3 Mar

Start your RoboCon week with live, instructor-led workshops focused on hands-on learning.

In small, interactive groups, you’ll dive into real-world Robot Framework exercises and get direct guidance from expert facilitators. Sessions are either 4 h or 8 h long and sold separately from the main conference tickets.

The joining link will be shared privately with participants.

Main Conference

4-5 Mar

Two full days of inspiring talks from speakers around the globe, covering the entire Robot Framework universe. After each live-streamed talk, our host Joe Colantonio will lead a live Q&A with the speaker.

This isn’t a typical online conference — it’s in Gather.Town, a virtual world where you can walk around, meet peers, and spark spontaneous conversations.

Tutorials

6 Mar

Our live, two-hour tutorials give you a practical, step-by-step walkthrough of a specific topic. They are interactive and online, so you can follow along, ask questions, and make use of shared resources (often a GitHub repository or similar).

Join Live Stream

Watch Parties

4-5 Mar

Gather locally in Watch Parties and enjoy the RoboCon talks together. Depending on the setup, this might include a hands-on workshop, a tutorial, or just casual drinks and food. Some parties may run in the morning or evening, before or after the main program.

Community Time

5 Mar

Join us in RoboCon Online Space (Gather.Town) after the first conference day for some relaxed community fun. Meet with your avatar, chat about Robot Framework (or anything else), and enjoy games and activities together.

Join Here 👇

Join RoboCon Online in Gather

Community Day (Free)

6 Mar

This is your day to set the agenda. Community Day is a free “unconference,” where attendees propose and vote on topics at the start. It’s a vibrant, hands-on space for sharing ideas, learning, and getting direct help from experts in the ecosystem. Everyone is welcome!

Join RoboCon Online in Gather

Workshops

Integrating AI & Robot Framework
Mar 03, 08:00 AM (UTC) | 7 hrs
By David Fogl

Learn how to integrate AI models like ChatGPT and Gemini with Robot Framework. Build Python libraries, generate test data and documentation, and explore generative AI for smarter test automation. Hands-on, practical, and focused on real-world testing workflows.

Workshop Goal:
This workshop will show you how to integrate generative AI (OpenAI and Gemini) with Robot Framework. Together we will create a simple Python library, connect it with Robot Framework, and explore how LLMs can boost your testing workflows.

Key Objectives:

  • Learn how to connect Robot Framework with modern AI models
  • Generate test documentation and keyword descriptions automatically
  • Create dynamic test data on the fly
  • Experiment with working files as context for AI
  • Understand how to extend Robot Framework with custom Python libraries

Knowledge Level:
Basic Python coding skills and prior experience using Robot Framework. You should be comfortable writing simple functions in Python and running Robot Framework tests. This workshop is aimed at intermediate automation testers and developers who want to explore AI-powered testing.

Workshop Agenda (09:00–17:00):

  1. 09:00 – 09:30: Introduction – AI in testing, OpenAI and Gemini models, integration options
  2. 09:30 – 11:00: Building a Python Library – Live coding session to create a simple AI-powered library
  3. 11:00 – 12:00: Integrating with Robot Framework – Connecting the library to Robot Framework and running tests
  4. 12:00 – 13:00: Lunch Break
  5. 13:00 – 14:30: Working with Generative AI – Generating test data, documentation, and test cases dynamically
  6. 14:30 – 16:00: Hands-on Exercises – Extending the examples and experimenting with custom ideas
  7. 16:00 – 17:00: Wrap-up & Q&A – Best practices, limitations, and discussion

Preparation and Technical Requirements:

  • Installed Python 3.10+
  • Installed Robot Framework
  • Basic IDE or editor (PyCharm, VSCode, etc.)
  • API keys for OpenAI and Gemini (will be provided by us for the workshop day)
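The workshop will use its own materials, but as a taste of what such an AI/Robot Framework bridge can look like, here is a minimal hedged sketch. All names (`AiLibrary`, `generate_test_data`, the `_EchoClient` stand-in) are invented for illustration; a real version would put the OpenAI or Gemini SDK behind the client interface.

```python
# Hypothetical sketch, not the workshop's actual code: a minimal Robot
# Framework keyword library that delegates text generation to a pluggable
# LLM client, so an OpenAI or Gemini SDK can sit behind one interface.

class AiLibrary:
    """Keywords backed by a chat-completion client."""
    ROBOT_LIBRARY_SCOPE = "SUITE"

    def __init__(self, client=None):
        # The client only needs a complete(prompt) -> str method; a real
        # implementation would wrap the OpenAI or Gemini SDK here.
        self._client = client or _EchoClient()

    def generate_test_data(self, description):
        """Usable as 'Generate Test Data' in a .robot file."""
        return self._client.complete(f"Generate test data: {description}")

    def document_keyword(self, keyword_name):
        """Ask the model for a one-line keyword description."""
        return self._client.complete(f"Document the keyword: {keyword_name}")


class _EchoClient:
    """Offline stand-in used when no real LLM client is supplied."""
    def complete(self, prompt):
        return f"[stub] {prompt}"
```

In a suite this would be imported with `Library    AiLibrary`; Robot Framework maps the method `generate_test_data` to the keyword `Generate Test Data` automatically.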
David Fogl
Beyond Static: Demystifying Dynamic and Hybrid Libraries in Robot Framework
Mar 03, 08:00 AM (UTC) | 7 hrs
By Michael Hallik, Henrik Schütte

Unlock the power of dynamic and hybrid libraries in Robot Framework!

Have you ever struggled with maintaining multiple test libraries or exposing modular Python code cleanly to Robot Framework? The Robot Framework library API provides the solution!

This workshop teaches you how the dynamic library API works and shows you when and how to use it for cleaner, more scalable test automation.

You'll build a unified test library from modular Python components, learn how keywords are exposed at runtime and gain practical skills for structuring maintainable RF test frameworks.

Description

Unlock the power of Robot Framework’s lesser-known dynamic and hybrid library APIs.

As automated testing scales across large, integrated systems, test frameworks must strike a balance between clean usability for testers and modular maintainability for developers. In this workshop, we’ll explore how Robot Framework’s dynamic library API enables just that.

Imagine testing dozens of RESTful services: user, orders, payments and many, many more. You also need to deal with authentication & authorization. With static libraries, you'd need a tangle of imports and exposed implementation details in every test suite. Instead, we’ll show how to build a single, unified RF library that dynamically loads service-specific Python modules at runtime. This allows test authors to use concise, readable test cases like Create New User or Verify Payment Received through one import, regardless of where the logic lives under the hood.

You’ll learn when and why to choose dynamic (or hybrid) libraries over static ones, how to implement them cleanly, and how this approach empowers cross-functional teams to scale their test automation without creating a mess.

This workshop is designed for test engineers and automation developers who want to move beyond simple static libraries and build flexible, scalable, and maintainable keyword libraries. Through a hands-on example based on REST API testing, you’ll learn how dynamic libraries work under the hood, when and why to use them, and how to structure Python-based libraries that expose clean, unified interfaces to test authors.

Key Takeaways

  • Understand the differences between static, dynamic, and hybrid libraries in Robot Framework.
  • Learn when and why to use dynamic or hybrid libraries in real-world test automation projects.
  • See how Robot Framework discovers and executes keywords at runtime through the dynamic and hybrid API.
  • Build maintainable dynamic and hybrid libraries that pull together multiple Python modules under one import.
  • Write clean, high-level test cases while keeping the complexity hidden in well-structured code.
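As a taste of the mechanism the workshop explores, here is a hedged, minimal sketch of the dynamic library API (the module and keyword names are invented): a class implementing `get_keyword_names` and `run_keyword`, so keywords collected from plain Python modules appear behind a single Robot Framework import.

```python
import types

class UnifiedApiLibrary:
    """Dynamic RF library: keywords gathered from several Python modules."""
    ROBOT_LIBRARY_SCOPE = "GLOBAL"

    def __init__(self, *modules):
        self._keywords = {}
        for module in modules:
            for name, func in vars(module).items():
                if callable(func) and not name.startswith("_"):
                    # create_new_user -> "Create New User" in test cases
                    self._keywords[name.replace("_", " ").title()] = func

    def get_keyword_names(self):
        # Robot Framework calls this at import time to discover keywords.
        return sorted(self._keywords)

    def run_keyword(self, name, args, named=None):
        # Every keyword invocation is dispatched through this one method.
        return self._keywords[name](*args, **(named or {}))


# Demo with an invented in-memory "service module":
users = types.ModuleType("users")
def create_new_user(name):
    return f"created {name}"
users.create_new_user = create_new_user

lib = UnifiedApiLibrary(users)
```

A test case can then simply call `Create New User    alice` after one `Library` import, regardless of which module holds the implementation.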

Knowledge Level

  • Robot Framework: Intermediate (understanding of test case syntax, keyword usage, and test suite structure)
  • Python: Intermediate (modules, classes, methods, basic Python syntax)
  • HTTP/REST APIs: Basic understanding of REST concepts and HTTP verbs (GET, POST, PUT, DELETE)
  • Python Requests Library: Preferable, but not required (we will walk through usage in examples)

Agenda

1. Introduction

  • 1.1 Why RF scripting isn't always enough
  • 1.2 Introduction to the RF library APIs: Static vs. Dynamic vs. Hybrid
  • 1.3 Taking a peek under the hood

2. Use Case: REST API Testing Framework

  • 2.1 Overview of the demo application (multi-service REST API)

3. Static Library

  • 3.1 Build the library
  • 3.2 Using the Library in Tests
  • 3.3 Limitations of the static approach

...

Michael Hallik
Michael is a test automation specialist with over two decades of hands-on experience in software testing, with a focus on Python-based automation and Robot Framework. He has worked in a variety of roles across industries, helping teams improve the structure and maintainability of their test automation. Michael is continuously working to improve his technical skills in order to build well-designed, future-proof test automation solutions. He currently works at Cistec and is the author of the Robot Framework XmlValidator test library.
Henrik Schütte
Henrik Schütte is a Senior Software Quality Engineer at imbus, specialized in Robot Framework and web automation as trainer and developer. Since 2021, he developed deep expertise in Robot Framework API development and led the web automation team at imbus TestBench. As an active contributor to the Robot Framework community, Henrik maintains open-source projects and regularly conducts training sessions on keyword-driven testing and automation with Robot Framework. He is recognized for his in-depth knowledge and hands-on experience in web automation and Robot Framework APIs.

Talks

Pre-Show
Mar 04, 06:50 AM (UTC) | 10 min
RoboCon Online Opening & Robot Framework Updates
Mar 04, 07:00 AM (UTC) | 1 hr
By René Rohner, Miikka Solmela, Pekka Klärck

Kick off RoboCon Online with Miikka and René sharing Foundation updates and community highlights, followed by Pekka Klärck presenting the latest developments and roadmap for Robot Framework.

René Rohner
René Rohner brings a wealth of experience in test automation to the forefront of the software testing community. As the Chairman of the Robot Framework Foundation and a key developer of several projects within the Robot Framework ecosystem, including the innovative "Robot Framework Browser," René is dedicated to improving testing practices and methodologies. With a background in consulting and training, his work focuses on practical solutions and tools that address the real-world challenges faced by testers and engineers today. René is also an author and a Principal Consultant at imbus AG in Germany, where he continues to contribute to the field of test automation through his expertise and passion for teaching.
Miikka Solmela
Pekka Klärck
Pekka Klärck is the inventor and lead developer of Robot Framework. He started the project in 2005 as part of his master’s thesis at Helsinki University of Technology (now Aalto University) and has been steering its development ever since. Pekka is known not only for his technical expertise but also for his dedication to fostering an open-source community. He actively collaborates with contributors worldwide and regularly shares his insights at conferences and events.
Break
Mar 04, 08:00 AM (UTC) | 30 min
Case Study: AI-Enhanced Test Automation Solution for a Major Bank Using Robot Framework
Mar 04, 08:30 AM (UTC) | 30 min
By Yibo Wang, Hazem Khaled

For a major German bank, we automated SAP testing using Robot Framework. The solution verifies data mapping, interfaces, data initialization, and regulatory reports. Test cases run in CI/CD pipelines, with results synced to Jira/Xray. As part of QA, the automation validates test artifacts and generates reports via Jira Structure. In addition, Generative AI supports both test automation and QA, including a pull request analyzer that aligns PRs with Jira stories, ensuring traceable, maintainable, and auditable testing across SAP environments.

For a major bank in Germany, we built a comprehensive test automation solution for SAP landscapes using Robot Framework — designed to enhance consistency, traceability, and speed in quality assurance within a highly regulated banking environment.

The solution automates end-to-end SAP workflows, including:

  • Data mapping validation across applications
  • Inbound and outbound interface checks
  • Verification of data initialization
  • Validation of regulatory reports

Existing test cases are automated with Robot Framework and integrated into a CI/CD pipeline. Test results are automatically synchronized with Jira/Xray test executions via the Jira API, ensuring full traceability across requirements, tests, and defects.

The automation also performs formal validation of test artefacts in Jira (test cases, test plans, and test executions) and generates comprehensive reports using Jira Structure.
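As an illustration of the synchronization step (the endpoint path and payload shape here are assumptions for this sketch, not the project's actual integration), a CI job might assemble the result upload roughly like this:

```python
# Hedged sketch: how a CI job could push a Robot Framework output file to a
# Jira/Xray-style REST endpoint after a pipeline run. The URL path and auth
# scheme are illustrative assumptions.

import urllib.request

def build_upload_request(base_url, token, output_xml_bytes):
    """Prepare (but do not send) the HTTP request that uploads results."""
    return urllib.request.Request(
        url=f"{base_url}/import/execution/robot",   # assumed Xray-like path
        data=output_xml_bytes,
        method="POST",
        headers={
            "Content-Type": "application/xml",
            "Authorization": f"Bearer {token}",
        },
    )

req = build_upload_request("https://jira.example.com/api", "TOKEN",
                           b"<robot>...</robot>")
# urllib.request.urlopen(req) would perform the actual upload in CI.
```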

We also leverage Generative AI to support and enhance the test automation process. A key differentiator is our AI-driven quality assurance: a business-centric Pull Request Analyzer verifies whether pull requests align with corresponding Jira user stories, ensuring functional accuracy and completeness.

AI-based checks further validate both manual and automated test scripts — evaluating their structure, completeness, and clarity against internal quality standards and coding guidelines. This ensures high-quality documentation, easier maintenance, and more reliable test implementation.

In this talk, we will share our journey of integrating Robot Framework with SAP testing, CI/CD, Jira/Xray, and AI-based quality assurance. Attendees will learn how to scale Robot Framework in enterprise SAP environments and how AI can elevate both automation and documentation quality.

Yibo Wang
Yibo Wang is a Senior Manager for Quality Engineering at Accenture and leads the Practice Groups “Cloud Testing” and “Security Testing.” With over 15 years of experience in test management, test automation, and software architecture, he develops modern testing strategies and integrates AI solutions into complex projects. His credo: sustainable quality is achieved only through the combination of technology, methodology, and teamwork.
Hazem Khaled
I'm a Test Automation Engineer at Accenture with 7 years of experience in quality assurance. I focus on building scalable, end-to-end automation frameworks across Web, API, Desktop, and Databases. A key area of my expertise is validating complex enterprise systems, including SAP, and integrating robust test suites directly into CI/CD pipelines to enable fast, reliable delivery for clients in the Banking, Automotive, and Public Sectors.
Back In To Queue With robotframework-ibmmq
Mar 04, 09:00 AM (UTC) | 30 min
By Elout van Leeuwen, Niels Janssen

IBM MQ is the backbone of asynchronous communication in today’s complex microservice landscapes, trusted by governments and enterprises for mission-critical reliability. Yet the Robot Framework ecosystem lacked native IBM MQ support, forcing testers into fragile workarounds far from production reality. That’s why we built robotframework-ibmmq: a pymqi-powered wrapper enabling seamless, production-like MQ interaction in automated tests. In this talk, we’ll share our journey, our challenges, and how robotframework-ibmmq takes test automation to the next level.

Hi, our names are Niels Janssen and Elout van Leeuwen. We’re test automation engineers, and for the past year we have worked for the Employee Insurance Agency (UWV) in the Netherlands. UWV is known for its microservice landscape. These microservices mostly interact through IBM MQ (message queues). A message queue is essentially a mailbox: one application puts something into the mailbox, while another application takes things out of it.

The advantage of using IBM MQ is that both services do not need to be up and running at the same time to communicate (it is asynchronous). Instead, one application can put a message on the queue and go offline, while the other application can get the message from the queue whenever it comes online. There are many more use cases and variations for message queues; this is just a simple explanation to help grasp the concept.

IBM MQ is still one of the most used message queue middleware applications within government organizations to date. While working with message queues at our current client, we noticed a lack of support for automating message queues with Robot Framework. Because of this shortcoming, we saw workarounds to simulate queues, for example by forcing services to use a Windows directory as the “queue” and placing “messages” into that folder. Other services were configured to read from this directory instead of a real queue, and tests/assertions were done on the files within these directories.

But of course this is not ideal, because in testing we should always strive towards a test environment that is as close to the production environment as possible, as discussed in the TMAP literature. In our search for a suitable solution we stumbled upon a Python package called ‘pymqi’, which makes automating IBM MQ possible, but it was not yet properly integrated into the Robot Framework ecosystem. This is why we created robotframework-ibmmq: it acts as a wrapper for pymqi and makes interacting with queues possible directly from within Robot Framework.
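To make the mailbox idea concrete in code, here is a sketch using Python’s standard `queue` module purely as an analogy for a message queue; it is not IBM MQ or the library itself, and the message text is invented:

```python
from queue import Queue

mailbox = Queue()

def producer(queue):
    # Drop a message in the mailbox; the producer can now go offline.
    queue.put("payment-received: order 42")

def consumer(queue):
    # Whenever the consumer comes online, it picks the message up.
    return queue.get(timeout=1)

producer(mailbox)
message = consumer(mailbox)   # "payment-received: order 42"
```

robotframework-ibmmq plays the same role for real queues, exposing the put/get interactions as Robot Framework keywords.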

Elout van Leeuwen
Elout van Leeuwen is a RFCP certified Test Automation specialist, trainer and manager with years of Robot Framework experience. He has a strong focus on scalable automation strategies. Elout brings a unique blend of technical precision and human connection to every project and presentation. He is a board member of the Foundation and an ambassador for Robot Framework.
Niels Janssen
Niels Janssen is a Test Automation Engineer with experience in building and maintaining test automation frameworks using Robot Framework and Python. He has worked on backend (APIs, databases, IBM MQ) and frontend automation, created custom libraries, and integrated solutions into CI/CD pipelines with Azure DevOps. Niels is also an experienced trainer in Robot Framework and passionate about keeping test automation simple and effective, following his KISS mentality: Keep It Simple, Stupid!
Break
Mar 04, 09:30 AM (UTC) | 30 min
AI-Powered Bug Classification and Creation from Robot Framework Test Reports
Mar 04, 10:00 AM (UTC) | 30 min
By Mohamed Sedky, Rwan Al-Halwan

Discover how AI and Large Language Models (LLMs) can revolutionize software quality assurance by transforming Robot Framework test reports into actionable bug insights. This talk introduces an automated pipeline that classifies, summarizes, and creates bug tickets directly from Robot Framework results — integrating seamlessly with tools like TFS and Jira. Attendees will learn how to bridge testing and defect management intelligently.

Modern QA teams generate thousands of Robot Framework test logs and reports, but extracting meaningful insights from them — especially identifying and documenting bugs — remains a manual and time-consuming process.

This session presents a novel AI-driven Bug Classification and Creation framework, leveraging Large Language Models (LLMs) to automatically interpret Robot Framework outputs and turn them into structured bug reports.

Key topics covered:

Parsing and enriching Robot Framework test results with metadata (suite, test, logs, screenshots).

Using LLMs to analyze failure patterns and generate human-readable bug summaries.

Intelligent bug classification: functional vs. performance vs. environment issues.

Automated bug creation: seamlessly pushing reports to TFS, Jira, or any modern ALM tool via APIs.

Integration patterns and architecture design for hybrid setups (on-prem or cloud).

Real-world demo: converting a Robot Framework test log into a detailed, ready-to-triage bug ticket.
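As a sketch of the first step in such a pipeline (the XML below is a hand-simplified stand-in for a real Robot Framework output file, and `collect_failures` is an invented helper, not the speakers' code), failures can be extracted so they can later be fed to an LLM as context:

```python
import xml.etree.ElementTree as ET

# Hand-simplified stand-in for a real output.xml produced by Robot Framework.
SAMPLE_OUTPUT = """
<robot>
  <suite name="Checkout">
    <test name="Pay With Card">
      <status status="FAIL">Element 'pay-button' not visible after 10s</status>
    </test>
    <test name="View Cart">
      <status status="PASS"/>
    </test>
  </suite>
</robot>
"""

def collect_failures(xml_text):
    """Return (suite, test, message) triples for every failed test."""
    failures = []
    root = ET.fromstring(xml_text)
    for suite in root.iter("suite"):
        for test in suite.iter("test"):
            status = test.find("status")
            if status is not None and status.get("status") == "FAIL":
                failures.append((suite.get("name"), test.get("name"),
                                 (status.text or "").strip()))
    return failures

# Each triple would then be embedded in an LLM prompt such as
# "Classify this failure as functional / performance / environment: ..."
```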

Takeaways:

Learn how to connect Robot Framework’s structured outputs with LLM reasoning.

See practical steps to automate defect triage and documentation.

Understand how this approach reduces human effort, increases accuracy, and accelerates release cycles.

This talk is ideal for QA engineers, automation leads, and AI enthusiasts seeking to bridge the gap between test automation and intelligent defect management.

Mohamed Sedky
Full-Stack Lead Software Engineer in Test with 13 years of experience helping Software Engineers in Test promote automation as a culture. Specializing in Robot Framework's AppiumLibrary and RequestsLibrary and in AI solutions built with Robot Framework, Mohamed uses that experience to enhance and spread the usage of Robot Framework and to make its libraries better and easier to use.
Rwan Al-Halwan
AI and Backend Engineer specializing in designing intelligent automation ecosystems that merge backend architecture with AI and large language models (LLMs). Experienced in building self-learning QA platforms capable of automated test generation, root-cause analysis, and dynamic reporting pipelines.
RFSwarm Update
Mar 04, 10:30 AM (UTC) | 30 min
By Dave Amies, Arkadiusz Kuczyński

An update on what's been happening with RFSwarm since RoboCon 2024, and where we are headed with RFSwarm.

What's new with RFSwarm:

  • New features that have been added
  • Contributions to RFSwarm by NiceProject, with an introduction of Arkadiusz, who will give a short talk about his contributions and the benefits of contributing to Robot Framework ecosystem projects
  • RFSwarm tutorial videos
  • RFSwarm LinkedIn group

Where we are headed with RFSwarm:

  • Planned features
  • More tutorial videos
Dave Amies
I am a performance testing professional who created RFSwarm to reduce the duplication of test development in test automation between functional and performance testing.
Arkadiusz Kuczyński
Student at Wroclaw University of Science and Technology. QA Intern at NiceProject.
Break
Mar 04, 11:00 AM (UTC) | 30 min
Robot Framework SchemathesisLibrary, what it is for and why I did it?
🔗
Mar 04, 11:30 AM (UTC) | 30 min
By Tatu Aalto

I built a new library to ease testing of REST interfaces that have an OpenAPI schema. This talk shows how that library works, and how good the tools we already have in the Robot Framework ecosystem are at helping library developers. I also want to highlight my personal motivation for building yet another library for users to use and for me to maintain as a developer.

At RoboCon 2025 there were many talks that pointed me in the direction of the Schemathesis project. After some reading and trying Schemathesis out on some dummy projects, I thought it looked really interesting, because Schemathesis promises to automatically generate thousands of test cases from an OpenAPI or GraphQL schema and to find edge cases that break your API. It also fits me nicely, because API testing is a blind spot of mine: although I am familiar with APIs and have done some API testing in the past, I am not very proficient with OpenAPI schemas.

When I started creating SchemathesisLibrary, I set out a few goals for myself. First, I should learn how to build a REST service with modern Python tools and how using OpenAPI schemas enables automatic test case generation. Secondly, the project should give me a better background for talking at work about building REST services and why writing an OpenAPI schema is a good idea.

Did I achieve all my goals? Well, to be honest, only partially. But while building SchemathesisLibrary I kept discovering features of Schemathesis, Robot Framework, DataDriver, and many other things. So although I did not reach all my goals, along the way I found new paths to explore and learn from. In conclusion, the project can be considered successful from my perspective, and I hope it is also useful for the community.
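To illustrate the idea that attracted the speaker, here is a toy from-scratch generator, not Schemathesis's actual API or the library's code: it mechanically derives boundary-value payloads from a simplified schema dictionary, so edge cases are covered without hand-writing each case.

```python
import itertools

def edge_values(prop):
    """Return boundary values for one simplified schema property."""
    if prop["type"] == "integer":
        lo, hi = prop.get("minimum", 0), prop.get("maximum", 100)
        return [lo, hi, lo - 1, hi + 1]      # in-range and out-of-range
    if prop["type"] == "string":
        n = prop.get("maxLength", 10)
        return ["", "a" * n, "a" * (n + 1)]  # empty, max, too long
    return [None]

def generate_cases(schema):
    """Cartesian product of per-property edge values -> candidate payloads."""
    names = list(schema)
    pools = [edge_values(schema[n]) for n in names]
    return [dict(zip(names, combo)) for combo in itertools.product(*pools)]

# Invented example schema for a user payload:
user_schema = {"age": {"type": "integer", "minimum": 0, "maximum": 130},
               "name": {"type": "string", "maxLength": 5}}
cases = generate_cases(user_schema)   # 4 * 3 = 12 candidate payloads
```

Schemathesis does this far more thoroughly, using property-based testing over full OpenAPI/GraphQL schemas; the sketch only conveys the shape of the idea.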

Tatu Aalto
Tatu Aalto has been doing testing for 25 years and has been part of the Robot Framework ecosystem for 15 years. He maintains several projects in the Robot Framework world, mainly focusing on UI automation and library development. He is currently the lead developer of the Browser library and is working on the new SchemathesisLibrary. He works at OP, where he builds testing capabilities as a service.
From 7 Tools to One: How Robot Framework United Automation Across a Complex Enterprise
Mar 04, 12:00 PM (UTC) | 30 min
By Haziz CISSE

As head of QA, I introduced Robot Framework as a single automation platform to replace six tools used across multiple departments. Without any API available, I integrated it with ALM-QC and built a full ecosystem for data generation, functional automation, and end-to-end testing. Over 80 users are now trained and have automated hundreds of tests. We built a one-click installer that sets up Robot Framework and all its libraries to ease installation for users. Beyond the technology, this initiative created a unified QA culture and made automation accessible to everyone. Now some people even want to use it for RPA purposes.

Haziz CISSE
As an expert in transforming quality practices, I assist IT departments in structuring their testing strategy, automating tests, and integrating QA into Agile teams. With over 15 years of experience, I have led rationalization initiatives, designed automation frameworks, trained over 100 QA professionals, and brought the vision of quality to top management. My approach is pragmatic, educational, and resolutely results-oriented.
Break
Mar 04, 12:30 PM (UTC) | 30 min
Automating Map Operations and Testing in QGIS with Robot Framework
Mar 04, 01:00 PM (UTC) | 30 min
By Michal Pilarski

Automation in Geographic Information Systems (GIS) is vital for reliability and efficiency in spatial data processing and testing. This paper introduces a framework for automating open-source QGIS UI operations using Robot Framework, PyAutoGUI, and PyWinAuto. It enables automated map interactions - creating, editing, and validating spatial features - through reusable, readable test/task keywords. The approach streamlines testing, reduces manual effort, and improves reliability in geospatial workflows. Please check: QGISLibrary (https://pypi.org/project/QGISLibrary/)

Automation in Geographic Information Systems (GIS) is increasingly essential for ensuring consistency, reliability, and efficiency in spatial data processing and map-based software testing. This paper presents a comprehensive approach to automating user interface (UI) operations and tests within QGIS (Quantum GIS), the leading open-source GIS, using Robot Framework. The focus is placed on automating map interactions, such as creating, editing, and validating spatial features, including points, lines, and polygons, directly within the QGIS Desktop graphical environment.

The proposed automation framework integrates QGIS locators (Qt5, Qt6) as UI objects, the PyWinAuto and PyAutoGUI Python libraries to automate UI operations, and Robot Framework to design, execute, and report tests. Combining these technologies makes it possible to automate workflows that typically require extensive manual effort, such as digitizing vector layers, snapping features, setting symbology, and performing topological validation.

Through Robot Framework’s structured and modular test design, each QGIS UI action, like drawing geometries (for example, drawing a river as a line on the map canvas), can be expressed as a reusable, human-readable keyword. These keywords abstract low-level operations, enabling QGIS analysts or geographers to build complex automated scenarios without deep programming expertise.

Overall, this work contributes to the field of geospatial software engineering by providing a replicable strategy for automating tests of spatial UI workflows, especially for plugins in open-source GIS platforms. It highlights how Robot Framework streamlines quality assurance processes, accelerates development cycles, and enhances the reliability of spatial data operations. The result is a powerful, flexible testing solution that empowers GIS professionals and developers to ensure that map creation, editing, and analysis tools function correctly across diverse environments and datasets, without repetitive manual validation.
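The keyword-abstraction idea can be sketched as follows. This is a hedged illustration with invented class and method names; QGISLibrary's real API may differ. The GUI backend is injected, so that pyautogui or pywinauto could sit behind it in a real setup while tests can use a recorder:

```python
class MapCanvasKeywords:
    """Invented example of wrapping low-level GUI actions as RF keywords."""
    ROBOT_LIBRARY_SCOPE = "TEST"

    def __init__(self, backend):
        # backend must offer click(x, y) and hotkey(*keys); in production
        # this would delegate to pyautogui/pywinauto against the QGIS canvas.
        self._backend = backend

    def draw_line_feature(self, points):
        """Digitize a line by clicking each vertex, then finish with Enter."""
        for x, y in points:
            self._backend.click(x, y)
        self._backend.hotkey("enter")


class RecordingBackend:
    """Test double that records the GUI calls instead of performing them."""
    def __init__(self):
        self.calls = []

    def click(self, x, y):
        self.calls.append(("click", x, y))

    def hotkey(self, *keys):
        self.calls.append(("hotkey",) + keys)
```

A test author would then write `Draw Line Feature    ${river_points}` without ever touching screen coordinates or key codes directly.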

Michal Pilarski
During his career, Michal has been always connected with geospatial data and GIS geoprocessing. He likes to find and overcome challenges in Testing Big Data with geometry attributes. He has experience in preparing the testing strategies for ETL systems that extract, transform and load massive geospatial data. His technology stack is related to Python, Pytest, ArcGIS, QGIS, FME, Robot Framework, HP ALM, QTest, and Geopandas. Additionally he teaches young students Python coding in Minecraft.
RoboView
Mar 04, 01:30 PM (UTC) | 30 min
By Marc David Sutjipto, Julian Blanke

Test automation with Robot Framework has become an integral part of many projects. Over time, these test automations grow, and keeping track of the numerous created keywords and file structures becomes increasingly unwieldy. To counter this, the RoboView tool has been developed with the goal of improving keyword management and providing deeper insights into one's projects to support refactoring.

Since keywords are the fundamental building blocks of tests, RoboView specifically concentrates on them. The objective of this approach is to provide users with a clear and organized display. Both tabular representations and visual views in the form of graphs are utilized, allowing users to quickly gain an overview and then conduct more detailed investigations at a granular level.

The tool will be offered as a VSCode extension, as the Robot Framework community predominantly uses extensions in this format. This approach enables us to reach the majority of users for RoboView. Additionally, providing it as a VSCode extension allows for straightforward installation and usage of our tool.
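As a rough illustration of the kind of keyword inventory such a tool starts from (this is not RoboView's code, which ships as a VSCode extension; the helper and sample file are invented), keyword definitions can be scraped from a resource file's `*** Keywords ***` section:

```python
def keyword_names(resource_text):
    """Return keyword names defined in a Robot Framework resource string."""
    names, in_keywords = [], False
    for line in resource_text.splitlines():
        stripped = line.rstrip()
        if stripped.startswith("***"):
            # Section header: only collect inside *** Keywords ***
            in_keywords = "keyword" in stripped.lower()
            continue
        # Keyword definitions start at column 0; their steps are indented.
        if in_keywords and stripped and not line[0].isspace():
            names.append(stripped)
    return names

RESOURCE = """\
*** Settings ***
Library    Collections

*** Keywords ***
Open Checkout Page
    Log    opening

Verify Cart Total
    Log    verifying
"""
```

From such an inventory, usage counts and call graphs can be built to support the tabular and graph views the tool provides.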

Marc David Sutjipto
Marc studies Business Information Systems at the University of Münster and works at viadee Unternehmensberatung AG. In the AI & Test Automation team, he focuses on combining natural language processing (NLP) with test automation.
Julian Blanke
Julian is also part of the AI & Test Automation team at viadee Unternehmensberatung AG. He focuses on Robot Framework test automation and develops tools around Robot Framework, including both classic solutions and AI supported extensions.
Break
Mar 04, 02:00 PM (UTC) | 30 min
Panel Discussion
Mar 04, 02:30 PM (UTC) | 1 hr
By René Rohner, Tatu Aalto, Many Kasiriha, David Fogl

Join the live-streamed panel discussion hosted by Joe Colantonio!

The main topic is Robot Framework and AI, but the audience may engage throughout the session and ask questions to the panel.

René Rohner
Tatu Aalto
Tatu Aalto has been doing testing for 25 years and has been part of the Robot Framework ecosystem for 15 years. He maintains several projects in the Robot Framework world, mainly focusing on UI automation and library development. He is currently the lead developer of the Browser library and is working on the new SchemathesisLibrary. He works at OP, where he builds testing capabilities as a service.
Many Kasiriha
Many Kasiriha is a QA Engineer at Schenker AG with 16+ years in testing and a board member of the Robot Framework Foundation since 2022. He specializes in test automation training and maintains open source Robot Framework libraries. Based in Düsseldorf, he's a speaker at RoboCon and a father who can't switch off his testing mindset.
David Fogl
Second Day Opening
Mar 05, 04:00 PM (UTC) | 15 min
Database Library Update
Mar 05, 04:15 PM (UTC) | 15 min
By Andre Mochinin

The Database Library has had multiple releases in the last two years with quite a lot of changes. The talk gives an overview and details of the most important ones.

Andre Mochinin
Test automation architect, consultant and trainer. Current maintainer and developer of the [Database Library](https://github.com/MarketSquare/Robotframework-Database-Library).
What’s New in RobotDashboard: Smarter Insights, Improved Interfaces, Enhanced Usability
Mar 05, 04:30 PM (UTC) | 15 min
By Tim de Groot

Over the past year, RobotDashboard has evolved from a simple visualization tool into a mature open-source project. In this session, I’ll share lessons learned from building and maintaining this tool, highlight new features like custom database integrations, built-in server capabilities, customizable layouts, additional pages, and an improved interface and performance. I will also demonstrate how these improvements enable truly data-driven insights such as spotting flaky tests, identifying long-running suites, and detecting regressions earlier.

Over the past year, RobotDashboard has evolved from a simple visualization tool into a mature open-source project that helps teams turn Robot Framework test results into actionable insights. In this session, I will share lessons learned from building, maintaining, and growing RobotDashboard. This includes challenges faced when supporting multiple Robot Framework versions, incorporating community feedback, and deciding which features to implement and prioritize. These experiences offer valuable insights into maintaining an open-source project, balancing user needs with technical constraints, and ensuring long-term usability and adoption.

I will also highlight the new features that make RobotDashboard more powerful and flexible than ever. These include custom database integrations, which let teams store test results in a way that fits their infrastructure; built-in server capabilities, enabling real-time access to both the database and the dashboard; customizable layouts, allowing teams to tailor the dashboard to their needs; and an improved interface, providing faster and more intuitive navigation of complex test results.

The session will also show how these enhancements translate into deeper, data-driven testing insights. Attendees will see how RobotDashboard can help spot flaky tests, identify long-running suites, detect regressions earlier, and analyze trends across multiple test runs. By combining historical data with the new features, teams can move from simply reporting test outcomes to understanding patterns and making better testing decisions.

Through practical demonstrations, real-world examples, and lessons learned from maintaining an open-source tool, this talk will provide attendees with both inspiration and actionable takeaways for improving their testing workflows. You will leave with a clear understanding of how to extract more value from your test results using RobotDashboard.

Tim de Groot
Tim de Groot is a test automation engineer with over five years of experience, currently working at TestCoders in the Netherlands. He has extensive hands-on experience with multiple programming languages and testing tools, including Python, JavaScript, Java, Playwright, Cypress, Selenium, and Robot Framework. While he has worked with many tools, Robot Framework holds a special place in his work, both for its versatility in test automation and for its strong, supportive open-source community. For the past 1.5 years, Tim has been developing and maintaining RobotDashboard, an open-source tool that transforms Robot Framework test results into actionable insights. Passionate about the Robot Framework ecosystem and open-source software, he actively engages with the community and encourages anyone with questions or ideas about Robot Framework or RobotDashboard to reach out via GitHub or the Robot Framework Slack!
Robocop: Funded Improvements and What's New
🔗
Mar 05, 04:45 PM (UTC) | 15 min
By Bartłomiej Hirsz

An overview of the recent funded development work on Robocop and Robotidy. The session highlights key improvements, ecosystem integrations, and what this means for Robot Framework users.

Bartłomiej Hirsz
QA and DevOps enthusiast who loves the idea of open source community. Developer of Robot Framework tools such as Robocop or Robotidy.
Break
Mar 05, 05:00 PM (UTC) | 30 min
RoboMonX == Robot Framework Test Status Monitoring for Xray
🔗
Mar 05, 05:30 PM (UTC) | 30 min
By Ivaylo Brüssow, Andrej Nod

RoboMonX is shaking up test automation and how it's documented: with a real-time connection between Robot Framework and Xray for Jira, test results are sent incrementally. This gives you instant transparency, early error detection, and more efficient decisions in the development process. You only have one place to look, with no more disruptive switching between tools.

In modern software development, the integration of test automation into test management is a key success factor for quality assurance. In this talk we will present RoboMonX: a novel solution for dynamically linking test results from Robot Framework with the test management tool Xray for Jira.

In contrast to conventional approaches, which transfer results only at the end of the test execution, RoboMonX enables an incremental, event-driven update of the test plan in Xray. Each test case is submitted to Xray immediately after execution, providing a real-time view of test progress and results in the test management system.

RoboMonX addresses the limitations of traditional integration approaches and offers significant benefits in terms of transparency, responsiveness and efficiency of the test process. The early detection of deviations and the continuous availability of up-to-date test results provide support for informed decision making in the development process.

Problem definition: Inefficient test reporting and delayed feedback of automated test case results, making it difficult to respond to defects in a timely manner.

Our approach: Development of a customized integration solution between Robot Framework and Xray using event-driven mechanisms for real-time transmission of test results.

Results: Increased transparency and dynamic visualization of test progress in Xray. Early detection of inefficiencies and potential risks in the test process. Improved decision making through timely availability of test results. Potential for increased efficiency and quality in software development.

Target Audience: The talk is aimed at professionals in software development and quality assurance: test automation experts, developers, QA leads, and product owners dealing with current test management challenges and solutions.

During the talk we will present RoboMonX and the results achieved, and also discuss the potential for future developments.

Ivaylo Brüssow
Ivaylo Brüssow (Ivo) has been working in test automation across various industries for more than 14 years. For the past 5 years, he has worked at Provinzial Versicherung AG as the technology manager for test automation. There, he works in the competence center for test management and test automation, which is responsible for providing and further developing the central automation framework, workflows, and libraries, as well as for student support. His focus is on test automation, test design, workflows, and integration, as well as consulting and support for test automation projects at Provinzial AG. Ivo organises a corporate and local community and has been a proud Robot Framework ambassador since the programme began.
Andrej Nod
“Yes we can do that with Robot Framework!!!” - The Art of Convincing Leaders to use Robot Framework
🔗
Mar 05, 06:00 PM (UTC) | 30 min
By Rohith Ram Prabakaran

In today’s fast-paced tech landscape, we work with a wide range of tools and technologies. However, convincing non-technical stakeholders, such as business users and leadership teams, to adopt a particular tool or framework can often be challenging.

In this talk, we will explore ideas, proven techniques and practical strategies to effectively communicate and convince people to use Robot Framework, build stakeholder confidence, and drive organizational adoption.

As a Technical Pre-Sales Professional and Advisory Automation Solution Architect, I often work with global clients on automation proposals and consulting engagements. Given the right fit, it’s relatively easy to convince technical teams to use Robot Framework, but the real challenge lies in influencing leadership and business stakeholders — who often make final decisions based on factors like cost, support, and ecosystem dependencies.

In this talk, we’ll walk through a complete process for understanding an organization’s automation landscape and effectively positioning Robot Framework as the right choice — both technically and strategically.

We’ll explore key unique selling points (USPs) of the framework, including:

  • Ease of use
  • Flexibility and adaptability
  • Support for various tech stacks
  • Extensive library ecosystem and availability of ready-made libraries
  • Room for customization
  • Vast community support
  • Favorable comparison with other licensed and low-code tools on the market

This session aims to equip Robot Framework enthusiasts and practitioners with practical insights on how to make a compelling pitch for adopting the framework.

Rohith Ram Prabakaran
Break
Mar 05, 06:30 PM (UTC) | 30 min
KeyTA 2.0: The easiest way to use Robot Framework
🔗
Mar 05, 07:00 PM (UTC) | 30 min
By Marduk Bolanos

KeyTA is a web app that allows anybody to get started using Robot Framework. It does this by providing a simple user interface that combines the strengths of a REPL, a spreadsheet and a web browser. As a result, it augments both the Robot Framework DSL and the execution engine with new features: auto-looping over lists, execution of individual keywords, test execution starting from any step, and many more. This talk will provide a live demo using the Browser library showcasing the advantages of using KeyTA for web automation.

KeyTA is a simple web interface designed with the goal of allowing anybody to quickly get started using Robot Framework. It is optimized for user comfort and thus aims to provide a fast feedback loop. In particular, individual keywords can be directly executed and test cases can be resumed from the step that failed.

KeyTA was born out of the necessity to enable domain experts with no programming knowledge to leverage Robot Framework to automate processes and tests. They are used to working with graphical user interfaces (e.g. Excel, SAP) and they want to stay in this familiar environment when automating tasks they usually perform by hand.

KeyTA is being developed at NRW.Bank, the state development bank of the federal state of North Rhine-Westphalia in Germany, and a member of the Robot Framework Foundation. The core of the application was released by the bank as open-source software and imbus continues its development on GitHub.

This talk will provide a live demo that should serve as an introduction for new users. A short test case will be created from scratch using the Browser library. Along the way several features of KeyTA will be illustrated and the advantages of using it for web automation will become apparent.

Marduk Bolanos
Marduk is a Senior Software Quality Engineer at imbus and a perfectionist as defined by Saint-Exupéry. He is passionate about creating simple software that empowers people to use a computer as the tool they need it to be. Best known as the author of RoboSAPiens and KeyTA, he also enjoys giving talks and workshops.
From Flaky Chaos to Clear Signals: PyCharm's UI Test Observatory
🔗
Mar 05, 07:30 PM (UTC) | 30 min
By Denis Mashutin

The PyCharm QA team stopped chasing green and switched to an "observability over stability" approach. This talk will share our workflows for monitoring trends and tell the story of creating the 100% vibe-coded, stateless solution that builds real-time views from API requests, highlights similar failures, and draws attention to regressions.

Like many teams, we used to treat UI tests as something that must be green. For months after introducing them, the PyCharm QA team fought flakiness, managed mutes across environments, and tried to keep up with monorepo changes from hundreds of developers. We shifted to monitoring trends instead of day-to-day statuses and chose a bird’s-eye view of the system over inspecting single failures in a specific build or environment.

This talk shares our approach and the lightweight tool that enables it. The TestKeeper Service is a 100% vibe-coded solution with no FTE spent. Its stateless architecture builds views in real time from API requests, with no deployment or database maintenance. Instead of showing which tests failed, our service focuses on trends, highlights similar failures, and draws attention to cases where we should reproduce the failure manually.

Attendees will learn the following:

  • The workflows we developed to enable the observability approach and complement the tool: recognising typical patterns of trends, standard steps to reproduce the issue, and distinguishing problems in the product from defects in tests
  • Real cases from PyCharm: how we manage to spot and catch regressions against the background noise of flakiness
  • Guardrails that we use to balance extending the coverage and fixing defects in tests, in addition to our overall approach to developing new tests
  • How a stateless, zero-FTE, API-based service can deliver a significant impact, and how to apply a similar design in your context

The main goal of the talk is to provide evidence that observability over stability is a valid direction for developing a testing framework, especially for UI tests and complex systems. I want to show colleagues a better alternative to spending man-hours on fixing flaky tests, and how a vibe-coded internal tool became a game changer in the quality assurance infrastructure of PyCharm.

Denis Mashutin
Denis Mashutin is a Software Test Automation Engineer at JetBrains, responsible for PyCharm’s UI tests. A serial career switcher, he transitioned from Arabic technical translation in the Middle East into software development and has already tried out quite a few roles in IT: technical writer, documentation lead & DocOps engineer, QA engineer, test automation engineer. Along the way, he introduced docs‑as‑code practices, built automation frameworks, and developed tools to accelerate development workflows and ensure quality. He is now focused on building a robust, informative UI test infrastructure for PyCharm, with an emphasis on improving release quality and developer experience.
Break
Mar 05, 08:00 PM (UTC) | 30 min
Bringing Robot Framework to the Factory Floor: Production Automation for Embedded Systems
🔗
Mar 05, 08:30 PM (UTC) | 30 min
By Paweł Wiśniewski

Robot Framework is a powerful tool in development and QA—but its usefulness doesn't stop there. In this talk, I’ll demonstrate how we apply Robot Framework in a production environment to validate embedded hardware during manufacturing.

You'll see how we use Robot Framework to automate hardware validation during manufacturing—from the moment an assembled PCB arrives, through functional testing, to the final checks before shipping the product.

Paweł Wiśniewski
Paweł has been involved in software engineering for many years. He has developed software for microcontrollers and FPGAs. Besides software development, he helps improve software development processes by incorporating engineering practices into day-to-day work. He has been involved in creating continuous integration infrastructures, optimizing build processes, test automation, and the automation of repeatable tasks. At embeff he leads the development of a next-generation test tool for microcontroller code.
Speed up test automation: 5 levels of caching
🔗
Mar 05, 09:00 PM (UTC) | 30 min
By Sander van Beek

The key to fast tests is to do fewer things. Reusing previously done work is a great way of doing fewer things without changing what your tests do. Learn about 5 levels of caching to speed up your test runs.

"Let me quickly fix that test before I log off for the day". Before you know it, it's 20:00, you're still running tests, you're really hungry for some inexplicable reason, you see the tests doing the same thing over and over again, you're ready to throw your laptop out of the window, if it would only open but even the window is being difficult (your phone is blowing up), the doorbell rings, and aâ̶̊ͅar̷̡̟͋̕͠g̵̣̰̫̉̆͠hh̸͖͙̃̈h!

Bad test performance is a universal annoyance. "Quickly" running some tests can take forever. But it can also be really hard to figure out how to speed things up. The result? Blankly staring at your screen, getting distracted, and annoyance slowly building up until you ~rage quit~ give up for the day.

To rid myself of this frustration, I make my tests faster. Fundamentally, there are only 2 core principles to speeding up your tests without impacting their contents:

  • Do things simultaneously — Maximize CPU usage
  • Do fewer things — Reduce CPU time

Caching is a way of doing fewer things. In Robot Framework, there are 5 levels of caching:

  1. Test variable: store a value and reset it when the test finishes.
  2. Suite variable: store a value and reset it when the test suite finishes.
  3. Global variable: store a value and reset it when the test run finishes.
  4. Pabot variable: store a value, share it with parallel test runners, and reset it when all tests finish.
  5. Cache file: store a value, share it with parallel test runners and with future test runs, and reset it when the expiration time has passed.
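
The first four levels map onto BuiltIn and PabotLib keywords. Here is a hedged sketch, assuming a hypothetical expensive keyword `Fetch Fresh Token` (PabotLib ships with the separate robotframework-pabot package, not with Robot Framework itself):

```robotframework
*** Settings ***
Library    pabot.PabotLib    # only needed for level 4 (requires pabot)

*** Keywords ***
Cache An Auth Token
    # Fetch Fresh Token is a hypothetical, expensive keyword whose
    # result we want to reuse instead of recomputing each time.
    ${token}=    Fetch Fresh Token
    Set Test Variable      ${TOKEN}    ${token}    # level 1: gone after this test
    Set Suite Variable     ${TOKEN}    ${token}    # level 2: gone after this suite
    Set Global Variable    ${TOKEN}    ${token}    # level 3: lives for the whole run
    # Level 4: share the value across parallel pabot workers
    Set Parallel Value For Key    token    ${token}
```

Level 5 has no dedicated built-in keyword; it is typically a timestamped file written and read with OperatingSystem keywords or a small custom library.
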
Sander van Beek
I'm a technical tester who focuses on automation. To me, a test that's worth doing is also worth automating. I combine my technical expertise with the human and organizational sides of testing. Technical solutions are great, but the bigger challenge is making people use them at scale. I think it's fun to create something new. I think complexity is fun and more complexity is more fun. I use my programming skills to tame complexity.
Break
Mar 05, 09:30 PM (UTC) | 30 min
Moving away from global resource files by utilizing AI: a case study
🔗
Mar 05, 10:00 PM (UTC) | 30 min
By Silken Kleer

Do you use an "import everything" file throughout your codebase? Do you encounter maintenance headaches as a result? Do you have good intentions of addressing this but are having trouble making it a priority? This talk moves beyond the theory on why these files are an anti-pattern, and provides strategies and insights from a real-world example of eliminating these global resource files. By leveraging AI, we can reduce the grunt work involved and make this previously overwhelming refactoring challenge much more achievable.

Drawing from a real refactoring project, this talk provides concrete techniques for breaking up global resource files with the assistance of AI.

Topics covered:
  • How we got here: Background on why codebases often implement this practice.
  • Motivation: Why reducing reliance on global resource files is desired.
  • AI memory simulation: Track keyword and variable definitions to aid in import redistribution.
  • IDE integration: Combine diagnostic tools with AI to guide refactoring.
  • Context management: Handle AI limitations when working across many files.
  • Import cleanup: Detect and address unnecessary imports AI may introduce.
  • Practical validation: Balance thoroughness with practicality when checking AI output.

Statistics on the codebase size and complexity will be provided, helping participants assess how these approaches will scale to their own projects.

Most importantly, participants will be inspired to tackle similar work in their own codebases.

Silken Kleer
Silken is a Software Tester at SMART Technologies where she has worked on web and mobile automation. She contributes to CI/CD infrastructure and coordinates cross-team automation initiatives. She has experience working with Robot Framework listeners, parsing test results, and customizing reports. She also investigates AI-assisted automation tools, using Claude Code for tasks like multi-repository refactoring and workflow improvements.
Robot Framework RPA and AI Agents: A Powerful Combination
🔗
Mar 05, 10:30 PM (UTC) | 30 min
By Joshua Gorospe

The field of automation is constantly evolving. There is currently a common misconception that LLMs and AI agents are only useful for the vibe-coding trend, which has captured the tech industry's attention since the start of 2025 and sparked many discussions across social media platforms. There are also 60+ agent projects and 4,000+ MCP projects tracked on pulse.com today. My talk will demonstrate how combining Robot Framework's ecosystem with locally running AI agents and LLMs can be a powerful combination.

This presentation will give a high-level walkthrough and demonstration of how Robot Framework RPA can be combined with local AI agents, MCP, and various LLM types to enhance their capabilities. The talk will cover the following main topics.

  • Brief introduction to the open source AI agent, LLM, and MCP ecosystem landscape.
  • Overview of Codename Goose (https://block.github.io/goose/), an open source AI agent framework developed by Block (Jack Dorsey's company).
  • How Ollama (https://ollama.com/) can be used to set up a locally running private LLM instance on your own hardware.
  • Walkthrough/demonstration of the basic design of using Robot Framework RPA to automate sequential tasks with Codename Goose on a local LLM that can run on anyone's hardware.
  • The Codename Goose Docker container, and an overview of situations where some models are too big and demanding for personal hardware.
  • Walkthrough of the basic design and building blocks of using Robot Framework RPA to automate parallel tasks with parallel Codename Goose Docker containers connected to a cloud AI product such as Google Gemini.

The public GitHub repo containing all of the automation demonstrations mentioned above: https://github.com/jg8481/Robot-Framework-AI-Agent-Datadriver

Joshua Gorospe
Joshua Gorospe is a Staff Test Engineer at webAI with roughly 20 years of experience in the tech industry, helping fellow testers craft test strategies for various companies and products. He is very interested in Web3, blockchain, AI agents, model-based testing, James Bach's Rapid Software Testing methodology, and combining them with Robot Framework RPA. Fun fact: he was combining Robot Framework with conversational AI chatbots years before ChatGPT became famous. Joshua uses that experience to create and maintain Robot Framework community projects on GitHub, give Robot Framework presentations and workshops at conferences, write Robot Framework articles on medium.com, and continue writing his book.
RoboCon Closing
🔗
Mar 05, 11:00 PM (UTC) | 10 min
By Ed Manlove

Ed Manlove will close the conference with brief closing remarks and a big thank-you.

Ed Manlove
Ed Manlove is the lead maintainer of SeleniumLibrary and a long-time member of the Robot Framework community. He helps build up the community, working throughout the ecosystem to connect projects, people, and organizations. You can see his contributions and bio on his [GitHub profile](https://github.com/emanlove).

Tutorials & Community Day

Community Day EMEA (Free)
🔗
Mar 06, 07:00 AM (UTC) | 4 hrs
By Miikka Solmela

This is your day to set the agenda. Community Day is a free “unconference,” where attendees propose and vote on topics at the start. It’s a vibrant, hands-on space for sharing ideas, learning, and getting direct help from experts in the ecosystem. Everyone is welcome!

-> Join RoboCon Space

Miikka Solmela
AI-Aided Software Development – Becoming an AI-Ready Engineer
🔗
Mar 06, 11:00 AM (UTC) | 2 hrs
By Ismo Aro

-> Join Live Stream <-

Can one engineer design, implement, and validate a full feature in real time with AI? In this hands-on session, you’ll see exactly how: prompt an AI coding agent, drive development with Robot Framework tests, and ship a Like feature across backend + frontend on local machines. No hype, no black box, just practical patterns for becoming an AI-ready engineer who moves faster while keeping quality under control.

AI coding tools are everywhere, but most teams still struggle with the same question: how do we use them for real engineering work without losing quality, control, or trust?

This session is a practical, hands-on walkthrough of an AI-aided development workflow that engineers can actually apply on Monday. We’ll use a local fullstack project and build a real feature together while keeping quality gates in place from start to finish.

What you’ll see

  • How to frame prompts so AI produces useful, reviewable code
  • How to run test-first development with Robot Framework as the safety net
  • How to implement a feature incrementally across backend and frontend
  • How to validate each step instead of “hoping” generated code is correct
  • How to keep human engineering judgment at the center of the process

Core idea

AI does not replace engineering discipline. It amplifies disciplined engineers. In this talk, AI is treated as a coding collaborator: fast, helpful, and fallible. Robot Framework tests are the contract that keeps implementation honest. Together, they create a workflow where speed and quality reinforce each other instead of competing.

Why this matters

Many teams experiment with AI coding but get stuck in one of two extremes:

  • blind trust (“it compiles, ship it”)
  • total skepticism (“AI output is unusable”)

We’ll show a middle path: high-velocity delivery with explicit quality controls.

Ismo Aro
Ismo Aro is a partner and CTO at NorthCode. His professional focus is on modernizing the way companies work.
Tutorial on Automation with Image Recognition Libraries - SikuliLibrary (and ImageHorizonLibrary)
🔗
Mar 06, 01:00 PM (UTC) | 2 hrs
By Hélio Guilherme

!! Canceled due to technical issues ... 🥺

This tutorial is about using image recognition libraries to automate tasks or testing when it is costly or difficult to obtain object identifiers in the applications under test. We will use the SikuliLibrary and ImageHorizonLibrary libraries to automate applications whose internal components we know nothing about. This is what is called Black Box Testing.

We will practice automating a Login in a VMWare/VirtualBox Windows system and then doing some actions.

Contents:

  • About Image Recognition Libraries - SikuliLibrary (and ImageHorizonLibrary)
  • Knowing the Java based SikuliX IDE and its possibility to run Robot Framework test cases.
  • SikuliLibrary: -- Installation -- Planning the Test Suites file structure -- Defining Test Cases and Resources -- Running Test Suites
  • Combining SikuliLibrary keywords with ImageHorizonLibrary
  • Practice in automating a Login in a VMWare/VirtualBox Windows system and then doing some actions.

About: Image recognition libraries are used to automate tasks or testing when it is costly or difficult to obtain object identifiers in the applications under test. These libraries use Computer Vision (OpenCV) to match reference images against a copy of the computer screen, and Optical Character Recognition (OCR) for text extraction. With these techniques and operating system actions like mouse movements and keyboard strokes, the system can replicate the actions of a human user.

SikuliLibrary is a Robot Framework library that provides access to the SikuliX Java API. It uses Robot Framework Remote to interface Python functions with the SikuliX Java libraries, so it requires a Java Runtime Environment installed on your system. -- diagram from project: https://github.com/MarketSquare/robotframework-SikuliLibrary/blob/master/docs/img/architecture.png -- The usual workflow for a Test Case or Task is:

  • Import SikuliLibrary and start its server
  • Define the location for the reference images
  • Start the Application Under Test (AUT)
  • Interact with the AUT by actions of mouse, keyboard, matching of reference images on the screen, and Optical Character Recognition (OCR) for text extraction.
  • Complete the workflow by stopping the server.
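
The workflow above could look roughly like this in Robot Framework. Keyword names follow SikuliLibrary's documented API as best I recall; the image files, application path, and timeout are illustrative placeholders only:

```robotframework
*** Settings ***
Library    SikuliLibrary    # manages the Java remote server around the run

*** Test Cases ***
Login Through The Screen
    # Tell the library where the reference screenshots live (placeholder dir)
    Add Image Path    ${CURDIR}${/}images
    # Launch the application under test (placeholder path)
    Open Application    C:${/}AUT${/}app.exe
    # Interact purely via image matching: no object identifiers needed
    Input Text    username_field.png    demo_user
    Input Text    password_field.png    s3cret
    Click    login_button.png
    # Assert the login succeeded by waiting for a known screen region
    Wait Until Screen Contain    dashboard_header.png    10
```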

SikuliX IDE:

  • Installation of the SikuliX IDE, which requires Java
  • Creating and Running a Test Case with SikuliX IDE

SikuliLibrary:

  • Installation
  • Planning the Test Suites file structure
  • Defining Test Cases and Resources
  • Running Test Suites

ImageHorizonLibrary is a Robot Framework library, based on pyautogui and other Python modules, and optionally opencv-python for adjusting the image recognition precision. This library does not have Optical Character Recognition (OCR) keywords. Similarly to SikuliLibrary, it uses reference images to interact with the AUT on the screen. We can say that the usual workflow is the same as the one with SikuliLibrary, except for the server and OCR parts.

Combining SikuliLibrary keywords with ImageHorizonLibrary: -- Installation of ImageHorizonLibrary -- Adjusting Test Suites to use SikuliLibrary and ImageHorizonLibrary simultaneously (conflicting keyword names) -- Running Test Suites
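
When both libraries are imported at once, same-named keywords can be disambiguated with Robot Framework's standard full-name syntax. A minimal sketch, assuming placeholder image names (the `reference_folder` import argument is ImageHorizonLibrary's way of locating reference images):

```robotframework
*** Settings ***
Library    SikuliLibrary
Library    ImageHorizonLibrary    reference_folder=${CURDIR}${/}images

*** Test Cases ***
Use Both Libraries Side By Side
    # Prefixing a keyword with its library name resolves name clashes,
    # the same way it works for any pair of Robot Framework libraries.
    SikuliLibrary.Click    ok_button.png
    ImageHorizonLibrary.Click Image    cancel_button.png
```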

Practice in automating a Login in a VMWare/VirtualBox Windows system and then doing some actions.

Hélio Guilherme
Hélio Guilherme has been an experienced Software Tester since 2008, when he first came into contact with Robot Framework at Nokia Networks in Lisbon, Portugal. During his work he has used all of Robot Framework's internal libraries, as well as others such as SikuliLibrary, SSHLibrary, SeleniumLibrary, SwingLibrary, Browser, RequestsLibrary, and AppiumLibrary. He is currently the lead developer and maintainer of RIDE (https://github.com/robotframework/RIDE/) and maintainer of SikuliLibrary (https://marketsquare.github.io/robotframework-SikuliLibrary/). He says he does not know whether "he is a Software Tester who likes to do Software Development, or a Software Developer who likes to do Software Testing". Professionally, he is a DevOps and QA Engineer at LOAD in Aveiro, Portugal (https://load.digital/).
Community Day Americas (Free)
🔗
Mar 06, 04:00 PM (UTC) | 6 hrs
By Ed Manlove

This is your day to set the agenda. Community Day is a free “unconference,” where attendees propose and vote on topics at the start. It’s a vibrant, hands-on space for sharing ideas, learning, and getting direct help from experts in the ecosystem. Everyone is welcome!

-> Join RoboCon Space

Ed Manlove
Ed Manlove is the lead maintainer of SeleniumLibrary and a long-time member of the Robot Framework community. He helps build up the community, working throughout the ecosystem to connect projects, people, and organizations. You can see his contributions and bio on his [GitHub profile](https://github.com/emanlove).

Community Day

This is your day to set the agenda. Community Day is a free “unconference,” where attendees propose and vote on topics at the start. It’s a vibrant, hands-on space for sharing ideas, learning, and getting direct help from experts in the ecosystem.

To cover time zones, we’ll host two sessions:

  • EMEA Community Day – ~4h

  • Americas Community Day – ~4h

Both take place in Gather.Town, our interactive online world where you join with your avatar, meet others, and keep discussions flowing in a fun, spontaneous way.

Tutorials

These tutorials are complimentary for all ticket holders and will take place between the Community Days, starting at 12:00 CET. You’re welcome to drop in and out as needed, but you’ll get the most value—and a complete learning experience—by staying for the full session.

Watch Parties

This year, we’re introducing a special way to gather locally in Watch Parties and enjoy the RoboCon talks together. Your host will also arrange an additional program to make the most of the day. Depending on the setup, this might include a hands-on workshop, a tutorial, or just casual drinks and food. Some parties may run in the morning or evening, before or after the main program.

We’ll publish the list of companies hosting Watch Parties closer to the event. If you’d like to host one, get in touch!