From speeding up documentation and development to rethinking how requirements are built, AI is now part of our day-to-day workflows and real-world projects delivered to clients. And the results? Tangible. Measurable. Exciting.
These results aren’t projections or assumptions. They’re measured outcomes from real projects, real teams, and real codebases.
80%
less time spent on generating and maintaining public class documentation (measured with Cursor)
50%
faster creation of structured technical documentation for complex systems
30%
less effort to reach up to 100% unit test coverage, depending on the codebase
20%
faster identification and resolution of code violations through integrated tooling
15–20%
time savings for junior developers using AI tools, a velocity boost measured in real, controlled experiments across teams
How AI has improved our software development
We’ve moved beyond testing. AI is part of how we deliver real value. These results come straight from our projects, where carefully implemented AI tools are making a measurable difference every day.
Reduced time to onboard developers into new domains
especially in regulated industries (thanks to AI knowledge assistants like L.E.A.P.)
Higher quality requirements via AI-supported decomposition
using EARS syntax, reducing ambiguity and review time
Improved cross-team alignment
by capturing and reusing internal know-how via Retrieval-Augmented Generation systems
Fewer regression bugs and faster root-cause analysis
powered by AI-guided traceability and test generation
Documented AI usage patterns and tool performance tracking
ensure continuous feedback and safety in deployment
Safe integrations with our AI implementations
designed to align with ASPICE, ISO 26262, and other safety and security standards
Hear from our expert
As a QA Lead, I integrated ChatGPT into my daily workflow to support tasks like UML analysis, test case generation, and API test scripting (e.g., Postman, pytest). It significantly accelerated the creation of edge case scenarios and helped convert user stories into Gherkin format for BDD. I also used it to validate JSON structures, prepare test data, and generate technical documentation drafts. The model reduced repetitive manual work and improved consistency across test artifacts. For any QA working in fast-paced environments, it’s a highly effective tool.
Marcin Sikorski
Lead QA Automation Software Engineer
Implemented, observed and measured:
Insights you can put into practice
Across teams and domains, we test, tweak, and track how AI changes the way we work. Here are real insights from real projects.
Through practical trials and developer feedback, we’ve learned:
AI is most valuable in repeatable, pattern-based tasks (e.g. documentation, interface scaffolding, boilerplate code)
AI is less helpful for abstract conceptual tasks unless guided by well-prepared prompts
Time savings are more likely in integration, testing, and automation tasks than in creative architecture design
With this clarity, we focus on adding real value instead of layering on complexity just for the sake of it.
We ran hands-on experiments to see how AI really impacts developer speed. To make it measurable, we tested tools in real workflows. Here’s what works for us:
Side-by-side comparisons of AI-assisted vs. non-assisted teams on real project tasks
Use of Scrum metrics like team velocity and completion time for epics/stories
Analysis of AI effectiveness across seniority levels (Junior, Regular, Senior) and technology familiarity
We tracked four metrics:
Task completion time
Code quality
Team velocity
Developer satisfaction
We’ve seen up to 20% faster development, especially when building early-stage prototypes or jumping into new tech stacks.
We built and tested our own audit framework to see where AI really fits. It helps us evaluate AI’s impact in real projects, so we’re not guessing, but making informed decisions.
This goes beyond hype-driven adoption. It’s a structured evaluation of where AI provides value, with metrics like delivery speed, code quality, and how ready the team is to adopt new tools.
We don’t silo innovation. Every AI success becomes a shared asset across the organisation. When an insight or workflow works in one domain, we don’t let it stop there: we scale it.
We applied unified AI tooling, adoption standards, and documentation best practices across:
Automotive: High-assurance coding environments using GitHub Copilot and Cursor for quality-critical software, fully aligned with ASPICE and ISO 26262
Robotics: Applied LLM-powered requirement decomposition, mocking frameworks (like rtest), and accelerated CI/CD workflows in ROS 2 projects
Industrial automation: Used AI-supported documentation and test acceleration for PLC and embedded C/C++ code in production lines
Healthcare (early-stage): Piloted AI-assisted architecture design and requirement generation under IEC 62304 and ISO 13485 constraints
To make AI-assisted development truly useful, we’ve developed internal tools and frameworks that bring clarity, consistency, and measurable impact across teams.
They help us track what’s working, improve what’s not, and keep pushing boundaries.
Prompt libraries for common development scenarios
We’ve built prompt libraries that actually work.
From generating unit tests in embedded C++, to refactoring old Java code, to translating safety requirements into EARS syntax, we have a growing collection of prompts tailored to real engineering challenges.
They’re versioned, domain-specific, and help our teams get faster, more consistent results with AI tools.
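To make the idea concrete, here is a minimal sketch of how a versioned, domain-specific prompt library can be structured. All names (`PromptTemplate`, `get_prompt`, the example entry) are illustrative, not our actual internal API; a real library would live in version control rather than in memory.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptTemplate:
    """One versioned entry in a prompt library (illustrative structure)."""
    name: str
    domain: str          # e.g. "embedded-cpp", "java-refactoring"
    version: str
    template: str        # placeholders filled in at call time

    def render(self, **params: str) -> str:
        return self.template.format(**params)

# A tiny in-memory library keyed by (name, domain, version).
LIBRARY = {
    ("unit-tests", "embedded-cpp", "1.2"): PromptTemplate(
        name="unit-tests",
        domain="embedded-cpp",
        version="1.2",
        template=(
            "Write GoogleTest unit tests for the C++ function below. "
            "Cover boundary values and error paths.\n\n{source_code}"
        ),
    ),
}

def get_prompt(name: str, domain: str, version: str, **params: str) -> str:
    """Fetch a specific prompt version and fill in its parameters."""
    return LIBRARY[(name, domain, version)].render(**params)
```

Pinning prompts to explicit versions is what makes results reproducible: when a prompt is refined, teams can compare output quality across versions instead of silently drifting.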
CI/CD integrations that track AI use against delivery metrics
We’ve embedded telemetry right into our pipelines to track exactly how AI tools impact story point burn-down, test coverage, and overall team velocity. These insights show us clearly where AI is making a real difference — and where it’s not quite hitting the mark.
Engineering leads gain instant, real-time visibility into the ROI of AI adoption, helping them steer their teams smarter and faster.
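The core of such telemetry is a simple comparison between cohorts. The sketch below shows the shape of that computation on hypothetical per-story records; the field names and numbers are made up for illustration, not real project data.

```python
from statistics import mean

# Hypothetical per-story records as pipeline telemetry might collect them.
stories = [
    {"points": 5, "hours": 12.0, "ai_assisted": True},
    {"points": 3, "hours": 10.0, "ai_assisted": False},
    {"points": 8, "hours": 20.0, "ai_assisted": True},
    {"points": 5, "hours": 18.0, "ai_assisted": False},
]

def hours_per_point(records, ai_assisted):
    """Average delivery hours per story point for one cohort."""
    cohort = [r for r in records if r["ai_assisted"] == ai_assisted]
    return mean(r["hours"] / r["points"] for r in cohort)

def ai_speedup(records):
    """Relative time saving of the AI-assisted cohort (0.2 == 20% faster)."""
    baseline = hours_per_point(records, ai_assisted=False)
    assisted = hours_per_point(records, ai_assisted=True)
    return (baseline - assisted) / baseline
```

Normalising by story points rather than raw hours keeps the comparison fair when the two cohorts pick up work of different sizes.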
AI response quality analysis and prompt refinement tools
We’ve developed handy tools to assess the quality of AI-generated outputs, checking accuracy, coverage, and formatting against what we expect. These utilities can even suggest improved prompts and create alternative versions for easy comparison.
That means prompt engineering stops being a guessing game and turns into a smooth, repeatable process.
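A quality check like this can be as simple as a scoring heuristic. The function below is a hedged sketch of the idea, not our production tooling: it zeroes out responses that were supposed to be valid JSON but are not, and otherwise scores coverage of required terms.

```python
import json

def score_output(output: str, required_terms: list[str], expect_json: bool = False) -> float:
    """Score an AI response between 0 and 1 (illustrative heuristic).

    If JSON is expected, an unparseable payload scores 0; otherwise the
    score is the fraction of required terms present in the output.
    """
    if expect_json:
        try:
            json.loads(output)
        except ValueError:
            return 0.0
    if not required_terms:
        return 1.0
    hits = sum(term.lower() in output.lower() for term in required_terms)
    return hits / len(required_terms)
```

Scores like these make prompt A/B comparisons mechanical: run two prompt versions over the same inputs, average the scores, keep the winner.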
AI usage dashboards for team leads
Our AI usage dashboards provide engineering and project leads with insights into how and where AI is being used. They show tool adoption rates, output quality trends, developer feedback, and correlations with sprint performance.
Leaders can govern AI usage across distributed teams without micromanaging.
Our battle-tested solutions
From dev work to documentation and support, we’ve pinpointed the spots where AI actually moves the needle, and we’re putting that knowledge to work across real projects.
Accelerated code documentation and test coverage
AI-supported requirements engineering
Customer support
AI knowledge management
Accelerated code documentation and test coverage
We integrate tools like GitHub Copilot and Cursor to:
Reduce time spent on public class documentation by up to 80%
Cut effort to reach up to 100% unit test coverage by 30%, where applicable
Identify and fix code violations 20% earlier in the development process
AI-supported requirements engineering
We also use LLMs to break down requirements with EARS syntax, improving:
Clarity of stakeholder/system requirements
Detection of underspecified or corner-case behaviour
Compliance with safety-related design standards (e.g. IEC 61508, ASPICE)
Efficiency of reviews and approval workflows
This includes hands-on validation in client automotive projects and internal quality checks for completeness and consistency.
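EARS (Easy Approach to Requirements Syntax) constrains requirements to a small set of templates, which is what makes automated checks possible. The sketch below is a minimal, simplified linter for the common EARS openers; the patterns are deliberately reduced (real reviews use the full pattern set), and the function names are illustrative.

```python
import re

# Simplified EARS templates; a full checker would cover complex
# combinations and validate the system name against a glossary.
EARS_PATTERNS = [
    r"^The \S.* shall .+",                 # ubiquitous
    r"^When .+, the \S.* shall .+",        # event-driven
    r"^While .+, the \S.* shall .+",       # state-driven
    r"^If .+, then the \S.* shall .+",     # unwanted behaviour
    r"^Where .+, the \S.* shall .+",       # optional feature
]

def matches_ears(requirement: str) -> bool:
    """Return True if the requirement fits one of the simplified templates."""
    return any(re.match(p, requirement) for p in EARS_PATTERNS)

def lint(requirements: list[str]) -> list[str]:
    """Return the requirements that do NOT follow EARS syntax."""
    return [r for r in requirements if not matches_ears(r)]
```

For example, "When the ignition is turned on, the ECU shall run a self-test." passes, while "The system should probably log errors somewhere." is flagged, because the mandatory "shall" clause is missing.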
Customer support
At Spyrosoft, we put AI to work in customer support – thoughtfully integrated to cut response times, reduce operational costs, and uncover fresh efficiencies.
Automate ticketing and case classification
Accelerate troubleshooting and root cause analysis
Support faster, data-driven decision-making
Power smart knowledge bases, chatbots, voicebots, and AI agents
AI knowledge management
We built and deployed domain-specific Retrieval-Augmented Generation (RAG) systems for:
Fast, secure access to internal documentation, specs, and code
Onboarding, debugging, reverse engineering, and system design
Integration with CI/CD pipelines and developer tools
This provides engineers with instant, contextual answers from internal sources. No external data exposure, no retraining required.
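The retrieval step is what grounds answers in internal sources. The sketch below shows the shape of that step under a strong simplification: naive word overlap stands in for the vector-embedding similarity search a production RAG system would use, and the function names are illustrative.

```python
def retrieve(query: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank internal documents by naive word overlap with the query.

    A production RAG system would use vector embeddings and an index;
    plain word overlap stands in for similarity search here.
    """
    q = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:top_k]]

def build_prompt(query: str, documents: dict[str, str]) -> str:
    """Assemble a grounded prompt: retrieved context first, question last."""
    context = "\n".join(documents[d] for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Because only retrieved snippets enter the prompt, the base model needs no retraining and no internal data ever leaves the deployment boundary.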
We’re not just implementing AI — we’re also creating it.
Qt AI Assistant
Co-created with Qt, this Copilot-like tool for QML/Qt is transforming how developers code.
XGenie
A domain-specific AI co-pilot we developed for the insurance industry to support underwriting.
CodeLlama-13B-QML
We open-sourced a custom fine-tuned LLM with Qt on Hugging Face.
Have a look at some use cases of AI-assisted development in actual commercial and R&D environments.
Automotive and embedded software projects
Integrated Cursor into workflows to automate documentation, detect violations, support 100% unit test coverage, align with ASPICE & ISO 26262.
Automotive safety systems & spec writing
LLMs used to transform unclear requirements using EARS, detect ambiguities and invalid transitions, support IEC 61508 workflows.
Knowledge reuse & developer support
Central prompt repository for bug fixing, refactoring, test generation; tracked, versioned, and improved over time.
Internal engineering enablement & onboarding
AI-native Q&A engine (LEAP) with RAG, secure on-prem, reduces onboarding time and supports legacy system understanding.
Benchmarking AI impact in real projects
Controlled experiments show 5–20% faster development; tested on BLE mobile app and .NET enterprise systems.
Safety- and security-critical development
Custom wrappers for traceability, controlled I/O, on-prem use, no external APIs; used in embedded & defence-adjacent systems.
DevSecOps observability & AI governance
AI-generated code tracked across commits; integrated with GitLab, snapshots, Copilot metrics.
Regression test creation & bug detection
LLMs generate tests from signatures, summarise CI regressions, detect risky dependencies.
R&D projects & feature design
AI assists with unclear requirements and rapid prototyping, helping teams explore new tech.
AI support across the company
From developers and testers to product managers and CTOs, we’ve seen how AI speeds things up, improves quality, and makes everyday tasks smoother for every role on the team.
Developers
Code generation and refactoring, AI-assisted onboarding, Copilot prompt libraries, faster documentation, smoother onboarding via domain-specific RAG systems.
Testers
Unit test generation, regression test gap detection, anomaly detection, automated bug reproduction, AI-assisted test coverage analysis.
Managers and CTOs
AI opportunity audits, identifying high-impact integration areas, monitoring AI usage dashboards, scaling AI tools across projects and domains, CI/CD-integrated velocity tracking, measuring AI’s ROI on delivery, workforce enablement, reducing cost-to-deliver across engineering teams.
Support teams
Smart ticket classification and routing, 24/7 technical assistant agents, troubleshooting automation, AI-driven chatbots and voicebots tailored to client needs.
Contact us
Curious how AI could support your development process?
AI-assisted software development is not a trend. It’s the next chapter of software engineering, and it’s happening now.
Whether you’re looking to speed things up, cut costs, or simply get more out of your team, we’re ready to share what we’ve learned and help you make it real.