For years, cybersecurity in electronics design was treated as a “nice to have”. Everyone agreed it mattered. Many assumed it applied mainly to internet-connected products. And plenty of teams quietly hoped that if their device didn’t run Linux or talk to the cloud, it would stay out of scope.
The EU Cyber Resilience Act (CRA) changes that assumption for good.
It doesn’t introduce flashy new technology or radical security concepts. Instead, it draws a hard line between good engineering practice and legal obligation. If you design, manufacture, or place electronics containing software or firmware on the EU market, the CRA applies to you whether your product connects to the internet or not.
What is the Cyber Resilience Act?
The Cyber Resilience Act is a new EU regulation that introduces mandatory cybersecurity requirements for “products with digital elements”. It now sits alongside CE marking, EMC, and product safety legislation.
The principle is simple but far-reaching – if your product contains software and can communicate with something else, you are responsible for its cybersecurity.
The regulation entered into force in December 2024. Vulnerability reporting obligations begin in September 2026, with full compliance required from December 2027. From that point on, non-compliant products cannot legally be placed on the EU market.
What makes the CRA different is not what it asks manufacturers to do, but that it makes cybersecurity mandatory, evidence-based, and ongoing.
Why the CRA changes the rules
Cybersecurity guidance has existed for years. What’s been missing is accountability.
Before the CRA, security was often optional, bolted on late, poorly documented, or quietly ignored when issues appeared in the field. The CRA changes the question from “did you try?” to “can you show your reasoning?”
It doesn’t demand perfect security. It demands intentional, proportionate decisions that can be justified with evidence.
This is why supporting standards now matter. In particular, EN 18031-1 has become the practical reference for translating CRA requirements into real engineering decisions. If the standard feels impenetrable, we’ve translated it into plain English for engineers.
Which products are in scope?
One of the biggest misconceptions is that the CRA only applies to internet-connected devices. It doesn’t.
The CRA applies to products with digital elements that can communicate – directly or indirectly – with other devices or networks. That communication doesn’t have to be IP-based or cloud-connected.
Wi-Fi and Ethernet devices are obvious examples, but so are products using Bluetooth, sub-GHz radios (such as 868 MHz), proprietary RF links, mesh networks, commissioning tools, maintenance ports, or firmware update interfaces.
If your device listens, stores data, makes decisions, or updates its behaviour based on incoming information, it’s in scope.
“Not internet-connected” is no longer a defence
Attackers don’t care whether a protocol is proprietary or IP-based. They care whether they can inject messages, replay traffic, impersonate devices, extract secrets, or change behaviour.
That’s why EN 18031-1 focuses on interfaces, assets, and threat scenarios, not technologies. It gives teams a structured way to decide what applies, what doesn’t, and why. And crucially, to document those decisions in a way regulators understand.
Does every requirement apply to every product?
No, and this is where engineering judgement still matters.
The CRA is explicitly risk-based. Not every requirement applies to every product, but none can be ignored. Requirements are either applicable or justifiably not applicable, and you must be able to explain which is which.
Declaring something irrelevant because it’s inconvenient won’t fly. Declaring it irrelevant because you’ve assessed the risk and documented the reasoning will.
This distinction is subtle, and it’s where many teams struggle when they first encounter EN 18031-1.
What the CRA actually expects in practice
In engineering terms, CRA expectations cluster around a few clear themes:
Reduced attack surface and secure defaults – Exposed debug interfaces, unnecessary services, and “temporary” features are much harder to justify.
Access control and authentication – If something can configure, control, or update your device, you need to be clear about who can do what and how that’s enforced.
Proportionate communication security – The CRA doesn’t require TLS everywhere, but it does expect integrity, authenticity, and replay protection where messages influence behaviour. “Proprietary” is not a security argument.
Protection of sensitive assets – Where keys, credentials, or sensitive configuration exist, the risks of extraction and cloning must be considered even if absolute protection isn’t possible.
Secure update mechanisms – If a product can be updated, authenticity, integrity, and recovery behaviour matter. If it can’t, that decision must be explicit and justified.
Ongoing responsibility – Cybersecurity doesn’t end at shipment. The CRA requires defined support periods, vulnerability handling, and reporting of actively exploited issues.
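To make the communication-security theme concrete, here is a minimal sketch of message authenticity with replay protection: a shared-key HMAC tag plus a monotonic counter, so a receiver can reject both forged and replayed frames. All names here are ours for illustration, not from the CRA or EN 18031-1, and a real design would also need proper key provisioning and secure key storage.

```python
import hmac
import hashlib
import struct

def build_frame(key: bytes, counter: int, payload: bytes) -> bytes:
    """Prepend a 4-byte monotonic counter and append an HMAC-SHA256 tag."""
    header = struct.pack(">I", counter)
    tag = hmac.new(key, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def verify_frame(key: bytes, last_counter: int, frame: bytes):
    """Return (counter, payload) if the frame is authentic and fresh.

    Raises ValueError if the tag does not verify (forgery/corruption)
    or if the counter has not advanced (replayed or stale frame).
    """
    header, payload, tag = frame[:4], frame[4:-32], frame[-32:]
    expected = hmac.new(key, header + payload, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking tag bytes via timing.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    counter = struct.unpack(">I", header)[0]
    if counter <= last_counter:
        raise ValueError("replayed or stale frame")
    return counter, payload
```

The point is not this particular construction: it is that even a proprietary sub-GHz link can carry integrity, authenticity, and replay protection cheaply, and that the decision to include or omit each property is exactly the kind of documented, risk-based choice the CRA expects.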
How compliance is judged
CRA compliance isn’t about a single test or a last-minute penetration scan. It’s about evidence.
Assessors will look for coherent documentation showing how risks were identified, how decisions were made, how requirements were addressed, and how behaviour was verified. The documented story must match the real product.
Again, this is where EN 18031-1 provides a shared language between engineers and assessors.
Why teams should care now, not in 2027
2027 feels far away. In reality, products starting development today may still be shipping then.
Architectural decisions around updates, key management, and interfaces are expensive to undo later. Documentation written after the fact is always weaker than documentation created during design.
The CRA doesn’t reward last-minute compliance. It rewards teams that treat cybersecurity like EMC or safety: engineered in early, considered explicitly, and verified systematically.
The Ignys perspective
At Ignys, we see the Cyber Resilience Act not as a burden, but as a long-overdue correction.
Good cybersecurity engineering looks a lot like good engineering full stop: understanding how your system behaves, making deliberate choices, reducing unnecessary risk early, and being able to explain why.
The CRA doesn’t demand perfection. It demands intent, evidence, and ownership.
And if EN 18031-1 feels daunting, that’s exactly why we’ve turned it into something engineers can actually use, without becoming standards experts overnight.
EN 18031-1: the Ignys idiot’s guide
Because compliance works best when it’s understood early, not explained late.
Still unsure? Contact us now.