From Code to Compliance – Part 2: Lessons Learned

Tobias Deiminger, Florian Kauer | 10.06.25 | general

This is the second post in a two-part blog series. In part 1 we talked about the relationship between IEC 62443 and the CRA, and how we approached technical documentation. This second part highlights some lessons learned from implementing the standard that we think are worth sharing.

Reading IEC 62443 as a software developer might leave you with some question marks. It's common to read standards carefully and repeatedly until they make sense, but some of these question marks just refuse to go away. IEC definitely did a good job of covering security aspects from the OT world in breadth. The remaining difficulties probably stem from the following:

  • There are almost no public success stories on the internet or in journals that give insight into how others interpreted and mapped requirements from IEC 62443 to executable code. Hopefully this blog contributes a little to improving the situation.
  • The standard is written for products that get readily deployed into a known OT system. The less you know about asset owners' systems and workflows, the harder it becomes to decide on an implementation for a particular requirement. The model really doesn't fit well for generic software like libraries or the operating system core.
  • The standard uses very generic language and rarely maps terms to existing technologies. This is intentional, so that the same level of security can be applied to a wide range of technologies.
  • The standard sometimes refers to commonly accepted practices and guidelines. In 8 of 9 cases it gives at least exemplary references (“may include the NIST SP 800-92”, “guidelines such as OWASP”, “such as SIEM”, ...), but in no case is there an authoritative set of practices.

To sort this out, we put the generic IGLOS software stack in scope of IEC 62443-4-2 by developing an OT application “Secure Beacon” on top. We surveyed journals, other standards and community guidelines for the current state of the art. Proposals were continuously discussed with TÜV SÜD. Here's a rather arbitrary list of lessons learned.

Lessons learned

Lesson 1: IEC 62443-4-2 is neither a guideline nor a hardening catalogue. Such catalogues exist and would tell you, for example, what to write into /etc/ssh/sshd_config [1]. IEC 62443-4-2, on the other hand, is far from that level of detail. You can't just walk through it and implement the requirements one by one. Instead, standard requirements have a many-to-many relation to product requirements, and only the product requirements let you determine solutions. In consequence, standard requirements are not directly testable – only product requirements are. See, for example, the Secure Boot feature of our implementation.

Secure Boot needs hardware support and begins with platform-specific firmware. Since we built on an NXP i.MX8 SoC, the first stage is HABv4 boot. It loads public keys from the program image and first verifies the keys against a SHA-256 hash the vendor (we) had previously burnt into hardware. The now-verified public keys are used to verify the next-stage loader, which is U-Boot. U-Boot luckily can act as UEFI firmware. Its UEFI boot services use certificates from the UEFI signature database to authenticate the next stages: EFI Boot Guard (for A/B redundancy) and the Linux kernel with EFI stub. The remaining Secure Boot stages can rely on the UEFI LoadImage secure boot service and become platform-independent, which allows reuse in future projects. As one of its last actions, Secure Boot verifies the root hash of the dm-verity protected vendor partition. This is the critical link that makes the vendor partition, where most application program data resides, tamper-resistant.
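The dm-verity root hash mentioned above is the root of a hash tree (Merkle tree) over the partition's blocks, so a single verified hash pins down the whole partition content. A toy sketch of that idea – deliberately simplified, not the real dm-verity on-disk format, which additionally salts hashes and stores the tree in a fixed layout:

```python
import hashlib

BLOCK_SIZE = 4096  # dm-verity's default data block size

def merkle_root(data: bytes, block_size: int = BLOCK_SIZE) -> bytes:
    """Compute a simplified Merkle root over fixed-size blocks.

    Illustration only: real dm-verity uses salted hashes and a fixed
    on-disk hash-tree layout (see veritysetup/cryptsetup).
    """
    # Leaf level: hash every (zero-padded) data block individually.
    level = [
        hashlib.sha256(data[i:i + block_size].ljust(block_size, b"\0")).digest()
        for i in range(0, max(len(data), 1), block_size)
    ]
    # Repeatedly hash pairs of digests until a single root remains.
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [
            hashlib.sha256(level[i] + level[i + 1]).digest()
            for i in range(0, len(level), 2)
        ]
    return level[0]

image = b"vendor partition contents" * 1000
root = merkle_root(image)
# Any single changed byte anywhere in the image changes the root hash:
assert merkle_root(image[:-1] + b"X") != root
```

Because only this one root hash needs to be covered by the signed boot chain, verifying it is enough to make the entire read-only vendor partition tamper-evident.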

Now how does this relate to IEC 62443-4-2?

  • CR 1.5 Authenticator management is related to Secure Boot, since Secure Boot helps protect the integrity and authenticity of vendor-provided MAC profiles (e.g. AppArmor, D-Bus), which in turn enforce access control to authenticator data in the file system.
  • CR 1.8 Public key infrastructure certificates and CR 1.9 Strength of public key-based authentication are related to Secure Boot, since HABv4 and UEFI rely on certificates and signatures.
  • CR 3.2 Protection from malicious code is related, since Secure Boot prevents execution of unauthenticated code during boot stage. It's a valid assumption that software authenticated by the vendor is not inherently malicious if a secure software development process is followed.
  • CR 3.4 Software and information integrity is related since Secure Boot protects integrity and authenticity of bootloader, kernel and data on the read only vendor partition.
  • CR 3.6 Deterministic output, since Secure Boot can fail and thus we need to define the deterministic output in case of failure.
  • CR 3.12 Provisioning product supplier roots of trust, since the SHA-256 value burnt to OTP for HABv4 is a supplier provided root of trust.
  • CR 3.14 and CR 3.14 (1) Integrity and authenticity of the boot process. Here's where the standard actually requests the use of secure boot.

Conversely, the single standard requirement CR 1.9 also mapped to several different product features, like passkey login, software update or the embedded web server. Now we have a many-to-many relation. An excellent way to describe such relations is a compliance matrix [2]. To really benefit from it, we had to transform the IEC 62443-4-2 requirements into StrictDoc syntax. An export of all compliance matrix statements was eventually handed to the certification body as the primary evidence.
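To illustrate the idea, a product requirement in StrictDoc syntax can link back to the standard requirement through a parent relation. The UIDs and wording below are hypothetical, not our actual requirement set:

```
[REQUIREMENT]
UID: PROD-SB-001
TITLE: Secure Boot chain of trust
STATEMENT: >>>
The bootloader SHALL verify the signature of the next boot stage
before executing it, using keys anchored in the HABv4 OTP hash.
<<<
RELATIONS:
- TYPE: Parent
  VALUE: IEC-62443-4-2-CR-3.14
```

StrictDoc can then render the parent/child relations of all requirements as a traceability matrix, which is exactly the many-to-many view the certification body wants to see.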

Lesson 2: There’s no single blessed solution for how to meet a standard requirement. In fact, solutions can be technically very different, yet compliant.

Take user management as an example. Some applications have their own user database, say a local SQLite database or a text file. They don't use /etc/passwd. In this case, permissions are enforced by the application, not the operating system. Some want to integrate centrally managed users through, say, LDAP and sssd. Some applications want to reuse local operating system users. All of the above could be implemented in an IEC 62443-4-2 compliant way. The standard doesn't care *how* you enforce access control. Our proposed reference implementation fits well for embedded systems. A common observation, however, is that embedded devices usually do not expose display managers like sddm or shell logins. Instead they have restricted user interfaces constrained to the device's use case. With that in mind, our reference application is built on:

  • A web UI user that is a Linux user (i.e. it maps to a uid and gid). When the user logs in at the web interface, a Linux backend session for that user is internally spawned and reused via systemd. The benefit of this approach is that OS policies and access control mechanisms can be reused, and system connectors like sssd could be plugged in if required.
  • The user can authenticate passwordless with FIDO2 passkeys, which means the traditional UNIX user database is not enough. We have to augment it with WebAuthn-related data.
  • IEC 62443-4-2 doesn't mention passkeys explicitly. It either talks about "authenticators" in general or has requirements for passwords specifically (e.g. CR 1.7 Strength of password-based authentication).
  • There's application-specific role-based access. Roles are implemented by mapping them to Linux groups and D-Bus policies. Our reference application defines a fixed set of available roles "Admin", "Operator", "Observer", which in turn are associated with a fixed set of mostly non-overlapping permissions.
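The core of such a role model is little more than a lookup table plus a group-membership check. A minimal sketch, where the group names, permission names and the `may` helper are our own illustration (in the real system the group list would come from the OS, e.g. via `os.getgrouplist`, and enforcement additionally happens in D-Bus policies):

```python
# Map application roles to Linux groups and coarse permissions.
# Group and permission names here are illustrative placeholders.
ROLE_GROUPS = {
    "Admin": "beacon-admin",
    "Operator": "beacon-operator",
    "Observer": "beacon-observer",
}
ROLE_PERMISSIONS = {
    "Admin": {"configure", "update", "view"},
    "Operator": {"operate", "view"},
    "Observer": {"view"},
}

def roles_of(user_groups: set) -> set:
    """Derive application roles from the user's Linux group memberships."""
    return {role for role, group in ROLE_GROUPS.items() if group in user_groups}

def may(user_groups: set, permission: str) -> bool:
    """Check whether any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS[r] for r in roles_of(user_groups))

# An operator may operate the device but not reconfigure it:
assert may({"beacon-operator", "audio"}, "operate")
assert not may({"beacon-operator"}, "configure")
```

Keeping the mapping this explicit makes it easy to quote in the compliance matrix as evidence for the role-based access control requirements.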


Lesson 3: Many requirements are best addressed by integrating into another system. This means implementing only adapters or proxies that use external services, rather than implementing the full functionality locally. As a precondition, the presence of the external service must be stated in the requirements document. Such requirements can go into a section “Assumptions”. Last but not least, the user documentation needs to explain how a system integrator has to perform the configuration to provide the security capability.

An example in “Secure Beacon” is audit logging. In particular, we meet CR 2.9 Audit storage capacity, CR 3.9 Protection of audit information and CR 6.1 Audit log accessibility by integration as follows.

Linux provides the audit framework, and the operating system produces lots of very detailed audit events in a format compatible with what's required by IEC 62443-4-2. There's also libaudit, which can be used to log custom events from a user space application. Usually the records would be sent from the kernel through netlink to auditd and get written to disk. For embedded devices, however, this means complexity, because it implies reserved writable storage and a built-in audit reporting UI. We can instead just forward audit events to a remote server and not store, process or filter them locally. Auditd provides such a forwarding feature out of the box through the audisp-remote plugin. It supports Kerberos 5 for authentication and encryption, which fits well for CR 3.9 and CR 4.1 (protection of confidentiality in transit). Finally, we write an assumption into the requirements specification that essentially says we expect the customer to provide a remote Linux server to receive, process, and filter our forwarded audit events. With this, the requirement is met by integration.
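On a typical system the forwarding boils down to two small configuration files. A sketch of the relevant settings – the server name is a placeholder, and exact option names and file paths vary between audit releases, so consult audisp-remote.conf(5) and the defaults shipped with your distribution:

```ini
# /etc/audit/plugins.d/au-remote.conf -- hand auditd events to the plugin
active = yes
direction = out
path = /sbin/audisp-remote
type = always
format = string

# /etc/audit/audisp-remote.conf -- where to forward the events
remote_server = audit.example.com
port = 60
enable_krb5 = yes
krb5_principal = auditd
```

With `enable_krb5` set, audisp-remote authenticates and encrypts the connection via Kerberos 5, which is what lets this setup address the confidentiality and protection requirements by integration.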

 

Lesson 4: One has to take care which identifiers are used inside the SBOM to name a component. CVE scanners may miss vulnerabilities (or fail to find the most accurate ones) just because of a name mismatch. An arbitrary example: the same component is named flask-cors on PyPI, corydolphin/flask-cors at cve.org, python-flask-cors in the Debian source package and python3-flask-cors in the Debian binary package. The potential for mismatch should be obvious.

Many scanners use Google’s OSV as data provider. For Debian, OSV contains entries identified by the Debian source package name [3] – not the binary package name, and not the upstream name.

Try it yourself. This query finds CVEs by the source package name:

curl -d '{"package": {"name": "python-flask-cors", "ecosystem": "Debian"}}' "https://api.osv.dev/v1/query"

but trying the same with the binary package name won’t yield any CVEs:

curl -d '{"package": {"name": "python3-flask-cors", "ecosystem": "Debian"}}' "https://api.osv.dev/v1/query"

That’s because OSV imports Debian Security Advisories to populate “ecosystem = Debian” entries, which in turn are based on source package names. Therefore, software identifiers such as PURLs [4] or CPEs [5] are needed for unique matches.
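The OSV query API also accepts purl identifiers in addition to ecosystem-specific names. A small offline sketch of building both payload variants (the helper functions are ours; actually sending the request with an HTTP client is left out):

```python
import json

def osv_query_by_name(name: str, ecosystem: str) -> str:
    """Build an OSV /v1/query payload for an ecosystem-specific name."""
    return json.dumps({"package": {"name": name, "ecosystem": ecosystem}})

def osv_query_by_purl(purl: str) -> str:
    """Build an OSV /v1/query payload addressing the package by purl."""
    return json.dumps({"package": {"purl": purl}})

# The Debian *source* package name -- this is what OSV's Debian ecosystem keys on:
print(osv_query_by_name("python-flask-cors", "Debian"))
# The same component addressed unambiguously via a package URL:
print(osv_query_by_purl("pkg:deb/debian/python-flask-cors"))
```

Generating SBOM entries with purls from the start avoids having to guess later which of the many component names a scanner's data source actually uses.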

 

Lesson 5: Products are usually composed of various external software components. In our case, this includes for example the Linux kernel, various core libraries and tools (like glibc) and various components to fulfill the product functionality (like the apache2 webserver). For IEC 62443 it is not necessary to have all these software components themselves explicitly certified. However, it is important to “ensure that supply chain security is addressed for equivalent security updates, security deployment guides and the supplier’s ability to respond if a vulnerability is discovered.” (Quote from SM-9 of IEC 62443-4-1).

On the one hand, fulfilling this can seem more difficult for open-source software than for closed-source software, since most open-source projects won’t provide any contractually binding guarantees about timely responses to security vulnerabilities. On the other hand, at least renowned open-source projects like the Linux kernel or OpenSSH actually have elaborate and well-trained processes for product security, often exceeding those of commercial providers. For some popular projects, professional security audits are publicly available [6]. And since the code is available, the product supplier always has the chance to fix a potential vulnerability themselves or even step in as maintainer. This is not an option for closed-source software.

But, of course, for the huge number of open-source projects out in the wild, including even lots of small student projects, proper maintenance is not a given. Therefore, for all external software components that are integrated into a product, an assessment is paramount. A first indication can be the availability of a software package in Debian stable. Debian has established policies to handle security problems and regularly publishes security advisories. Of course, this first indication still needs to be supported by deeper analysis for example by following the “Concise Guide for Evaluating Open Source Software” [7] by the Open Source Security Foundation (OpenSSF), including checks such as “Are there recent releases or announcements from its maintainer(s)?” or “When reviewing its source code, is there evidence in the code that the developers were trying to develop secure software (such as rigorous input validation of untrusted input and the use of parameterized statements)?”.
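Such an assessment becomes repeatable when the quick indicators are encoded as data. A deliberately simplified sketch – the criteria, the threshold and the `Component` fields are our own illustration distilled from the kinds of questions the OpenSSF guide asks, not part of the guide itself:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    in_debian_stable: bool   # covered by Debian's security support?
    recent_release: bool     # recent releases/announcements from maintainers?
    security_policy: bool    # documented way to report vulnerabilities?
    security_critical: bool  # does it process untrusted outside input?

def needs_deeper_review(c: Component) -> bool:
    """Flag components where quick indicators alone are not enough."""
    indicators = [c.in_debian_stable, c.recent_release, c.security_policy]
    # Security-critical components always get a deeper review;
    # others only when the quick indicators look weak.
    return c.security_critical or indicators.count(True) < 2

# A firewall component is always reviewed in depth, a well-maintained
# internal helper library may pass on the quick indicators:
assert needs_deeper_review(Component("firewalld", True, True, True, True))
assert not needs_deeper_review(Component("helperlib", True, True, False, False))
```

The point is not the specific threshold but that the assessment criteria, once written down like this, can be versioned and reapplied whenever a component is updated.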

The decision to include a given software component always needs to be made considering the overall risk for the product and its usage. Security-critical components, such as a firewall, call for more rigor than internal helper libraries that never process unvalidated input from outside.

Still, managing external components is a time-consuming process. Sharing this work and distributing it across different products is one of the core pillars of IGLOS. By using IGLOS, one can reuse the management of already-included components while contributing to grow the overall pool of assessed software components.

Summary

We showed that having a reference application helps a lot to put a generic software stack into the scope of IEC 62443-4-2. We discussed how working with the standard is fundamentally different from working with a hardening catalogue, and that it leaves developers with a lot of freedom in the solution space. As we have demonstrated, developing an IEC 62443-4-2 certified component doesn’t require proprietary software to implement the security requirements. Quite the contrary: since systems based on Linux and the Debian ecosystem have been widely deployed in the core of the Internet for decades and are under constant attack, a whole bunch of battle-tested software (such as the Linux kernel or OpenSSH) is available that often even surpasses the requirements of IEC 62443-4-2 from a technological standpoint. Our Industrial Grade Linux Operating System (IGLOS) provides a secure baseline for Linux-based cyber-resilient systems. With this, the users of IGLOS can focus on their own application while leveraging IGLOS for their security needs and the fulfillment of the Cyber Resilience Act.

 

Authors:

Tobias Deiminger is a software engineer at Linutronix with a background in communication systems, and a contributor to several open-source projects. He recently worked on implementing the IEC 62443 security standard at Linutronix.

Florian Kauer is a software engineer specialized in embedded systems for reliable and secure networks. During his time in academia, he dealt with low-latency audio transmission and scalability of wireless industrial control networks. After developing cloud systems for life-science applications, he joined Linutronix to develop open-source networking and security solutions.

 

References:

[1] https://stigviewer.com/stigs/canonical_ubuntu_22.04_lts/2024-11-25/finding/V-260526

[2] https://strictdoc.readthedocs.io/en/stable/stable/docs/strictdoc_01_user_guide.html#SDOC_UG_GRAMMAR_RELATIONS_PARENT_VS_CHILD

[3] https://google.github.io/osv.dev/data/#converted-data

[4] https://github.com/package-url/purl-spec

[5] https://nvd.nist.gov/products/cpe

[6] https://github.com/ossf/security-reviews/blob/main/Overview.md

[7] https://best.openssf.org/Concise-Guide-for-Evaluating-Open-Source-Software.html