Ethics, IHL, and Human Rights


SEIUM

At SEIUM, technology is never neutral: it is designed, tested, and transferred within an explicit ethical, legal, and human rights framework. This policy sets out what we do, how we do it, and what we do not do when working in sensitive areas—including defence in an academic context, civil security, autonomy, and dual use—to ensure that teaching and research are non-operational, socially responsible, and compliant with International Humanitarian Law (IHL) and international human rights standards.


Purpose, scope, and institutional positioning

At SEIUM, we do not only teach engineering: we build it alongside industry and regulators, in laboratories that can be accredited and to real-world standards. Working here means doing so within an explicit ethical, legal, and human rights framework.

Purpose

To ensure that all academic, research, and third-party engagement activities comply with International Humanitarian Law (IHL), human rights, applicable legislation, and internal policies; to minimise the risk of misuse; and to protect individuals, communities, and the public interest.


Scope

This policy applies to students, faculty, technical staff, administrative staff, adjuncts, fellows, guests, contractors, and partners (companies, centres, and public authorities). It covers content, data, software, equipment, testing, publications, and technology transfer.


We do not teach or develop operational tactics, doctrine, or TTPs (tactics, techniques, and procedures); we focus on engineering, safety, certification, ethics, and validation.


We exclude the design, optimisation, or instruction of offensive capability, lethality, weapons guidance, the exploitation of vulnerabilities outside legal frameworks, or the deliberate circumvention of regulatory safeguards.

Our framework of reference comprises the principles of distinction, proportionality, and precaution under IHL; the UN Guiding Principles on Business and Human Rights; OECD due diligence guidance; and applicable national and international regulations.


Guiding principles

First, do no harm:

Preventive assessment of human, social, and environmental risks.

Legality and legitimacy

Regulatory compliance plus explicit ethical justification (“not everything legal is legitimate”).

Human-in-the-loop

Meaningful human oversight in autonomy and decision-making systems.

Transparency and traceability

Documented decisions; auditable by internal committees and, where applicable, external bodies.

Privacy and dignity

Data minimisation, privacy by design, and protection of vulnerable groups.

Fairness and non-discrimination

Assessment and mitigation of bias in data and models; equitable treatment of individuals and groups.

Proportionality and minimisation

The minimum necessary for a legitimate academic/scientific purpose.

Accountability

Clear responsibilities, whistleblowing channels, and corrective measures.

Activity Classification

Ethical Review Workflow (Lifecycle)

Red Lines (Non-negotiable)

• Design/optimisation of lethal effects or weapons guidance.

• Exploitation of vulnerabilities (cyber/OT) outside legally authorised programmes and without responsible coordination.

• Re-identification of individuals, or use of sensitive datasets without a legal basis/consent and enhanced safeguards.

• Deliberate circumvention of regulatory safeguards (safety/certification).

• Live training or realistic simulation of operational tactics intended for use in conflict.
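As an illustration only (not part of the policy, and all names hypothetical), the first triage step of a red-line check could be sketched as a keyword-based pre-screen that escalates matching proposals to the ethics committee. Keyword matching is a first pass, never a decision; a real workflow relies on committee review.

```python
# Hypothetical pre-screening helper: flags proposals that may touch a
# red line so they are escalated for human ethical review.
RED_LINE_TERMS = {
    "lethal effects", "weapons guidance", "vulnerability exploitation",
    "re-identification", "circumvention of safeguards", "operational tactics",
}

def triage(proposal_text: str) -> dict:
    """Return the red-line terms found and the resulting routing."""
    text = proposal_text.lower()
    hits = sorted(t for t in RED_LINE_TERMS if t in text)
    route = "escalate_to_committee" if hits else "standard_review"
    return {"hits": hits, "route": route}

print(triage("Study of weapons guidance algorithms"))
# routes to escalate_to_committee because "weapons guidance" matches
```

A pre-screen like this can only widen the funnel of cases that reach reviewers; it must never be used to clear a proposal automatically.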

Dual-Use and Autonomy Management

• Capability reduction: publish principles and aggregated results, not critical parameters that could enable offensive uses.

• Synthetic/anonymised data by default: if real-world data is used, require a DPIA/HRIA, data minimisation, and data contracts.

• Responsible autonomy: human-in-the-loop/on-the-loop, operating limits, fail-safes, safety cases, and assurance cases.

• Defensive cybersecurity: focus on hardening, detection, response, and resilience; no offensive development.
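As a minimal sketch (hypothetical names and limits, not SEIUM code), the human-in-the-loop pattern above can be made concrete: an autonomous command is executed only with explicit human approval and within a pre-approved operating limit, and the fail-safe default is to halt.

```python
from dataclasses import dataclass

@dataclass
class Command:
    speed: float  # requested speed, m/s

SPEED_LIMIT = 2.0  # pre-approved operating limit (assumed value)

def execute(cmd: Command, human_approved: bool) -> str:
    # Fail-safe default: without meaningful human approval, do nothing.
    if not human_approved:
        return "halt (awaiting human approval)"
    # Operating limits are enforced even after approval.
    if cmd.speed > SPEED_LIMIT:
        return "halt (limit exceeded)"
    return f"execute at {cmd.speed} m/s"

print(execute(Command(speed=1.5), human_approved=True))   # execute at 1.5 m/s
print(execute(Command(speed=3.0), human_approved=True))   # halt (limit exceeded)
print(execute(Command(speed=1.5), human_approved=False))  # halt (awaiting human approval)
```

Note the ordering: the safe default comes first, and approval does not bypass the operating limit, so both safeguards must pass independently.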

Engagement with Partners (Contracts and MoUs)

Mandatory Training

• Principles, borderline cases, warning signs, and red flags.

• DPIA, anonymisation, bias, explainability, and ARCO/DSAR rights.

• Classification, counterparty screening, licensing, and workflows.

• Safe procedures, substances/equipment, and incident response.

• Best practice, secure-by-design, and incident response.

Important: 100% of staff and students participating in sensitive projects must keep their training up to date; non-compliance results in suspension of access to resources and repositories.

Publication, Communication, and Open Science

• Preference for openness where it does not compromise people, safety, or compliance.

• Synthetic data, aggregated results, and omission of critical parameters.

• Purpose, limitations, intended use, out-of-scope uses, and residual risks.

• Avoid overclaiming; contextualise risks and safeguards.
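The "aggregated results" practice can be illustrated with a small hypothetical helper that releases only group-level means and suppresses groups below a minimum size, a common small-cell suppression rule. The threshold and names are assumptions for illustration, not policy values.

```python
from collections import defaultdict

MIN_GROUP_SIZE = 5  # suppression threshold (assumed value)

def aggregate_for_release(records: list[tuple[str, float]]) -> dict[str, float]:
    """Group (category, value) records; release the mean only for groups
    with at least MIN_GROUP_SIZE members, suppressing the rest."""
    groups: dict[str, list[float]] = defaultdict(list)
    for category, value in records:
        groups[category].append(value)
    return {c: sum(v) / len(v) for c, v in groups.items() if len(v) >= MIN_GROUP_SIZE}

data = [("A", 1.0)] * 6 + [("B", 9.0)] * 2  # group B has too few members
print(aggregate_for_release(data))  # {'A': 1.0} — group B is suppressed
```

Suppressing small cells reduces the risk that a published aggregate can be traced back to an individual record, complementing the synthetic-data and parameter-omission practices above.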


At SEIUM, we assume that technical excellence without responsibility is not excellence. For that reason, all teaching and research are governed by a robust framework of ethics, IHL, and human rights, with clear red lines, independent committees, auditable processes, and a "stop the line" culture. Our commitment is to educate and to transfer engineering that is safe, lawful, and useful to society, from the road to orbit, without crossing the limits that protect life, dignity, and the rule of law.

Incidents, Reports, and Sanctions

Metrics and Transparency

Illustrative Cases (Practical Guide)

Operational Annexes (Available on the Internal Portal)
