Safety Approach
Safety as Structural Constraint

The nature of safety risk in Fluxion’s operating context
Fluxion operates at the intersection of three environments where the consequences of safety failures are not abstract. At port terminals, autonomous cargo handling equipment operates alongside human workers in unstructured outdoor environments — equipment that weighs hundreds of tons, moves at speed, and cannot exercise the situational judgment that an experienced stevedore brings to a congested yard. In industrial fabrication, cobots and automated welding systems share floorspace with workers on shift cycles, and the failure modes of physics-informed automation in high-temperature environments include burns, crush injuries, and toxic exposure. In clinical settings, ETOH’s AI-assisted decision support touches patient triage, diagnostic prioritization, and care pathway recommendations — environments where a miscalibrated inference has consequences that can be irreversible.
These are not risks to be managed by compliance programs alone. They require a foundational commitment to safety as a structural constraint on how Fluxion designs, deploys, and operates its systems — a constraint that takes precedence over deployment timelines, throughput targets, and cost reduction objectives when they are genuinely in conflict.
The safety hierarchy Fluxion applies
Fluxion applies a hierarchy of safety controls in the order below. We apply controls at the highest feasible level before relying on lower levels, and a lower-level control is never used as justification for forgoing a higher-level one.


Eliminate the hazard.
The preferred response to a safety risk is to redesign the system so the hazard does not exist. At a port terminal, this means designing yard automation layouts that create physical separation between autonomous vehicle paths and human-access zones — not relying on proximity sensors to stop equipment before it reaches a worker. In ETOH, it means designing clinical AI outputs that support rather than replace physician judgment on high-stakes decisions — not adding a warning label to an autonomous recommendation.
Engineer controls into the system.
Where elimination is not feasible, safety must be engineered into the physical and software architecture. Autonomous systems must have deterministic safe-state fallback behaviors that activate on sensor failure, communication loss, or anomaly detection — behaviors that do not depend on software logic executing correctly under the same conditions that caused the anomaly. Clinical AI systems must have explicit confidence thresholds below which outputs are flagged rather than acted on, and these thresholds must be set through clinical validation, not through product launch timelines.
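To make the fallback requirement concrete, the sketch below shows one way a deterministic safe-state decision could be structured: a pure function over a health snapshot, with no model inference, retries, or shared mutable state, so it behaves identically under the fault conditions that triggered it. All names and the timeout value are illustrative assumptions, not Fluxion-specified parameters.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()
    SAFE_STOP = auto()   # deterministic safe state: brake, de-energize, hold position

@dataclass(frozen=True)
class HealthSnapshot:
    # Illustrative channels only; a real system monitors many more.
    sensors_ok: bool
    last_heartbeat_ms: int   # time since last communications heartbeat
    anomaly_flag: bool

HEARTBEAT_TIMEOUT_MS = 200   # assumed budget for this sketch

def fallback_decision(h: HealthSnapshot) -> Action:
    """Pure function of the snapshot: sensor failure, communication
    loss, or anomaly detection each independently force the safe state."""
    if not h.sensors_ok:
        return Action.SAFE_STOP
    if h.last_heartbeat_ms > HEARTBEAT_TIMEOUT_MS:
        return Action.SAFE_STOP
    if h.anomaly_flag:
        return Action.SAFE_STOP
    return Action.CONTINUE
```

The design point is that the decision logic shares nothing with the software whose failure it guards against: any single degraded input is sufficient to reach the safe state.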


Maintain human override as inviolable.
No Fluxion-operated autonomous system — in ports, fabrication, or clinical environments — may be designed or configured to prevent or delay human override of its actions. This principle is unconditional. Operators and clinicians must be able to halt, redirect, or countermand any automated action in real time. The override mechanism is not a feature; it is a hard requirement of deployment authorization. Any system that requires disabling or bypassing human override to achieve its performance targets has a design flaw, not an acceptable tradeoff.
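One way to read "the override mechanism is not a feature" is architecturally: the override sits on the command path itself, outside the autonomy stack, which has no code path to clear it. The minimal sketch below illustrates that structure; class and method names are hypothetical, and a production system would implement this in hardware or a safety-rated controller rather than application code.

```python
import threading

class OverrideLatch:
    """Human override as a structural property of the command path.

    Once engaged, every automated command is replaced by a halt; only
    the human channel can engage or release the latch. The autonomy
    stack never holds a reference to engage()/release().
    """
    def __init__(self):
        self._engaged = threading.Event()

    def engage(self):       # wired to operator/clinician controls only
        self._engaged.set()

    def release(self):      # likewise reachable only by the human channel
        self._engaged.clear()

    def filter_command(self, command: str) -> str:
        # All automation output is routed through this supervisor;
        # there is no alternate path around it.
        return "HALT" if self._engaged.is_set() else command
```

Because the latch supervises rather than advises, achieving performance targets by bypassing it is not expressible in the architecture — which is the policy's intent.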
Protect through procedure and training.
Procedural controls — work instructions, access protocols, training requirements — are the last line of defense, not the first. Fluxion’s operations will rely on trained and certified personnel, and will maintain those certifications rigorously. But we will not design systems where correct procedure is the primary mechanism preventing a serious safety incident. If an operation is only safe when workers follow a procedure exactly every time, the system is not safe.


Clinical safety and the specific obligations of ETOH
Clinical AI presents a safety challenge that is qualitatively different from industrial automation. The failure modes are less visible: a biased triage model does not produce a visible crash, but it may systematically deprioritize patient populations in ways that are only detected through outcome data reviewed months later. A diagnostic support system that is well-calibrated on training data but poorly calibrated on the patient population it is actually deployed in will not announce its miscalibration — it will present outputs with the same apparent confidence regardless.
Fluxion’s position on ETOH’s clinical AI deployment is therefore more conservative than its position on industrial automation, where failure modes are more legible in real time. ETOH systems must complete prospective clinical validation in the deployment setting — not retrospective validation on held-out training data — before being used to influence care decisions in that setting. Validation must be conducted against outcomes that matter clinically, not against surrogate metrics that are easier to measure. Validation must include subgroup analysis across the patient populations that will actually use the system, with particular attention to populations that are underrepresented in training data.
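The subgroup analysis described above can be sketched as a simple comparison of mean predicted probability against observed event rate per subgroup — a large gap in any subgroup signals the silent miscalibration the preceding paragraph warns about. This is a minimal illustration, not ETOH's validation protocol; record layout and field names are assumptions.

```python
from collections import defaultdict

def subgroup_calibration(records):
    """records: iterable of (subgroup, predicted_prob, outcome) triples,
    where outcome is 0 or 1.

    Returns per-subgroup mean predicted probability, observed event
    rate, and their absolute gap. A well-calibrated model has a small
    gap in every subgroup, not just on average.
    """
    by_group = defaultdict(list)
    for group, p, y in records:
        by_group[group].append((p, y))
    report = {}
    for group, pairs in by_group.items():
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        rate = sum(y for _, y in pairs) / len(pairs)
        report[group] = {
            "n": len(pairs),                 # small n also flags underrepresentation
            "mean_predicted": mean_p,
            "observed_rate": rate,
            "gap": abs(mean_p - rate),
        }
    return report
```

In practice this would be run on prospective deployment-setting data with clinically meaningful outcomes, binned by predicted probability, and with uncertainty intervals for small subgroups; the sketch only shows the shape of the check.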
The obligation Fluxion accepts in deploying clinical AI is not merely to build a system that performs well on average. It is to understand where the system performs poorly, to disclose this to the clinical operators who rely on it, and to constrain deployment to the contexts where performance is adequate.
Safety data transparency
Fluxion will maintain transparent incident and near-miss reporting across all operating platforms, shared with relevant teams regardless of whether the incident has external disclosure implications. The incentive to underreport safety incidents — because they indicate system failures or create legal exposure — is real, and Fluxion will actively counter it by treating incident reports as valuable operational data rather than as failures to be managed. Personnel who identify and report safety concerns, including near-misses that did not result in harm, are contributing to the institutional learning Fluxion requires to improve its systems over time.

