Week 1 HW: Principles and Practices
Multimodal Assistive Communication System for Nonverbal Users
Proposed Bioengineering Application
I plan to develop a multimodal assistive communication system for nonverbal users that integrates speech, gestures, and biological signals such as muscle activity or eye movement. The goal of this system is to help individuals with communication disorders express themselves more accurately and autonomously by leveraging whichever input methods are most reliable for them. I chose this topic because it closely connects to my interests in Human-Computer Interaction and is a project that would benefit strongly from the concepts of Bioengineering.
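As a rough illustration of what "leveraging whichever input methods are most reliable" could mean in practice, here is a minimal Python sketch of reliability-weighted fusion across input channels. The channel names, weights, and function are hypothetical, not part of any existing device.

```python
def fuse_channels(predictions):
    """Combine per-channel intent predictions, weighting each channel's
    confidence by an estimated reliability score (0.0 to 1.0).

    predictions: list of (label, confidence, channel_reliability) tuples.
    Returns the label with the highest reliability-weighted score.
    """
    scores = {}
    for label, confidence, reliability in predictions:
        scores[label] = scores.get(label, 0.0) + confidence * reliability
    return max(scores, key=scores.get)

# Illustrative example: eye tracking is treated as more reliable here
# than a noisy muscle (EMG) channel, so its vote dominates.
result = fuse_channels([
    ("yes", 0.6, 0.9),  # eye-tracking channel
    ("no", 0.8, 0.3),   # EMG channel
])
# result == "yes" (0.54 weighted score vs. 0.24)
```

In a real system the reliability weights would have to be estimated per user and updated over time, which is exactly why the certification and calibration actions below matter.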
Ethical and Governance Goals
A few key goals this project could achieve are:
Protecting user autonomy and agency: the system should reflect the user’s true intent and should not override the user’s control over what is being communicated.
Non-maleficence and safety: minimizing harm that could arise from misinterpretation of biological signals or misuse in high-stakes environments such as hospitals.
Promoting equity and accessibility: ensuring that the system works across diverse bodies, abilities, and socioeconomic contexts.
These major goals could also be broken down into smaller sub-goals covering consent, bias-reduction safeguards, and more.
Governance Actions
1. Clinical Use Certification (Rule / Requirement)
The primary actors for this would be federal health or regulatory agencies, hospital procurement committees, and medical boards.
Purpose: Many assistive communication tools are currently released as consumer devices or research prototypes and are sometimes adopted informally in clinical settings without consistent validation. A formal clinical-use certification for such systems should therefore be required before they can be used in a high-stakes environment like a hospital ER or ICU. The certification would focus on safety, reliability, and transparency.
Design: A standard would be developed specifically for communication-assistive systems; it should include performance benchmarks and documented proof of functionality from simulated clinical testing. Federal regulators would define and enforce this standard, and hospitals would require certification for procurement.
Assumptions: This assumes that regulators can design a proportionate certification pathway that does not stall innovation, that vendors and researchers can afford validation costs, and that hospitals can enforce the procurement standards.
Risks: If certification is too burdensome, few vendors will pursue it. Slow regulatory timelines might push clinicians to use uncertified tools. Certification may also create a false sense of safety, leading institutions to neglect training.
2. Mandatory Informed Consent (Ethical requirement)
The primary actors for this would be hospitals, research institutions, and disability advocacy organizations.
Purpose: Consent practices for communication technologies vary widely, and some rely heavily on proxy consent that does not require observable user assent or confirmation. Standardized informed consent processes should be implemented to ensure that the system reflects the user’s intent.
Design: Institutions would adopt consent workflows adapted to user abilities, such as eye tracking or simple binary signals, along with runtime confirmation, cancellation, and correction mechanisms. Hospitals and other organizations would enforce these standards, and vendors would build compliant interfaces.
Assumptions: This is only possible if the user of the device can provide some reliable assent signal, and if institutions treat consent as an ongoing process rather than a one-time yes-or-no question (covered in the next governance action).
Risks: Consent standards may fail if users with highly variable signals cannot reliably confirm an output, which could restrict access to the technology.
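The runtime confirmation and cancellation mechanism described in this action could be sketched roughly as follows. The function name, return values, and fail-safe behavior are illustrative assumptions, not a prescribed interface.

```python
def confirm_output(candidate, get_assent, max_attempts=3):
    """Require explicit user assent before a candidate utterance is
    emitted. get_assent is a callable (e.g. wrapping an eye-tracking
    or binary-switch signal) returning True (confirm), False (cancel),
    or None (no clear signal detected).
    """
    for _ in range(max_attempts):
        assent = get_assent(candidate)
        if assent is True:
            return "confirmed"
        if assent is False:
            return "cancelled"
    # No reliable assent signal after several attempts: fail safe by
    # withholding the output rather than guessing the user's intent.
    return "withheld"
```

The key design choice here is that absence of a signal is treated as a refusal to emit, which matches the goal of never overriding user control.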
3. Safeguards for Ambiguity and Uncertainty (technical strategy)
The primary actors for this would be developers, research labs, and companies.
Purpose: Uncertainty is often hidden from users and clinicians; many assistive systems present signal interpretations as definitive outputs even when the underlying signals are noisy. Systems should explicitly manage uncertainty by requesting confirmation when the confidence of a decision is low.
Design: Systems would implement calibrated confidence metrics, adjustable thresholds based on context, and clear indicators of uncertainty in a decision. Developers would build these mechanisms, labs would validate the calibrations, and other organizations could support standardization.
Assumptions: Assumes that confidence estimates can be calibrated and that users and clinicians can interpret uncertainty information correctly.
Risks: Poor calibration can mislead users and cause bad decisions. Excessive uncertainty information can erode trust and slow communication.
4. Equity and Bias Auditing (standards)
Purpose: Assistive communication systems are often trained on narrow datasets, leading to uneven performance across users. We need to establish regular, standardized audits that evaluate system performance across diverse bodies, abilities, and contexts.
Design: Audits would use standardized test datasets and metrics to quantify a system’s performance across groups. Independent auditors could also conduct evaluations.
Assumptions: Assumes that diverse and representative datasets can be assembled responsibly and that vendors accept third-party evaluations.
Risks: Audits may still miss underrepresented groups or incentivize optimization for benchmarks rather than real-world performance.
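The core of such an audit can be sketched very simply: compute per-group accuracy and report the gap between the best- and worst-served groups. The group labels and metric choice here are illustrative assumptions; a real audit standard would specify both.

```python
def audit_by_group(records):
    """Compute per-group accuracy from (group, correct) records and
    the accuracy gap between the best- and worst-performing groups.

    records: iterable of (group_label, bool) pairs.
    Returns (accuracy_by_group, gap).
    """
    totals, correct = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (1 if ok else 0)
    accuracy = {g: correct[g] / totals[g] for g in totals}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

# Example: group "B" is served half as well as group "A" (gap = 0.5),
# which an audit standard could flag as failing an equity threshold.
acc, gap = audit_by_group([("A", True), ("A", True),
                           ("B", True), ("B", False)])
```

A reasonable audit rule would then be a maximum allowed gap, so that vendors cannot pass by optimizing only aggregate accuracy.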
Comparison Table of Governance Options
| Does the option: | Option 1 | Option 2 | Option 3 | Option 4 |
|---|---|---|---|---|
| Enhance Biosecurity | n/a | n/a | n/a | n/a |
| • By preventing incidents | n/a | n/a | n/a | n/a |
| • By helping respond | n/a | n/a | n/a | n/a |
| Foster Lab Safety | 2 | n/a | 1 | 2 |
| • By preventing incidents | 2 | n/a | 1 | 2 |
| • By helping respond | 2 | n/a | 1 | 2 |
| Protect the environment | n/a | n/a | n/a | n/a |
| • By preventing incidents | n/a | n/a | n/a | n/a |
| • By helping respond | n/a | n/a | n/a | n/a |
| Other considerations | | | | |
| • Minimizing costs and burdens to stakeholders | 3 | 2 | 2 | 3 |
| • Feasibility? | 2 | 1 | 2 | 2 |
| • Not impede research | 2 | 1 | 2 | 2 |
| • Promote constructive applications | 2 | 1 | 1 | 1 |
Prioritization
I would prioritize the Clinical Use Certification and the Safeguards for Ambiguity and Uncertainty as the most important governance actions for guiding the development and deployment of the multimodal assistive communication system. I believe that achieving these two goals would most directly address the risks of harm that arise when implementing these systems into a medical context.
Certification may introduce additional costs and slower deployment, but that is a price worth paying to guarantee a safer product that enables better communication, more efficient use of medical resources, and the prevention of serious medical, legal, or ethical consequences.
Safeguards for ambiguity and uncertainty are equally important, as they address the confusion that can be caused by misinterpreting biological signals. This governance action reduces the probability of miscommunication, prevents systems from overstepping user intent, and helps maintain trust and safety.
The other governance goals are important complementary measures, but without strong safety measures and uncertainty-aware system design, consent mechanisms and audits alone may be insufficient to prevent harm.