Subsections of Eduardo Brito Alarcon — HTGAA Spring 2026

Homework

Weekly homework submissions:


Week 1 HW: Principles and Practices


Sorry for the bad format; I had to hurry to a medical appointment, and I started this as an OnlyOffice file.

  1. Application of Bioengineering: an autonomous cell-free expression platform for viral supra-molecular assemblies. The system relies on automated synthesis and blockchain technology. Its purpose is to express and study permutations of complex viral proteins in vitro in 24/7 operation, generating massive datasets to train AI models.

Why it is important or risky: Autonomous cell-free expression of viral supra-molecular assemblies pushes forward virology research and protein design; however, it could in principle be used to explore permutations with biological risk (toxins, pathogens) in an automated, distributed system.

  2. Objectives of governance. Main objective: Promote beneficial research (prevention of harm) and equitable, beneficial innovation in the use of autonomous expression platforms. Sub-goal 1 (Biosecurity/Biosafety): Prevent the synthesis of known or predicted high-risk viral protein variants through real-time, integrated control mechanisms. Sub-goal 2 (Transparency & Equity): Ensure the ecosystem for this powerful technology remains auditable, inclusive, and resistant to monopolization, preventing a scenario where only opaque or malicious actors control its advanced capabilities.

  3. Proposed Governance Actions (Evaluated in Four Aspects)

Option 1: Mandatory Federal Licensing for High-Level Autonomous Systems (Regulatory Action)

  • Purpose: Shift from institutional biosafety committees (IBCs) overseeing static research to a federal license requirement for operating any autonomous platform capable of iterative synthesis and testing without human-performed critical checks. Analogous to Select Agent regulations, but focused on the level of autonomy.
  • Design: Agencies (e.g. CDC, FBI WMD Directorate) define autonomy thresholds. Actors (academic labs, companies) must apply for a license, demonstrating integrated containment (physical, like a ‘robot-in-a-vault’; and digital, like sequence-screening software). Periodic inspections are required.
  • Assumptions: A clear, technical threshold for ‘high-risk autonomy’ can be defined; regulators can build relevant technical assessment capacity; the licensing burden won’t stifle beneficial, time-sensitive research.
  • Risks of Failure & ‘Success’: Failure: Drives development underground or to jurisdictions without such rules, creating an uncontrolled market. ‘Success’: Concentrates the technology in large, well-funded institutions, stifling distributed innovation, creating single points of institutional failure, and exacerbating global inequities in access.

Option 2: Open-Source ‘Pre-Synthesis Peer Verification’ Standard (Community-Led Action)

  • Purpose: Create a cultural and technical standard where any experimental run plan must be logged to an open-source platform for automated ‘peer’ verification (via risk-scanning algorithms) before the robotic system initiates synthesis.
  • Design: An international consortium (e.g. the Engineering Biology Research Consortium, EBRC) develops the standard and a neutral software platform. Actors (researchers) adopt it voluntarily to gain credibility. Publishers and funders mandate its use for related publications and grants.
  • Assumptions: The scientific community values transparency and will adopt it for legitimacy; verification algorithms will be sufficiently accurate to catch risks without excessive false positives that block legitimate research.
  • Risks of Failure & ‘Success’: Failure: Low adoption, creating a two-tier system where only ‘good actors’ are transparent. ‘Success’: The centralized verification algorithm becomes a single point of failure or a tool for imposing arbitrary, non-consensus-based research restrictions.
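The pre-synthesis verification gate described in Option 2 can be sketched in a few lines. This is an illustrative toy only: the watchlist motifs, the fraction-of-motifs scoring rule, and the 0.7 cutoff are all hypothetical stand-ins for a real screening pipeline built on curated sequence databases.

```python
# Minimal sketch of a pre-synthesis verification gate (illustrative only).
# The watchlist, scoring rule, and threshold below are hypothetical.

RISK_THRESHOLD = 0.7  # hypothetical cutoff above which a run is blocked

# Toy "risk-scanning algorithm": flags sequences containing motifs from a
# hypothetical watchlist; a real screen would query curated databases.
WATCHLIST_MOTIFS = {"ATGCGT", "GGCCAA"}

def risk_score(sequence: str) -> float:
    """Fraction of watchlist motifs found in the sequence (toy metric)."""
    hits = sum(1 for motif in WATCHLIST_MOTIFS if motif in sequence)
    return hits / len(WATCHLIST_MOTIFS)

def verify_run_plan(run_plan: dict, audit_log: list) -> bool:
    """Log the run plan, score it, and approve only below the threshold."""
    score = risk_score(run_plan["sequence"])
    entry = {"run_id": run_plan["run_id"], "score": score,
             "approved": score < RISK_THRESHOLD}
    audit_log.append(entry)  # open, append-only record for peer review
    return entry["approved"]

log: list = []
safe_plan = {"run_id": "run-001", "sequence": "TTTTTTTTTT"}
risky_plan = {"run_id": "run-002", "sequence": "ATGCGTGGCCAA"}
print(verify_run_plan(safe_plan, log))   # True: no watchlist motifs
print(verify_run_plan(risky_plan, log))  # False: both motifs present
```

The design point is that every plan is logged before it is scored, so the audit trail exists even for runs that are ultimately blocked.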

Option 3: International Subsidized Biosecurity Auditing Corps (Multilateral Action)

  • Purpose: Mitigate the ‘security vs. access’ trade-off by creating a globally accessible corps of expert auditors specializing in autonomous bio-platforms. They provide subsidized or pro-bono biosecurity audits to any lab implementing such systems.
  • Design: Led by a body like the WHO or the International Science Council and funded by member states and philanthropic foundations. Actors (labs globally) can request audits to improve protocols and demonstrate commitment to safety, gaining a ‘seal of approval’.
  • Assumptions: Sufficient and sustained funding will be available; labs will see value in voluntary audits; auditors will maintain neutrality and protect intellectual property.
  • Risks of Failure & ‘Success’: Failure: Seen as intrusive bureaucracy, leading to low participation. ‘Success’: May create a false sense of security (‘we’re audited, so we are safe’), potentially reducing daily vigilance and nuanced risk assessment by researchers.

Scoring: 1 = Strongly Positive/Best for this goal; 2 = Neutral/Mixed; 3 = Negative/Poor for this goal

| Does the option: | Op1: Mandatory Licence | Op2: Open-Source Standard | Op3: Audit Corps |
| --- | --- | --- | --- |
| **Enhance Biosecurity/Biosafety** | | | |
| By preventing incidents | 1 | 2 | 2 |
| By helping respond (traceability, containment) | 1 | 2 | 1 |
| **Promote Transparency & Equity** | | | |
| By preventing monopolies/opacity | 3 | 1 | 2 |
| By ensuring inclusive access | 3 | 1 | 1 |
| **Other Key Considerations** | | | |
| Minimize burdens on legitimate research | 3 | 2 | 2 |
| Feasibility & Political Viability | 2 | 1 | 1 |
| Not impede beneficial research | 3 | 1 | 1 |
  4. Prioritized Recommendation & Rationale

To: The Director of the NIH Office of Science Policy and the Leadership of the Engineering Biology Research Consortium (EBRC)

    I recommend prioritizing the development and implementation of Option 2 (Open-Source Verification Standard) as the core framework, actively supported and scaled by Option 3 (International Audit Corps).

    Why this combination? Option 2 builds the essential technical and cultural infrastructure for transparency from within the research community itself. It is the most viable path to creating a ‘safety-by-design’ norm that is agile, embraced by users, and keeps pace with innovation. However, to ensure global equity and robust adoption, it needs the support mechanism of Option 3. The Audit Corps would provide hands-on assistance for labs (especially in under-resourced settings) to implement the standard effectively, validate their systems, and build trust. This combination fosters a globally inclusive safety culture rather than a restrictive gatekeeping regime.

Trade-off Acknowledged: We are explicitly choosing broad adoption and an embedded safety culture (Options 2+3) over strict, centralized control (Option 1). We accept a marginally higher theoretical risk of a bad actor avoiding the system, in exchange for bringing the vast majority of the global research community into a transparent, collaborative, and peer-verified operating environment. This makes anomalous, potentially dangerous activity more detectable.

Critical Assumption & Uncertainty: This model assumes a majority of researchers and institutions are inherently motivated by safety and reputation. A key uncertainty is its resilience against state-level or well-funded corporate actors pursuing dual-use or clandestine applications outside the community-based framework.

  5. Personal Ethical Reflection

Novel Ethical Concern that Arose: Developing this governance matrix forced me to confront the ethical delegation of decision-making. The autonomous system transfers operational authority from the human researcher to algorithms (for sequence verification) and automated protocols (for execution). This creates a ‘moral attenuation’ problem. If a harmful sequence is erroneously verified and synthesized, who is responsible? The programmer of the screening algorithm, the manufacturer of the robot, or the principal investigator who trusted the ‘black box’? This complicates traditional notions of accountability in research ethics.

Proposed Governance Action to Address this Issue: Any governance standard for autonomous systems (like Option 2) should mandate a ‘Human-in-the-Loop for Critical Thresholds’ protocol. For experiments crossing a pre-defined threshold of complexity or predicted risk, the system must pause, present the human operator with a digestible risk assessment, and require explicit manual approval for that specific stage. This maintains human moral agency and final judgment without sacrificing the efficiency of automation for routine tasks. This should be a key criterion for audits under Option 3.
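The human-in-the-loop protocol reduces to a simple decision rule, sketched below. Everything here is hypothetical for illustration: the 0.5 threshold and the action labels are invented. The point is that routine low-risk steps proceed automatically, while anything at or above the threshold pauses until a human grants explicit, recorded approval.

```python
# Sketch of a "Human-in-the-Loop for Critical Thresholds" check
# (illustrative; the risk threshold and action labels are hypothetical).

CRITICAL_RISK = 0.5  # hypothetical threshold requiring manual approval

def next_action(predicted_risk: float, human_approved: bool = False) -> str:
    """Decide whether the platform may proceed automatically.

    Below the threshold the run continues unattended; at or above it the
    system pauses and proceeds only with explicit human approval.
    """
    if predicted_risk < CRITICAL_RISK:
        return "proceed"               # routine task: automation continues
    if human_approved:
        return "proceed-with-record"   # approval is logged for audits
    return "pause-for-human-review"    # show risk summary to the operator

print(next_action(0.2))                       # proceed
print(next_action(0.8))                       # pause-for-human-review
print(next_action(0.8, human_approved=True))  # proceed-with-record
```

Keeping the approval path as a distinct return value (rather than silently continuing) is what makes the human decision auditable under an Option 3 audit.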

Subsections of Labs

Week 1 Lab: Pipetting


Subsections of Projects

Individual Final Project


Group Final Project
