Week 1 HW: Principles and Practices

HTGAA — Governance of Biological Engineering
Question 1: Biological Engineering Application
Describe a biological engineering application or tool you want to develop and why.
I’m interested in lab automation, and beyond that, in automating entire experimental protocols. I’m drawn to the idea of a cloud lab: an orchestration layer that coordinates instruments, sensors, liquid handling, and sample movement.
Cloud labs seem most suitable for directed evolution and combinatorial library screening. Running a large-scale mutagenesis campaign today (evolving an enzyme to accept a non-natural substrate, or engineering a biosensor with a tighter dose-response curve) takes months of plate pouring, colony picking, and activity assays. A cloud lab could tighten the design-build-test loop enough to keep pace with new biomolecular design tools such as AlphaFold and Boltz. I’m also curious about using automated platforms for continuous culture experiments, where the machine adjusts selection pressure based on live OD and fluorescence readings to cut down on sample transfer.
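The continuous-culture idea boils down to a simple feedback loop. Here is a minimal sketch in Python; the OD and fluorescence thresholds and the 10% dose step are illustrative placeholders, not calibrated values from any real system.

```python
# Hypothetical feedback loop for automated selection pressure in a
# continuous culture. All thresholds and step sizes are illustrative.

def adjust_stressor(od600: float, fluorescence: float,
                    current_dose: float) -> float:
    """Return an updated stressor dose (arbitrary units).

    Raise the dose when the culture is dense and the reporter is active
    (population tolerates current pressure); lower it when growth stalls.
    """
    if od600 > 0.6 and fluorescence > 1000.0:
        return current_dose * 1.10   # tighten selection by 10%
    if od600 < 0.2:
        return current_dose * 0.90   # back off so the culture recovers
    return current_dose              # hold steady in between

# One tick of the loop: a dense, fluorescent culture gets a higher dose.
new_dose = adjust_stressor(od600=0.8, fluorescence=1500.0, current_dose=10.0)
```

In a real platform this function would be called on a timer against live sensor reads, with the dose actuated through the liquid handler.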
Question 2: Governance / Policy Goals
Describe one or more governance/policy goals related to ensuring that this application or tool contributes to an “ethical” future. Break big goals down into specific sub-goals.
Autonomously running directed evolution on arbitrary protein libraries, or maintaining continuous cultures under automated selection, creates biosafety problems without good precedents. The overarching goal is non-malfeasance: making sure the platform is not used to evolve pathogens, toxins, or antibiotic-resistance determinants without oversight. Three sub-goals follow.
Sub-goal A: Biosafety Screening & Access Control
Every submitted protocol should pass automated biosafety review before execution. For directed-evolution workflows, this means screening the parent sequences and the mutagenesis strategy itself. A random mutagenesis campaign on a botulinum toxin gene should get flagged even if no single variant yet exists in a threat database. For continuous-culture experiments, the system should flag protocols that select for traits like aerosol stability or broad-spectrum antibiotic resistance. Higher-risk experiments should require additional credentialing or sign-off from an institutional biosafety committee (IBC).
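To make the sub-goal concrete, here is a toy Python screen over protocol metadata. The deny-list entries and trait names are placeholders, not a real threat database; a production screen would also do sequence-level comparison.

```python
# Minimal illustration of pre-execution protocol screening.
# FLAGGED_GENES and FLAGGED_TRAITS are placeholder examples only.

FLAGGED_GENES = {"botulinum_neurotoxin_A", "ricin_A_chain"}
FLAGGED_TRAITS = {"aerosol_stability", "broad_spectrum_antibiotic_resistance"}

def screen_protocol(parent_genes: set[str], selected_traits: set[str]) -> list[str]:
    """Return human-readable flags; an empty list means no automated hold."""
    flags = []
    for gene in parent_genes & FLAGGED_GENES:
        flags.append(f"parent sequence on sequence-of-concern list: {gene}")
    for trait in selected_traits & FLAGGED_TRAITS:
        flags.append(f"selection targets a trait of concern: {trait}")
    return flags

# A mutagenesis campaign on a toxin gene is held even though no specific
# variant exists yet in any threat database.
flags = screen_protocol({"botulinum_neurotoxin_A"}, {"thermostability"})
```

The point of the sketch is that the screen looks at both the starting material and what the selection scheme is optimizing for, not variants alone.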
Sub-goal B: Auditability & Reproducibility
All experiments must produce immutable, timestamped logs: not just what reagents were dispensed, but the full evolutionary trajectory. Which variants were selected at each round, what selection pressures were applied, and what genotypic diversity looked like over time. This matters for safety investigations, but it also matters for science. Reproducibility is notoriously poor in directed-evolution studies, where small procedural differences (library size, bottleneck severity, selection stringency) can shift outcomes entirely.
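One way to make such logs tamper-evident is a hash chain, where each round's entry commits to the previous one. The sketch below uses only the Python standard library; the record fields are illustrative, not a proposed schema.

```python
import hashlib
import json
import time

# Sketch of an append-only, tamper-evident log for one evolution campaign.
# Each entry hashes the previous one, so rewriting any round breaks the chain.

def append_round(log: list[dict], round_data: dict) -> None:
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "prev_hash": prev_hash,
        "round": round_data,   # variants kept, selection pressure, etc.
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any retroactive edit is detected."""
    prev = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() \
                != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

log: list[dict] = []
append_round(log, {"round": 1, "selection": "0.5 mM substrate", "variants_kept": 96})
append_round(log, {"round": 2, "selection": "2.0 mM substrate", "variants_kept": 12})
```

Editing any earlier round after the fact changes its hash and makes `verify` fail, which is the property a safety investigation needs.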
Sub-goal C: Equitable Access
Cloud labs lower the barrier to sophisticated protein engineering and adaptive evolution experiments. But they also concentrate capability in whoever controls the platform. A governance goal is to make sure academic labs, community bio spaces, and researchers in lower-resource settings can actually use these tools, rather than being priced out or locked behind a single commercial provider’s terms.
Question 3: Governance Actions
Describe at least three potential governance actions, considering Purpose, Design, Assumptions, and Risks.
Action 1: Mandatory Protocol Screening (Regulatory Requirement)
Purpose: Gene-synthesis companies already voluntarily screen DNA orders against databases of known pathogen sequences through the IGSC Harmonized Screening Protocol. But nothing comparable exists for what you do with those sequences once you have them. A directed-evolution campaign on a toxin gene, or a continuous-culture experiment selecting for resistance phenotypes, wouldn’t trigger any existing screen. This action would extend screening obligations to cloud-lab platforms so that both the starting materials and the experimental logic get reviewed before execution.
Design: A federal agency (likely through an updated Executive Order on biosecurity) would mandate screening for any platform executing biological experiments on behalf of remote users. Cloud-lab operators would build the screening engine. You can’t just BLAST a protocol, though. You need rules that reason about what a mutagenesis campaign or selection scheme could produce, not just what it starts with. A federally funded working group of evolutionary biologists, biosafety officers, and platform engineers would maintain and update the threat models. IBCs would keep override authority for their institutions.
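The "reason about what a campaign could produce" requirement can be sketched as a small decision rule combining signals. The 80% identity threshold and the three-way outcome below are placeholders for values the working group would actually set.

```python
# Illustrative decision logic for the screening engine. Thresholds are
# placeholders, not vetted biosecurity parameters.

def screening_decision(identity_to_listed_seq: float,
                       selection_targets_function_of_concern: bool) -> str:
    """Combine parent-sequence similarity with what the selection scheme
    could produce, rather than screening the starting sequence alone."""
    if identity_to_listed_seq >= 0.80 and selection_targets_function_of_concern:
        return "block"         # campaign likely to yield a listed function
    if identity_to_listed_seq >= 0.80 or selection_targets_function_of_concern:
        return "hold_for_ibc"  # ambiguous case: human review, IBC override
    return "run"
```

Routing ambiguous cases to a human reviewer rather than auto-blocking is what preserves the IBC override authority described above.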
Assumptions: Can dangerous experimental intent actually be detected from a protocol description? A mutagenesis library targeting improved catalytic activity on an industrial substrate looks almost identical to one targeting improved activity on a nerve-agent precursor. The difference is in the substrate you screen against, and a bad actor could lie about that. There is also the question of international coordination: a U.S.-only mandate just pushes the problem to offshore platforms.
Risks of Failure & “Success”: If screening is too coarse, it catches nothing useful. If too aggressive, it blocks legitimate work. Imagine a postdoc whose enzyme-evolution run gets held up for two weeks because the parent gene shares 40% sequence identity with a known toxin. A “successful” regime carries its own risk: people stop thinking critically about biosafety because they assume the automated screen caught everything.
Action 2: Open Audit-Log Standard for Evolutionary Trajectories (Industry / Community Technical Strategy)
Purpose: Lab notebooks and LIMS vary wildly across institutions. Directed-evolution experiments are especially hard to reproduce because small procedural choices (library diversity, selection stringency, bottleneck severity) compound across rounds. This action proposes an open, machine-readable log format for automated biology platforms, something like an SBOM (Software Bill of Materials) but for evolution campaigns. Every round of selection, every passage in a chemostat, and every plate screen would be recorded in a common schema.
Design: An industry consortium, modeled on the Allotrope Foundation for analytical data standards, would develop the schema. Cloud-lab companies, instrument manufacturers, and reagent suppliers would opt in. Journals could require log submission alongside manuscripts reporting evolved proteins or strains. NIH and NSF could make compliance a condition of funding. The real incentive is scientific: if your evolution logs are in a standard format, other groups can reproduce and build on your work instead of starting over.
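To show what one record in such a schema might look like, here is a hypothetical Python dataclass that serializes to JSON. Every field name is illustrative; no such published standard exists.

```python
from dataclasses import asdict, dataclass, field
import json

# One record in a hypothetical shared schema for evolution campaigns.
# Field names are illustrative, not part of any published standard.

@dataclass
class SelectionRound:
    campaign_id: str
    round_number: int
    method: str                  # e.g. "error-prone PCR", "PACE", "chemostat"
    library_size: int
    selection_pressure: str      # free text until units are standardized
    bottleneck_fraction: float   # share of variants carried forward
    variants_kept: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)

record = SelectionRound(
    campaign_id="lipase-thermo-01", round_number=3, method="error-prone PCR",
    library_size=100_000, selection_pressure="55 C, 30 min heat challenge",
    bottleneck_fraction=0.001, variants_kept=["v17", "v88"],
)
serialized = record.to_json()
```

The hard part the Assumptions paragraph raises is exactly what this sketch dodges: finding fields general enough to cover PACE and chemostat workflows without becoming free text everywhere.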
Assumptions: A single schema would need to capture workflows as different as error-prone PCR libraries, PACE (phage-assisted continuous evolution), and long-term chemostat adaptation. That is a hard standardization problem. Companies may also resist sharing operational detail that overlaps with proprietary methods.
Risks of Failure & “Success”: If the standard is too rigid or burdensome, small labs and community bio spaces won’t adopt it, and it becomes compliance paperwork for well-funded groups only, making the equity gap worse. There is also a dual-use concern: detailed public logs of how a pathogen-adjacent protein was evolved could serve as a recipe for someone with bad intentions. This is the same tension software security faces with detailed CVE disclosures that get exploited before patches ship.
Action 3: Tiered Access by Biological Risk Level (Incentive-Based / Credentialing)
Purpose: Some cloud-lab providers today will run almost anything for anyone with a credit card. This action proposes a tiered access model. Drone operators in the U.S. need FAA Part 107 certification for commercial flights; researchers working with select agents need CDC/USDA registration. A similar structure could apply here, organized by biological risk. Tier 1 might cover routine cloning and expression in standard lab strains (BSL-1 equivalent). Tier 2 could cover directed evolution on non-pathogenic targets or adaptive evolution of GRAS organisms. Tier 3 would cover work involving known virulence factors, dual-use research of concern, or evolution campaigns where the target function has clear misuse potential.
Design: Cloud-lab operators would enforce the tiers. Tier definitions would come from professional societies (ASM, ABSA) working with regulators. A credentialing body, possibly housed within ABSA or iGEM, would verify qualifications. For Tier 1, self-certification and a short training module might suffice. Tier 2 would require institutional affiliation and IBC approval. Tier 3 would need government review. Credentialing costs for academic and community users should be subsidized to preserve the equity goal.
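The enforcement side reduces to a credential check per tier. A toy Python model, with placeholder requirement names standing in for whatever the credentialing body would actually define:

```python
# Toy model of tier enforcement. Requirement names are placeholders.

TIER_REQUIREMENTS = {
    1: {"training_module"},
    2: {"training_module", "institutional_affiliation", "ibc_approval"},
    3: {"training_module", "institutional_affiliation", "ibc_approval",
        "government_review"},
}

def may_run(protocol_tier: int, user_credentials: set[str]) -> bool:
    """A protocol runs only if the user holds every credential its tier needs."""
    return TIER_REQUIREMENTS[protocol_tier] <= user_credentials

# A self-certified user can run Tier 1 work but not a Tier 2 evolution campaign.
ok_t1 = may_run(1, {"training_module"})
ok_t2 = may_run(2, {"training_module"})
```

The subset check (`<=`) makes the tiers strictly cumulative, matching the escalating requirements described above.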
Assumptions: Biological risk does not always map neatly onto tiers. Dual-use potential is context-dependent: evolving a lipase for industrial detergent is Tier 1 until someone realizes the same enzyme degrades a polymer used in biocontainment. The model also assumes institutional affiliation correlates with responsible use, which is not always the case.
Risks of Failure & “Success”: Regulatory capture is the obvious failure mode. Incumbents could set tier boundaries to protect their own positions and lock out newcomers, especially community labs and independent researchers. A “successful” tiering system could split research into two tracks: credentialed groups at major universities with full access to automated evolution, and everyone else stuck at Tier 1. That would undermine the broad access that makes cloud labs worth pursuing.
Question 4: Scoring Governance Actions Against Policy Goals
Score each governance action (1 = best, 3 = weakest, n/a) against each policy sub-goal.
| Governance Action | Biosafety Screening & Access Control | Auditability & Reproducibility | Equitable Access |
|---|---|---|---|
| Action 1: Mandatory Protocol Screening | 1 | 2 | 3 |
| Action 2: Open Audit-Log Standard | 2 | 1 | 2 |
| Action 3: Tiered Access Licensing | 1 | 3 | 2 |
Mandatory screening (Action 1) targets biosafety most directly but scores worst on equity because compliance costs fall disproportionately on smaller groups. The audit-log standard (Action 2) does the most for reproducibility, especially for directed-evolution work where procedural details determine outcomes, but only indirectly supports biosafety. Tiered licensing (Action 3) is strong on access control by design, but credentialing alone does not guarantee transparent record-keeping, so it scores lowest on auditability. On equity, Actions 2 and 3 both land in the middle. Each can be designed with equity provisions, but each also risks creating new barriers if implemented carelessly.
Prompt used in Claude 4.6: Format my answers to these questions in HTML in order to paste into the body of an .html. Restate the question and copy my answer. Edit and tighten up where appropriate. I will edit and approve the final product.
Include a note with this prompt and model used at the end.