Week 1 HW: Principles and Practices

Application: Living Materials with Multicellular Computational Networks for Collective Sensing and Spatial Response

What is it?

I am interested in Computational Living Materials (CLMs): materials embedded with engineered cells that form cell-to-cell communication networks capable of collectively sensing, processing, and responding to environmental stimuli. Unlike existing engineered living materials, which react passively to single inputs, CLMs contain genetic logic circuits that enable the material itself to perform distributed spatial computation before producing an output response.

The Skin Analogy

The concept is inspired by how human skin processes touch. When you feel the difference between a pinprick and a palm press, it’s not because individual nerve cells are different but because of the pattern of activation across a network of communicating cells. Your skin doesn’t just sense but also computes spatially; the sensor and the processor are the same system.

Why do I care?

I am curious about the biomaterial equivalent. Cells embedded in a biomaterial substrate would sense their environment, share information with neighbors, and collectively produce visible responses that represent processed information. The material integrates the entire signal chain (sensor, processor, output) into one living substrate. A local stimulus at one point would trigger a cascade that propagates outward through cell-to-cell communication, producing a coordinated response. Different cell populations could play different roles, and multi-cell-type signal-processing networks could potentially perform noise filtering, edge detection, and signal amplification.

I work at the intersection of sensor technology, interaction design, and material fabrication. I’m drawn to the idea that computation doesn’t need silicon but can emerge from living systems embedded in the materials around us. This could enable environmental monitoring surfaces, responsive architecture, wearable health interfaces, and entirely new categories of living interactive media.


Policy Goals

  1. Enhance Biosecurity

    • Prevent repurposing for harmful applications
    • Ensure genetic designs are traceable
  2. Protect the Environment

    • Prevent engineered organisms from escaping into natural ecosystems
    • Prevent gene transfer to wild microbial populations
    • Ensure safe degradation at end of life
  3. Ensure Predictable Emergent Behavior

    • Ensure collective behaviors are testable and predictable
    • Prevent unintended behavioral drift from cell mutation
  4. Promote Equity, Access & Transparency

    • Prevent biological IP concentration by few corporations
    • Promote open-source genetic designs
    • Require clear labeling of engineered living organisms in materials
    • Establish accountability for harm caused by material

Three Governance Actions

Option 1: Mandatory Dual Biocontainment Standards (Federal Regulatory)

Purpose: Currently, engineered living materials are developed under general lab biosafety rules (BSL-1/2). There are no specific regulations for deploying engineered organisms in materials outside the lab. Propose requiring two independent containment mechanisms: a synthetic nutrient dependency and a genetic kill switch.

Design: Administered through EPA (environmental release) and FDA (consumer products) using existing biotech frameworks. Manufacturers submit containment data showing organisms degrade within a set timeframe outside intended conditions. Inter-agency review panel evaluates.

Assumptions: Dual containment is achievable at scale (unproven for manufacturing); EPA/FDA have expertise to evaluate this new category (uncertain); lab-tested containment holds in real-world conditions (questionable).

Risks of Failure & “Success”: Too strict → bans development; too loose → inadequate protection. Even “successful” regulation may create false confidence: lab-tested containment may not hold in complex real environments. GMO crop regulation exists, yet gene flow to wild relatives has still occurred.

Option 2: Open-Source Biological Safety Consortium (Community Self-Governance)

Purpose: Top-down regulation often lags behind fast-moving technology. Propose a voluntary consortium (modeled on iGEM’s Safety Committee or IETF internet standards) that shares safety standards and open-source containment genetic parts.

Design: “CLM Safety Certified” mark incentivizes membership. Insurance companies require certification for liability coverage. Funded by member dues and government grants (NSF/DARPA). Public registry of deployed organisms. Peer-reviewed safety audits.

Assumptions: Voluntary participation is sufficient (companies may resist open-sourcing); safety mark creates market pressure but public awareness of these materials is near zero; peer review is effective.

Risks of Failure & “Success”: Without legal enforcement, bad actors ignore standards. Dual-use dilemma: publishing how containment works also teaches how to defeat it. Open-source security helps defenders but also informs attackers.

Option 3: Mandatory Behavior Simulation Before Deployment (Government-Funded Technical Strategy)

Purpose: Collective behavior may not be predictable from individual cell specs. Propose mandatory computational simulation before deployment, similar to autonomous vehicle simulation testing.

Design: Agent-based computer models where each virtual cell follows its genetic circuit rules and communication logic. NIH/NIST funds open simulation platforms. Manufacturers must simulate behavior across environmental scenarios, including modeling how mutations over 100+ generations could alter collective behavior.

Assumptions: Models can accurately capture emergent biological behavior; evolutionary drift is somewhat predictable; simulation infrastructure is affordable.

Risks of Failure & “Success”: Inaccurate simulation models provide false confidence. Favors large companies with computational resources (equity problem). Over-reliance on simulation reduces investment in physical containment.
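To make the agent-based modeling idea concrete, here is a purely illustrative sketch (not any real regulatory platform): virtual cells on a 1-D grid each apply a simple threshold rule to signals from their neighbors, standing in for a genetic circuit, and the collective behavior, a spreading activation front, emerges from the local rules alone.

```python
# Minimal agent-based sketch (illustrative only): each virtual "cell" on a
# 1-D grid activates when it or enough of its neighbors are active. A real
# platform would model diffusion, expression kinetics, stochasticity, and
# mutation over generations explicitly.

def step(states, threshold=1):
    """One synchronous update: a cell turns on if it is already on, or if
    its neighbors' combined signal reaches the threshold."""
    n = len(states)
    nxt = []
    for i, s in enumerate(states):
        neighbor_signal = (states[i - 1] if i > 0 else 0) + \
                          (states[i + 1] if i < n - 1 else 0)
        nxt.append(1 if (s or neighbor_signal >= threshold) else 0)
    return nxt

def simulate(n_cells=11, steps=5):
    """Apply a local stimulus at the center and record the cascade."""
    states = [0] * n_cells
    states[n_cells // 2] = 1  # local stimulus at one point
    history = [states]
    for _ in range(steps):
        states = step(states)
        history.append(states)
    return history

history = simulate()
for row in history:
    print("".join("#" if s else "." for s in row))
```

Running this prints an activation front spreading outward one cell per step from the stimulus, which is the kind of emergent collective behavior a mandated simulation would need to characterize across environmental scenarios.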

How do the options compare?

| Does the option: | Mandatory Biocontainment | Open-Source Consortium | Behavior Simulation |
| --- | --- | --- | --- |
| 1 · Enhance Biosecurity | | | |
| • Prevent repurposing for harmful applications | 2 | 2 | 3 |
| • Ensure genetic designs are traceable | 1 | 1 | 3 |
| 2 · Protect the Environment | | | |
| • Prevent organism escape into ecosystems | 1 | 2 | 3 |
| • Prevent gene transfer to wild populations | 2 | 2 | 3 |
| • Ensure safe end-of-life degradation | 1 | 2 | 3 |
| 3 · Ensure Predictable Emergent Behavior | | | |
| • Ensure collective behaviors are testable | 3 | 2 | 1 |
| • Prevent behavioral drift from mutation | 2 | 2 | 1 |
| 4 · Promote Equity, Access & Transparency | | | |
| • Prevent biological IP concentration | 3 | 1 | 2 |
| • Promote open-source genetic designs | 3 | 1 | 2 |
| • Require clear labeling of living organisms | 1 | 2 | 3 |
| • Establish accountability for harm | 1 | 2 | 3 |
| Other considerations | | | |
| • Minimizing costs and burdens to stakeholders | 3 | 1 | 3 |
| • Feasibility | 2 | 1 | 3 |
| • Not impede research | 3 | 1 | 2 |
| • Promote constructive applications | 2 | 1 | 2 |

Scores: 1 = best, 2 = moderate, 3 = poor

Prioritization and Trade-offs

Option 2 as Foundation, with Elements of Options 1 and 3

I would prioritize Option 2 (Open-Source Safety Consortium) as the primary governance framework, supplemented with targeted elements from the other two options.

Why Option 2 as the foundation: CLM technology is at an early developmental stage, and heavy top-down regulation (Option 1) would likely stifle research before it can demonstrate its potential benefits. However, development should happen within a safety-conscious community framework. The open-source consortium has precedent in synthetic biology (iGEM’s safety practices, the BioBricks Foundation) and in technology broadly (IETF for internet standards, Linux Foundation for open-source software). It promotes both safety and equity simultaneously.

Supplemented with:

  • From Option 1: As CLMs approach consumer deployment (likely 5-10 years away), formal regulatory standards should be developed in collaboration with the consortium. The consortium’s standards would inform regulation rather than being replaced by it. This staged approach mirrors how 3D printing governance evolved: early community self-governance followed by targeted regulation as the technology matured.
  • From Option 3: The consortium should invest in developing open-source simulation tools for emergent behavior prediction, not as a mandatory gate, but as a shared design tool that helps researchers anticipate and avoid dangerous emergent behaviors during the design phase. Making simulation tools open and accessible avoids the equity problems of mandating them.

Key Trade-offs

The core trade-off is between safety and innovation speed. Option 1 maximizes safety assurance but minimizes innovation velocity. Option 2 maximizes innovation velocity but relies on voluntary compliance for safety. The hybrid approach balances both by scaling governance intensity with technology maturity: light-touch community governance now, harder rules later when the stakes are higher.

A second trade-off is between openness and dual-use risk. Publishing open-source containment mechanisms helps everyone build safer materials but also helps bad actors understand how to defeat them. At this stage, the benefit of openness outweighs the dual-use risk: the technology is still immature, and the safety community benefits enormously from shared tools. This could change as the technology advances.

Audience Considerations

  • Local (MIT/Cambridge): The MIT Institutional Biosafety Committee (IBC) should develop specific guidelines for research, including containment protocols for materials that might be taken outside the lab as final projects.
  • National (NIH/NSF): Federal funding agencies should support development of open-source safety tools and simulation platforms, and require funded research to register with the proposed consortium.

Uncertainties

  1. Can emergent behavior in CLMs be made reliably safe, or is unpredictability an inherent and irreducible feature of networked living systems?
  2. Will the public accept living organisms in their everyday materials? Social acceptance may be a larger barrier than technical or regulatory challenges.
  3. How will CLMs interact with natural microbial ecosystems in ways we cannot predict? The history of introduced species suggests we should be humble about our ability to predict ecological consequences.
  4. As AI and biological computation converge, will CLMs raise questions about material “agency” that our current ethical frameworks are not equipped to handle? If a material makes a collective decision that harms someone, the question of responsibility may require entirely new ethical or legal frameworks.

References: Jacobson gene synthesis lecture (HTGAA Week 1, Slides 1-5: cell as computer, Slide 14: MutS error correction, Slide 35: genomically recoded organisms, Slide 45: molecular beacons, Slide 46: swigRNA sensors, Slide 59: bioFPGA); Pataranutaporn et al. “Living Bits” (AHs 2020); Oxman et al. “Hybrid Living Materials” (Adv. Funct. Mater. 2019); Basu et al. “Synthetic multicellular system for programmed pattern formation” (Nature 2005, Weiss Lab); DARPA Engineered Living Materials program; Walker et al. “Self-pigmenting textiles” (Nature Biotechnology 2024); Imperial College quorum sensing spatial computation (ACS Synth. Bio. 2024); Wang et al. “Engineering Microbial Consortia as Living Materials” (ACS Synth. Bio. 2024).

(AI Prompt: I’m interested in computational living materials where engineered cells communicate to collectively process information. Help me understand whether this is novel compared to existing engineered living materials, multicellular computation, and living bits research; What are the ethical and biosafety concerns specific to this development to be used outside of labs? Is it possible and help me develop specifc policy and goals; What is the iGEM safety model and how does it compare to IETF-style self-governance? Could something similar work for living materials? What is agent-based modeling in biology? Could it predict emergent behavior in multicellular systems? I need to score my governance options against my policy goals but I’m new to biology, can you help me evaluate which approaches are strongest for things like preventing gene transfer or ensuring traceability? What are the trade-offs between open-sourcing safety mechanisms and dual-use risk in synthetic biology?)


Week 2 Lecture Preparation

Homework Questions from Professor Jacobson:

Nature’s machinery for copying DNA is called polymerase. What is the error rate of polymerase? How does this compare to the length of the human genome? How does biology deal with that discrepancy?

The error rate of polymerase is about 1 in 10⁶ bases. Against the ~3.2-billion-base human genome, that would mean roughly 3,200 errors per cell division. Biology deals with this through error correction: the Mismatch Repair (MMR) system (e.g., MutS), additional DNA repair pathways such as nucleotide excision repair, redundancy in the genetic code, and diploid genomes.
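A quick back-of-envelope check of the arithmetic (using the lecture’s 1-in-10⁶ figure; actual fidelity varies by polymerase and repair context):

```python
genome_bases = 3.2e9      # approximate human genome length
error_rate = 1e-6         # errors per base copied (lecture figure)
errors_per_division = genome_bases * error_rate
print(round(errors_per_division))  # → 3200
```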

(AI Prompt: what are ways of error correction biology uses to prevent mutations in the human genome beyond the polymerase’s built-in proofreading)

How many different ways are there to code (DNA nucleotide code) for an average human protein? In practice what are some of the reasons that all of these different codes don’t work to code for the protein of interest?

The average human protein is encoded by ~1,036 base pairs of DNA (slide 6). Since each amino acid is encoded by a 3-nucleotide codon, this corresponds to 1,036 ÷ 3 ≈ 345 amino acids. With roughly 3 synonymous codons per amino acid on average, there are on the order of 3³⁴⁵ possible DNA sequences for such a protein. In practice, most of those sequences won’t work because of mRNA folding, codon usage bias (cells prefer certain codons), and the accidental creation of regulatory regions.
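The number itself overflows intuition, so it is easiest to check the combinatorics in log space (the 3-codons-per-amino-acid figure is an average over the genetic code’s uneven degeneracy, 61 sense codons for 20 amino acids):

```python
import math

bp = 1036                 # average coding length per the slide
aa = bp // 3              # ≈ 345 amino acids
# ~3 synonymous codons per amino acid, on average
log10_sequences = aa * math.log10(3)
print(aa, f"≈ 10^{log10_sequences:.0f} possible sequences")  # → 345 ≈ 10^165
```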

(AI prompt: why are there not 3³⁴⁵ ways to code for the protein in biological reality)

Homework Questions from Dr. LeProust:

What’s the most commonly used method for oligo synthesis currently?

Phosphoramidite chemistry, used in both column-based and silicon chip-based (array) DNA synthesis.

Why is it difficult to make oligos longer than 200nt via direct synthesis?

In phosphoramidite synthesis, each added base requires a coupling step whose efficiency is below 100%, so the yield of full-length product drops as the number of couplings increases. With ~99% coupling efficiency per step (a fixed ~1% failure rate per base), the yield of full-length 200nt oligos is 0.99²⁰⁰ ≈ 13%. Because the per-step error rate is fixed while the chain keeps getting longer, the fraction of usable full-length molecules decays exponentially with length.
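The exponential yield decay is easy to verify (assuming, as above, a uniform 99% coupling efficiency per step):

```python
coupling = 0.99                      # per-step coupling efficiency
full_length_yield = coupling ** 200  # fraction of molecules with all 200 couplings correct
print(f"{full_length_yield:.1%}")    # → 13.4%
```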

Why can’t you make a 2000bp gene via direct oligo synthesis?

The cumulative error rate is too high. With a per-base error rate of 1:10², a 2000bp sequence would contain 20 errors on average. The probability of an error-free molecule is 0.99²⁰⁰⁰ ≈ 2 × 10⁻⁹, which makes direct synthesis of a full-length gene essentially impossible; genes are instead assembled from shorter, error-corrected oligos.
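The same arithmetic as above shows why 2000bp is out of reach for direct synthesis (same 99%-per-step assumption):

```python
coupling = 0.99
expected_errors = 2000 * (1 - coupling)  # ≈ 20 errors per molecule on average
error_free_fraction = coupling ** 2000   # probability a molecule is perfect, ~2e-9
print(round(expected_errors), f"{error_free_fraction:.1e}")
```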

Homework Question from George Church: [Lecture 2 slides]

What are the 10 essential amino acids in all animals and how does this affect your view of the “Lysine Contingency”?

The 10 amino acids that animals cannot synthesize and must obtain from their diet are remembered as PVT TIM HALL: Phenylalanine, Valine, Threonine, Tryptophan, Isoleucine, Methionine, Histidine, Arginine, Leucine, Lysine. The “Lysine Contingency” from Jurassic Park is a biocontainment scheme: if the dinosaurs ever escaped the island, they would die without their lysine supplements from the park staff. It does not make sense as containment, because no animal, including us, can produce its own lysine anyway; all animals obtain it from their diet. Escaped dinosaurs could freely obtain lysine the way all animals do: by eating lysine-containing plants or animals.

(AI prompts: please explain in detail what are the 10 essential amino acids in all animals and what do they do? What is “lysine contingency”?)