Week 1 HW: Principles and Practices

Class assignment – designing policy frameworks around bioengineering tools

⭐⭐ Web app with policy homework content ⭐⭐ (external link)

🔗✏️ Proof-of-work links (drafts, AI conversations/prompts used to brainstorm, generate the web app, and other sections) ✏️🔗

HTGAA 2026: Week 1 Governance Framework

Project: DIY Aging Biomarkers Kit for Community Labs
Objective: To lower the single biggest barrier to longevity research (cost) through an affordable, open-source testing kit.

Context: Diseases of aging account for 3/4 of global deaths, yet independent research is gated by prohibitive costs (often $500+ per commercial assay).

Proposed Tool: An accessible, open-source DIY Aging Biomarkers Kit designed specifically for the ~50 community biolabs worldwide. This kit allows independent researchers to measure cellular senescence and other aging hallmarks without relying on expensive institutional supply chains.

*(Bonus: Personal context for why I chose this project 🙋🏼‍♀️)*

I am extremely passionate about “solving aging” as a way to reduce suffering in the world at scale: so-called “diseases of aging” ultimately account for somewhere between 1/3 and ~3/4 of all deaths (1) (2).

With this goal in mind, I decided to fully focus my efforts on biotech, coming from a computational background, and learn hands-on by joining an open community lab (Biopunk Labs in San Francisco – one of the HTGAA nodes) and running my own bioengineering experiments.

(I wrote an article about my first experience at the lab doing wet lab/bioengineering: https://acceleratingutopia.com/my-biotech-journey-pt-i-wet-lab-genetic-engineering/)

For my second experiment I decided to use mammalian cells (fibroblasts) instead of bacteria, as bacteria do not share our mechanisms of aging and are therefore not a good model to study this.

However, I quickly realized how severely limited the scope of my work would be due to the high costs of working independently (without external funding), especially where mammalian cells are involved. I initially planned to replicate a full cellular reprogramming protocol, but soon learned that the cost put it out of reach, so I pivoted to a “partial reprogramming” protocol.

I managed to secure sample cells and some media through donations, thanks to some very kind fellow scientists and mentors who wanted to support such an initiative*. However, none of my work would have any validity unless I could find a reliable way to measure the effects of my interventions on biomarkers classically associated with longevity (see: Hallmarks of Aging), such as senescence.

*(people thought I was crazy for even attempting to run mammalian cell experiments independently, as a beginner, given the costs and difficulty!)

I soon learned that assay kits would eat up a big portion of my budget, especially since most companies sell them in bulk, which makes them prohibitively expensive for independent researchers who want to run just a few small assays.

This whole experience opened my eyes to how inaccessible aging research is at the moment.

Given that a big part of my mission is to enable others to help accelerate the eradication of all human disease (with a special focus on longevity), a straightforward way for me to contribute is to enable more independent research by lowering the biggest entry barrier: the cost of getting started with simple longevity experiments.


02. Policy Objectives

  1. Democratize Access: Lower financial and institutional barriers so scientists in low-resource settings can meaningfully contribute to aging research.
  2. Prevent Misuse: Use proactive monitoring to ensure that wider access to bioengineering tools does not enable unsafe practices.
  3. Ensure Translation: Guarantee that research outcomes meet quality standards sufficient for peer review and clinical translation.

03. Proposed Interventions

A1: Government Subsidies (Incentive)

Actor: Government funding bodies (NIH, ERC)
A dedicated micro-grant program (€5–25K) for community labs to accelerate biomedical progress by enabling new and existing scientists to carry out independent aging research.

  • Risk: Could create a two-tier system where funded labs professionalize and lose grassroots accessibility.

A2: Lab Space Sharing Network (Coordination)

Actor: Academic Institutions, Incubators
An “Airbnb for lab benches” connecting institutions with spare capacity to independent researchers.

  • Risk: Institutions may refuse due to liability concerns.

A3: AI Biosafety Co-Pilot (Technical)

Actor: Open-source developers, DIYbio.org
A software tool that checks experiment plans against biosafety databases, flags risks, and requires mentor sign-off. Serves as a dynamic guardrail.

  • Risk: Over-reliance could reduce researchers’ own safety judgment (the “GPS effect”).

04. Impact Assessment Matrix

| Criteria | A1: Subsidies | A2: Lab Network | A3: AI Co-Pilot |
| --- | --- | --- | --- |
| Access Democratization (enables new talent; enables existing researchers) | High | High | Indirect |
| Misuse Prevention (ensures safety standards; blocks malign applications) | — | — | Most |
| Translation (incentivizes translation; directly aids translation) | Partial | Partial | — |
| Feasibility | Low | Medium | High |

Legend:
High / Medium / Low / Partial / Most = relative strength of positive contribution
— = Not applicable / No direct impact

Assignment (Week 2 Lecture Prep)

From Professor Jacobson

  1. Nature’s machinery for copying DNA is called polymerase. What is the error rate of polymerase? How does this compare to the length of the human genome? How does biology deal with that discrepancy?

    Polymerase’s error rate is about 1:10⁷ (one error per 10⁷ bases), roughly one in ten million. The human genome is 3.2 Gbp (3.2 × 10⁹ bases), so with no additional repair mechanisms we would expect about 320 errors per haploid genome copy (or double that for diploid replication). Yet the actual final mutation rate is closer to 1 in 10⁹, multiple orders of magnitude lower.

    This is because the cell applies additional “proofreading” and repair mechanisms. Mismatch repair proteins scan newly synthesized DNA, recognize helix distortions where bases are mismatched, and excise and resynthesize the affected stretch. Additionally, polymerase itself has 3’→5’ exonuclease activity that removes incorrectly incorporated nucleotides immediately, allowing the polymerase to re-incorporate the correct base.

  2. How many different ways are there to code (DNA nucleotide code) for an average human protein? In practice, what are some of the reasons that all of these different codes don’t work to code for the protein of interest?

    Given that an average human protein is encoded by ~1,036 bp of coding sequence, and each codon consists of 3 bases, an average protein is approximately 345 amino acids long (1036/3 ≈ 345).

    Since there are 61 sense codons for 20 amino acids (64 total codons minus 3 stop codons), and multiple codons can code for the same amino acid, a theoretically enormous number of DNA sequences could encode the same protein. In practice, however, only a limited subset of these codon combinations works well for producing a specific protein, because additional biological constraints come into play: the availability of matching tRNAs (and each organism’s codon-usage preferences), regulatory sequences that might be accidentally created, mRNA secondary-structure stability, and overall translation efficiency.
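The arithmetic in the two answers above can be sanity-checked in a few lines of Python. The figures are the assumptions from the text (error rate ~1 in 10⁷, genome of 3.2 Gbp, ~345-residue protein), and the average degeneracy of ~3.05 codons per residue is an illustrative simplification (real degeneracy varies from 1 to 6 per amino acid):

```python
import math

# Replication errors: ~1 error per 10^7 bases over a 3.2e9 bp haploid genome.
raw_errors = 1e-7 * 3.2e9    # expected errors before additional repair
post_repair = 1e-9 * 3.2e9   # expected errors at the observed final rate

# Codon combinatorics: ~345 residues, 61 sense codons / 20 amino acids
# gives an average degeneracy of ~3.05 codons per residue.
log10_sequences = 345 * math.log10(61 / 20)

print(round(raw_errors))       # ~320 errors per haploid genome copy
print(round(post_repair, 1))   # ~3.2 errors after repair
print(round(log10_sequences))  # ~10^167 synonymous coding sequences
```

The last number shows why “a theoretically enormous number” is no exaggeration: on the order of 10¹⁶⁷ DNA sequences could encode one average protein.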

From Dr. LeProust

  1. What’s the most commonly used method for oligo synthesis currently?
  2. Why is it difficult to make oligos longer than 200nt via direct synthesis?
  3. Why can’t you make a 2000bp gene via direct oligo synthesis?

The current standard for oligo synthesis is solid-phase phosphoramidite chemistry. It is effective but noisy: errors accumulate with each coupling cycle (stepwise coupling efficiency is roughly 95–99.5%), which limits the total length of viable DNA that can be synthesized directly, since for a coding gene a single base error can cause malfunction.

Around 200 nt is the practical limit because by that length most strands carry at least one error (at 99.5% stepwise efficiency, only about 37% are full-length), yet the error load is still low enough that correct strands can feasibly be recovered through purification or cloning.

A 2,000 bp gene would require synthesizing a single strand of 2,000 nucleotides. Even at the most optimistic coupling efficiency per base (99.9%), the accumulation of errors (only about 13.5% of synthesized strands would be full-length and error-free, since 0.999²⁰⁰⁰ ≈ 0.135) would make the final product differ too much from the desired sequence to be useful.
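The yield numbers above follow directly from the stepwise coupling efficiency: the fraction of full-length strands is simply the per-step efficiency raised to the number of coupling steps. A quick sketch:

```python
# Fraction of full-length strands after n coupling steps at per-step
# efficiency p: yield = p ** n.
yield_200nt = 0.995 ** 200     # ~0.37 for a 200-nt oligo at 99.5% per step
yield_2000nt = 0.999 ** 2000   # ~0.135 for a 2,000-nt strand even at 99.9%

print(f"200 nt @ 99.5%:  {yield_200nt:.1%} full-length")
print(f"2000 nt @ 99.9%: {yield_2000nt:.1%} full-length")
```

The exponential decay is why direct synthesis hits a wall so quickly, and why assembly from shorter oligos (below) is the practical route to longer genes.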

We can, however, assemble genes far longer than this per-strand limit by synthesizing much smaller, more manageable oligos (~50–150 nt) and joining them through methods like PCR-based assembly and Gibson assembly. This is the standard approach for longer DNA synthesis.

From George Church

  1. Using Google & Prof. Church’s slide #4: What are the 10 essential amino acids in all animals and how does this affect your view of the “Lysine Contingency”?

    The 10 essential amino acids are:

    • Histidine
    • Isoleucine
    • Leucine
    • Lysine
    • Methionine
    • Phenylalanine
    • Threonine
    • Tryptophan
    • Valine
    • Arginine

    (Note: in humans, arginine is considered conditionally essential and is sometimes included in the list of 10)

    An “essential amino acid” is one that must be obtained from external sources because the organism cannot synthesize it on its own. Since lysine already falls into this category, the “lysine contingency” is a nonsensical safeguard: even without any additional bioengineering, the dinosaurs would still have to obtain lysine externally, from lysine-rich prey and plants, and the movie already shows them consuming these.