Week 1 HW: Principles and Practices



Bioengineering Application: CRISPR-Based Dengue Rapid Diagnostic

What I Want to Develop

I propose developing a low-cost, CRISPR-based rapid diagnostic test for dengue virus that can be deployed at the point of care in clinics, pharmacies, and community health centers without requiring laboratory equipment.

How it works: The diagnostic uses CRISPR-Cas13 technology (similar to the SHERLOCK diagnostic platform) in a paper-based lateral flow format (like a pregnancy test). It detects dengue viral RNA directly from a finger-prick blood sample, provides results in 30-45 minutes at room temperature, can differentiate between dengue serotypes (DENV-1, DENV-2, DENV-3, DENV-4), and has a target cost of less than $2 per test.

Why This Matters

Bangladesh Context:

Dengue is endemic in Bangladesh with devastating seasonal outbreaks, especially during monsoon season. The 2023 outbreak was one of the worst in history with over 321,000 cases and 1,600 deaths. Current diagnostics (NS1 antigen, IgM/IgG antibodies) cost $5-15, require laboratory facilities, and take hours to days for results. Early detection is critical because dengue can progress to severe hemorrhagic fever within hours, yet most vulnerable populations—urban slum dwellers and rural communities—lack access to timely testing. For daily wage earners making $2-5 per day, current test costs are prohibitive.

Global Relevance:

Dengue affects 400 million people annually worldwide and is endemic in over 100 countries across Southeast Asia, Latin America, Africa, and the Pacific Islands. Climate change is expanding dengue’s geographic range, and the disease disproportionately affects the poorest populations who have the least access to healthcare infrastructure.

Healthcare Access Gap:

  • Existing tests are too expensive for low-income communities
  • Laboratory infrastructure is lacking in rural and remote areas
  • Delayed diagnosis leads to worse outcomes and unnecessary hospitalizations
  • There is no specific treatment for dengue, but early supportive care significantly reduces mortality


Governance and Policy Goals

Primary Goal: Ensure Equitable Access and Patient Safety

Sub-goal 1: Biosafety and Biosecurity

  • Prevent misuse of CRISPR diagnostic platform technology
  • Ensure safe handling and disposal of biological samples
  • Prevent accidental environmental release of engineered components

Sub-goal 2: Clinical Safety and Accuracy

  • Minimize false negatives (missing dengue cases leads to deaths)
  • Minimize false positives (causes unnecessary treatment, anxiety, and healthcare costs)
  • Ensure quality control across manufacturing batches

Sub-goal 3: Equitable Access

  • Ensure affordability for low-income populations
  • Enable local manufacturing and distribution (not just import dependence)
  • Prevent price gouging or intellectual property monopolies
  • Reach rural and remote communities, not just urban centers

Sub-goal 4: Trust and Informed Use

  • Build community trust in the technology
  • Train healthcare workers properly
  • Ensure patients understand results and next steps
  • Protect patient data and privacy

Three Governance Actions

Governance Action 1: Regulatory Pathway (Government/WHO)

Purpose:

Currently, new diagnostics require full regulatory approval (like drugs), which takes years and costs millions of dollars. This makes affordable diagnostics economically unviable for low-resource settings. I propose creating a streamlined “Emergency Use Authorization” pathway specifically for low-cost diagnostics targeting endemic diseases in low and middle-income countries. This would be modeled after the WHO Prequalification Program but with faster timelines.

Design - What’s Needed:

WHO creates guidelines and standards; national regulatory agencies (Bangladesh DGDA, Indian CDSCO, etc.) adopt the pathway; academic and industry developers submit applications; and independent labs conduct validation studies. Key components:

  • Minimum performance standards (for example, at least 90% sensitivity and 95% specificity)
  • Expedited review timeline (6 months versus 2-3 years)
  • Lower fees for non-profit and academic developers
  • Post-market surveillance requirements
  • Conditional approval with real-world monitoring
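To see what these minimum performance standards would mean for an individual patient, here is a short Python sketch computing positive and negative predictive value via Bayes' rule. The 10% prevalence figure is an illustrative assumption (plausible for symptomatic patients in outbreak season), not part of the proposal:

```python
# Predictive values for a test at the proposed minimum standards
# (90% sensitivity, 95% specificity), at an assumed 10% prevalence
# among people presenting for testing.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) from test characteristics and prevalence."""
    tp = sensitivity * prevalence                 # true positives
    fp = (1 - specificity) * (1 - prevalence)     # false positives
    fn = (1 - sensitivity) * prevalence           # false negatives
    tn = specificity * (1 - prevalence)           # true negatives
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(0.90, 0.95, 0.10)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # → PPV: 66.7%, NPV: 98.8%
```

The high NPV (few missed cases) is what matters most for the mortality goal, while the more modest PPV is a reminder that positive results at the point of care may warrant clinical confirmation.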

Assumptions (What Could Be Wrong):

This assumes faster approval will not compromise safety or accuracy. It assumes countries have capacity to conduct post-market surveillance. It assumes reduced bureaucracy will not create loopholes for low-quality products. It also assumes WHO recommendations will actually influence national regulatory agencies.

Risks of Failure and “Success”:

Failure risks include low-quality diagnostics flooding the market leading to patient harm and loss of trust, regulatory capture by industry weakening standards over time, and poor countries still being unable to afford validation studies. “Success” risks include creating a two-tier system with lower standards for poor countries (raising equity concerns), brain drain as researchers focus only on “easy approval” products, and established diagnostic companies lobbying against the pathway because it threatens their profits.


Governance Action 2: Open-Source Technology Commons (Academic/Technical)

Purpose:

Currently, CRISPR diagnostic intellectual property is fragmented across universities and companies (Broad Institute, Sherlock Biosciences, etc.), creating licensing barriers and high costs. I propose creating an open-source CRISPR diagnostics platform with freely available protocols, reagent recipes, and design tools. This would be similar to the OpenPCR or OpenTrons model.

Design - What’s Needed:

Academic researchers publish protocols openly; funders (Gates Foundation, Wellcome Trust) require open access as a grant condition; community labs and makerspaces test and validate the platform; and local manufacturers in Bangladesh, India, and other countries can produce diagnostics without licensing fees. Key components:

  • A GitHub repository with validated protocols
  • Pre-designed guide RNAs for common pathogens (dengue, malaria, etc.)
  • Community-driven quality control standards
  • Training materials in local languages
  • An online forum for troubleshooting

Assumptions (What Could Be Wrong):

This assumes open-source will not compromise quality (since there is no profit motive for quality control). It assumes local manufacturers have the technical capacity to produce diagnostics. It assumes intellectual property holders will actually participate (they may not). It also assumes that “free” means “accessible” when equipment and training are still needed.

Risks of Failure and “Success”:

Failure risks include poor quality control leading to variable performance across batches, lack of technical support resulting in improper use, no single entity being accountable for problems, and the platform being underfunded so it cannot keep pace with new viral variants. “Success” risks include killing the commercial incentive and reducing innovation in the diagnostics field, dual-use concerns where open protocols could be repurposed for harmful applications, creating dependency on foreign technical expertise, and regulatory agencies possibly not approving “community-developed” products.


Governance Action 3: Community Health Worker Training Plus Mobile Verification System (Local Implementation)

Purpose:

Currently, diagnostics are administered by doctors or nurses in clinics, which limits reach. Even when tests are available, results may be misinterpreted or not lead to appropriate action. I propose training community health workers (CHWs) to administer and interpret dengue diagnostics, supported by a mobile app-based verification system to ensure quality and track epidemiology.

Design - What’s Needed:

The Ministry of Health trains and certifies CHWs; NGOs (BRAC, Grameen) deploy the program through existing networks; mobile app developers create the verification tool; and local clinics provide backup support for positive cases. Key components:

  • A 2-day training program for CHWs on proper test administration
  • A mobile app that guides the CHW through testing steps with photos
  • AI-based image recognition that scans the test strip and validates the result
  • Geotagged uploads to a central epidemiology database, with alerts for dengue hotspots
  • A referral pathway to clinics for confirmed cases
  • Payment or incentives for CHWs per test administered

Assumptions (What Could Be Wrong):

This assumes CHWs can be trained adequately (versus requiring medical professionals). It assumes mobile network coverage exists in rural areas. It assumes patients trust CHWs (versus doctors). It assumes app-based verification prevents errors (but technology can fail). It also assumes data privacy can be protected.

Risks of Failure and “Success”:

Failure risks include CHWs misusing tests leading to inaccurate results, the app failing in low-connectivity areas, patients ignoring positive results if treatment is not accessible, data breaches causing privacy violations, and CHWs becoming overburdened with new responsibilities. “Success” risks include creating surveillance infrastructure that could be misused (government tracking), over-reliance on technology leading to deskilling of clinical judgment, creating a two-tier healthcare system (CHWs for poor, doctors for rich), data being sold or exploited by tech companies, and a false sense of security if test accuracy is not maintained.


Scoring the Governance Actions

| Does the option: | Option 1: Regulatory Pathway | Option 2: Open-Source Platform | Option 3: CHW Training + App |
|---|---|---|---|
| Enhance biosecurity | | | |
| • By preventing incidents | 2 (has standards but expedited) | 3 (open access = dual-use risk) | N/A |
| • By helping respond | 2 (enables faster deployment) | 1 (rapid adaptation to threats) | 1 (real-time surveillance data) |
| Foster lab safety | | | |
| • By preventing incidents | 2 (some oversight maintained) | 3 (variable quality control) | N/A (point-of-care) |
| • By helping respond | 1 (post-market surveillance) | 2 (community reporting, slower) | N/A |
| Protect the environment | | | |
| • By preventing incidents | 1 (requires disposal protocols) | 2 (less oversight of disposal) | 1 (app includes disposal guidance) |
| • By helping respond | N/A | N/A | N/A |
| Other considerations | | | |
| • Minimize costs/burdens | 2 (still needs validation studies) | 1 (no licensing fees) | 2 (needs training infrastructure) |
| • Feasibility | 1 (leverages existing WHO) | 2 (needs sustained funding) | 1 (builds on existing CHW networks) |
| • Not impede research | 1 (actually accelerates) | 1 (removes IP barriers) | N/A |
| • Promote constructive uses | 1 (enables affordable diagnostics) | 1 (enables local innovation) | 1 (expands healthcare access) |

Scoring: 1 = best, 2 = moderate, 3 = worst, N/A = not applicable
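As a rough sanity check on the table above, a few lines of Python can total each option's scores, skipping N/A cells (note the means are only loosely comparable, since the options are scored on different numbers of rows):

```python
# Scores transcribed from the table above; None marks N/A cells.
# Columns: (Option 1, Option 2, Option 3); lower is better.
scores = {
    "biosecurity/prevent":  (2, 3, None),
    "biosecurity/respond":  (2, 1, 1),
    "lab-safety/prevent":   (2, 3, None),
    "lab-safety/respond":   (1, 2, None),
    "environment/prevent":  (1, 2, 1),
    "environment/respond":  (None, None, None),
    "costs/burdens":        (2, 1, 2),
    "feasibility":          (1, 2, 1),
    "research-impact":      (1, 1, None),
    "constructive-uses":    (1, 1, 1),
}

totals = {}
for i, name in enumerate(["Option 1", "Option 2", "Option 3"]):
    vals = [row[i] for row in scores.values() if row[i] is not None]
    totals[name] = (sum(vals), len(vals))
    print(f"{name}: total {sum(vals)} across {len(vals)} scored rows "
          f"(mean {sum(vals)/len(vals):.2f})")
```

On these numbers Option 3 has the best mean but the most N/A rows, which is one reason the recommendation below combines options rather than picking a single winner.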


Recommendation: Pursue a Combined Approach

After evaluating these governance options, I recommend pursuing a hybrid strategy that combines regulatory reform with community implementation, while incorporating open-source principles where appropriate.

Primary Strategy: Fast-Track Regulatory Pathway (Option 1) Plus CHW Training System (Option 3)

The regulatory pathway is essential because even the best technology means nothing if it cannot be legally deployed. By creating an expedited approval process for low-cost diagnostics targeting endemic diseases, we remove a major barrier to access while maintaining necessary safety standards. The post-market surveillance requirement actually strengthens safety by catching real-world issues that clinical trials might miss.

However, regulatory approval alone does not ensure the diagnostic reaches those who need it most. This is where the community health worker training system becomes critical. By leveraging existing CHW networks in Bangladesh (such as those run by BRAC and Grameen), we can extend the reach of dengue diagnostics to rural and underserved urban communities. The mobile verification app addresses the quality control concerns that arise when moving diagnostics outside traditional healthcare settings, while simultaneously creating valuable epidemiological data for outbreak response.

Incorporating Elements of Option 2: Selective Open-Source Components

Rather than making the entire diagnostic platform open-source (which risks quality control issues and biosecurity concerns), I propose a middle ground. We should open-source the guide RNA sequences for dengue serotypes and basic protocols, while maintaining proprietary control over critical quality-control elements and manufacturing processes. This allows local manufacturers to produce the diagnostic affordably while ensuring consistency and preventing misuse.

Trade-offs Considered

The expedited regulatory pathway does create some risk of lower-quality products entering the market if standards are weakened over time. To mitigate this, I would recommend that the performance benchmarks be set by an independent scientific panel rather than regulatory agencies alone, and that these standards undergo regular review based on post-market surveillance data.

The CHW training system creates potential surveillance concerns, particularly around patient data privacy. To address this, the mobile app should be designed with privacy-by-default principles. Data should be anonymized at the point of collection, stored locally on devices when possible, and encrypted during transmission. Patients should be informed about data collection and have the option to opt out of the epidemiological tracking while still receiving diagnostic services.
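A minimal sketch of the "anonymize at the point of collection" idea, assuming a per-deployment secret salt held by the program rather than on the device vendor's servers (all names and values here are hypothetical illustrations, not a production design):

```python
import hashlib
import hmac

def pseudonymize(patient_id: str, deployment_salt: bytes) -> str:
    """Replace a patient identifier with a keyed hash before any upload.

    HMAC-SHA256 with a secret salt prevents simple rainbow-table
    reversal; the raw identifier never leaves the device.
    """
    return hmac.new(deployment_salt, patient_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical record as it would appear in the central database:
record = {
    "patient": pseudonymize("BD-2024-000123", b"example-secret-salt"),
    "result": "DENV-2 positive",
    "district": "Dhaka",   # coarse location only, no exact GPS fix
    "opted_in": True,      # epidemiological sharing is opt-in
}
```

Keyed pseudonymization still allows linking repeat tests from the same patient for epidemiology, while the opt-in flag and coarse geotag implement the consent and privacy-by-default principles described above.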

The partial open-source approach risks not going far enough to enable true local innovation. However, I believe protecting certain quality-control elements is necessary in the short term to build trust in CRISPR-based diagnostics. As the technology matures and community labs develop greater technical capacity, more components could be opened over time.

Assumptions and Uncertainties

This recommendation assumes that national regulatory agencies in endemic countries will be willing to adopt WHO-recommended expedited pathways, which is not guaranteed given concerns about sovereignty and varying risk tolerances. It also assumes that CHWs can be trained adequately to administer these tests, which may require more robust training than the two-day program I proposed.

Perhaps the biggest uncertainty is whether a diagnostic alone, without accompanying improvements in clinical care capacity, will actually improve health outcomes. If patients cannot access treatment after a positive diagnosis, we may simply be creating anxiety without benefit.

Target Audience: WHO Director-General

I am directing this recommendation to the WHO Director-General because WHO has the convening power to bring together national regulatory agencies, set international standards, and mobilize funding. The WHO Prequalification Program provides a proven model that could be adapted for this expedited pathway, and WHO’s existing relationships with CHW programs in endemic countries position it well to pilot the implementation strategy.


Ethical Concerns from Week 1

This week’s lectures and discussions raised several ethical concerns that were new to me or that I had not previously considered deeply.

The “Two-Tier” Problem

One concern that struck me was the ethical tension in creating expedited regulatory pathways specifically for low-resource settings. While the intention is to improve access, there is a risk of implicitly accepting lower standards for poor countries compared to wealthy ones. This raises questions of justice and dignity. Are we saying that a less accurate diagnostic is “good enough” for Bangladesh but not for the United States? How do we balance the utilitarian benefit of some access versus the principle that all humans deserve equal quality healthcare?

Dual-Use Dilemma with Accessible Technology

The discussion of making biotechnology more accessible through open-source approaches and simplified tools brought into focus the dual-use dilemma. The same technologies that could enable life-saving diagnostics in community labs could potentially be misused. I had not fully appreciated how democratizing access to tools like CRISPR creates governance challenges that traditional laboratory biosafety frameworks were not designed to address. There is no easy answer here because restricting access to protect against misuse also restricts beneficial applications.

Surveillance and Trust

The idea of using mobile apps to verify diagnostic quality and track disease outbreaks seems beneficial from a public health perspective, but this week’s discussions made me think more carefully about surveillance implications. In contexts where governments may use health data for other purposes (immigration enforcement, political control), even well-intentioned surveillance systems could be misused. This is particularly relevant in countries with less robust data protection laws. How do we build epidemiological surveillance systems that genuinely protect privacy while still being useful?

Proposed Governance Actions to Address These Concerns

To address the two-tier problem, I propose that any expedited regulatory pathway should require public transparency about the trade-offs being made. If performance standards are different for resource-limited settings, this should be clearly stated and justified, not hidden. Additionally, there should be a clear pathway for upgrading standards as capacity improves, so the “expedited” pathway does not become permanently inferior.

For dual-use concerns with accessible diagnostics, I think technical design choices matter enormously. For the dengue diagnostic, the guide RNAs could be designed to be highly specific to dengue serotypes and unlikely to work for other pathogens of concern. Publishing these specific sequences is lower risk than publishing a general platform that could be easily adapted. There should also be community-developed norms (similar to the iGEM safety and security guidelines) about what types of sequences and protocols are appropriate to openly share.

Regarding surveillance, I believe the governance action should focus on building opt-in systems with genuine informed consent, rather than mandatory reporting systems. The mobile verification app could be designed to provide value to the healthcare worker and patient (guidance, decision support) even without transmitting data centrally. When data is collected, there should be independent oversight (perhaps by civil society organizations) of how it is used, with strict prohibitions on secondary uses.


Week 2 Lecture Prep: Homework Questions

Professor Jacobson’s Questions

Question 1: Nature’s machinery for copying DNA is called polymerase. What is the error rate of polymerase? How does this compare to the length of the human genome? How does biology deal with that discrepancy?

DNA polymerase has an intrinsic error rate of approximately 1 mistake per 100,000 nucleotides (10^-5) during DNA synthesis without any correction mechanisms. This might sound accurate, but the human genome contains approximately 3 billion base pairs (6 billion in diploid cells), so an uncorrected rate of 10^-5 would produce roughly 60,000 errors every time a cell divides.

Biology has evolved sophisticated mechanisms to deal with this discrepancy through a multi-layered error correction system. First, many DNA polymerases have intrinsic 3’ to 5’ exonuclease activity that provides immediate proofreading. During synthesis, if the polymerase incorporates an incorrect nucleotide, the enzyme can detect the mismatch, reverse direction, excise the incorrect base, and try again. This proofreading reduces the error rate to approximately 10^-7 per base pair.

Second, there is a post-replication mismatch repair (MMR) system that scans newly synthesized DNA for errors that escaped proofreading. This system recognizes distortions in the DNA helix caused by mismatched base pairs, excises the error, and re-synthesizes the correct sequence. With both proofreading and mismatch repair functioning together, the final error rate in replication is reduced to approximately 10^-9 to 10^-10 per base pair per cell division, which translates to only about 0.1 to 1 mutations per human genome per generation. This remarkable fidelity is essential for maintaining genomic stability and preventing the accumulation of harmful mutations that could lead to cancer or other diseases.

Question 2: How many different ways are there to code (DNA nucleotide code) for an average human protein? In practice what are some of the reasons that all of these different codes don’t work to code for the protein of interest?

The genetic code is degenerate, meaning that multiple DNA sequences can encode the same protein. With 4 nucleotide bases (A, T, G, C) and codons being triplets, there are 64 possible codons but only 20 standard amino acids. For an average human protein of approximately 300 amino acids, the theoretical number of different DNA sequences that could encode it is astronomically large due to codon redundancy (synonymous codons).
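To make "astronomically large" concrete, here is a sketch that counts synonymous encodings. The degeneracy counts come from the standard genetic code; the 300-residue scale estimate assumes an average degeneracy of about 3, which is a rough illustrative figure (61 sense codons / 20 amino acids ≈ 3):

```python
import math

# Number of synonymous codons per amino acid in the standard genetic code.
degeneracy = {
    "Leu": 6, "Ser": 6, "Arg": 6,
    "Ala": 4, "Gly": 4, "Pro": 4, "Thr": 4, "Val": 4,
    "Ile": 3,
    "Phe": 2, "Tyr": 2, "His": 2, "Gln": 2, "Asn": 2,
    "Lys": 2, "Asp": 2, "Glu": 2, "Cys": 2,
    "Met": 1, "Trp": 1,
}

def num_encodings(protein):
    """Distinct DNA sequences encoding this amino acid chain."""
    return math.prod(degeneracy[aa] for aa in protein)

# Even a 5-residue peptide has hundreds of encodings:
peptide = ["Met", "Leu", "Ser", "Gly", "Lys"]
print(num_encodings(peptide))             # 1*6*6*4*2 = 288

# Rough scale for a 300-residue protein at mean degeneracy ~3:
print(f"~10^{300 * math.log10(3):.0f}")   # on the order of 10^143
```

For comparison, there are thought to be around 10^80 atoms in the observable universe, so the overwhelming majority of these encodings can never be sampled by evolution or by a lab.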

However, in practice, not all of these theoretically possible DNA sequences work equally well or at all to code for the protein of interest. Here are the main reasons:

Codon usage bias: Different organisms, and even different tissues within an organism, preferentially use certain synonymous codons over others. This bias reflects the relative abundance of different tRNA molecules in the cell. Using rare codons can slow down translation, cause ribosome stalling, or lead to premature termination of protein synthesis.

mRNA secondary structure: The DNA sequence determines the mRNA sequence, and certain sequences can form stable secondary structures (hairpins, loops) that interfere with ribosome binding, translation initiation, or elongation. Strong secondary structures near the start codon can prevent ribosomes from accessing the mRNA.

Splicing signals and regulatory elements: Some codon choices might inadvertently create cryptic splice sites, leading to incorrect mRNA processing. Additionally, certain sequences might create or destroy regulatory elements that affect mRNA stability, localization, or translation efficiency.

GC content: Sequences with extreme GC or AT content can cause problems. Very high GC content creates stable secondary structures and can be difficult to amplify or sequence. Very low GC content can make mRNA unstable.

Translation rate effects on folding: The rate at which a protein is translated can affect its proper folding. Synonymous codon substitutions that change translation kinetics can result in proteins with the same amino acid sequence but different three-dimensional structures, affecting function.

Repeated sequences: Some codon choices might create long stretches of repeated sequences that are difficult to synthesize chemically or prone to recombination or deletion in cells.

Dr. LeProust’s Questions

Question 1: What’s the most commonly used method for oligo synthesis currently?

The most commonly used method for oligonucleotide synthesis currently is the phosphoramidite method using solid-phase synthesis. This technique, pioneered by Marvin Caruthers in the early 1980s, has been the gold standard for over 35 years. The phosphoramidite method involves the stepwise addition of protected nucleoside phosphoramidite building blocks to a growing oligonucleotide chain that is attached to a solid support (typically controlled pore glass or CPG).

The synthesis cycle consists of four main steps: detritylation (removal of the 5’-DMT protecting group with acid), coupling (addition of the next nucleoside phosphoramidite activated by tetrazole), capping (blocking any unreacted 5’-hydroxyl groups with acetic anhydride to prevent deletion mutations), and oxidation (converting the unstable phosphite triester to a stable phosphate triester using iodine). This cycle is repeated for each nucleotide to be added, building the oligonucleotide in the 3’ to 5’ direction. The method’s high coupling efficiency (typically greater than 99% per cycle), automation compatibility, and reliability have made it the dominant technique for producing oligonucleotides for research, diagnostics, and therapeutics.

Question 2: Why is it difficult to make oligos longer than 200nt via direct synthesis?

Making oligonucleotides longer than 200 nucleotides via direct phosphoramidite synthesis is difficult due to the cumulative effect of incomplete coupling reactions. Even though the coupling efficiency for each cycle is very high (typically 98-99.5%), these small inefficiencies accumulate exponentially over many cycles.

For example, with a coupling efficiency of 99% per step, after 20 cycles you would have 0.99^20 ≈ 82% full-length product. After 100 cycles you would have 0.99^100 ≈ 37%, and after 200 cycles only 0.99^200 ≈ 13%. This means that as oligonucleotides get longer, an increasingly large fraction of the final product consists of failure sequences (deletion mutants) that are missing one or more nucleotides, making purification of the full-length product extremely difficult.
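The exponential falloff described above, as a short sketch:

```python
def full_length_fraction(eff: float, cycles: int) -> float:
    """Fraction of chains still full length after `cycles` coupling steps,
    assuming independent per-step coupling efficiency `eff`."""
    return eff ** cycles

for cycles in (20, 100, 200, 2000):
    frac = full_length_fraction(0.99, cycles)
    print(f"{cycles:>4} cycles: {frac:.2%} full-length")
# 20 → ~82%, 100 → ~37%, 200 → ~13%, 2000 → effectively 0%
```

The same one-liner also explains the next question: at 2000 cycles the full-length fraction is on the order of 10^-9, i.e. about two molecules per billion.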

Additionally, several chemical problems become more pronounced with increasing length. Depurination (loss of purine bases) occurs slowly during the acidic detritylation step and accumulates over many cycles. Side reactions, such as the formation of n+1 products (especially GG dimers from premature detritylation of dG), become more significant. The longer the synthesis, the more time the growing oligonucleotide spends on the column exposed to reagents, increasing the likelihood of damage. Furthermore, moisture contamination becomes more problematic in long syntheses because water can react with activated phosphoramidites, reducing coupling efficiency, and the synthesis equipment must maintain truly anhydrous conditions for extended periods.

Finally, from a practical standpoint, longer syntheses require more reagent consumption, more time, and result in lower yields, making them economically unfeasible for routine production. These cumulative chemical and practical limitations create a ceiling of approximately 200-300 nucleotides for standard phosphoramidite chemistry.

Question 3: Why can’t you make a 2000bp gene via direct oligo synthesis?

You cannot make a 2000 base pair gene via direct oligonucleotide synthesis using phosphoramidite chemistry because this length far exceeds the practical and theoretical limits of the method, which caps out at approximately 200-300 nucleotides.

As explained in the previous answer, coupling efficiency decreases exponentially with length. Even with an optimistic 99% coupling efficiency, attempting 2000 cycles would yield 0.99^2000 ≈ 2 × 10^-9, essentially zero full-length product. The overwhelming majority of synthesis attempts would terminate prematurely, creating a complex mixture of deletion sequences from which it would be impossible to purify the correct full-length product.

Beyond the mathematical impossibility, the chemical stability limits of the growing oligonucleotide chain and the synthetic reagents would be exceeded. The accumulated damage from hundreds of cycles of acidic treatment during detritylation, oxidative stress, and exposure to various reagents would destroy the oligonucleotide long before reaching 2000 bases. The solid support itself might begin to degrade, and the time required would make the synthesis economically unviable.

Therefore, to construct genes of this length, researchers use gene assembly methods. They synthesize multiple shorter oligonucleotides (typically 40-200 nucleotides each) that have overlapping sequences, then use enzymatic methods to join them together. Common assembly techniques include Gibson Assembly (which uses exonucleases, polymerase, and ligase to join overlapping fragments), Golden Gate Assembly (which uses Type IIS restriction enzymes for scarless assembly), and polymerase cycling assembly (PCA). These methods can reliably construct genes of several thousand base pairs by “stitching together” shorter synthetic oligonucleotides, circumventing the length limitations of direct chemical synthesis.
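A toy sketch of the "stitching" idea: tile a target sequence into overlapping oligos whose shared overlaps direct the order of assembly. The 60 nt length and 20 nt overlap are illustrative choices, not parameters from any particular assembly protocol:

```python
def design_oligos(seq: str, oligo_len: int = 60, overlap: int = 20):
    """Tile a sequence into overlapping fragments for enzymatic assembly.

    Each fragment shares `overlap` bases with the next, so a polymerase-
    or ligase-based method can join them in the correct order.
    """
    step = oligo_len - overlap
    oligos, i = [], 0
    while i + oligo_len < len(seq):
        oligos.append(seq[i:i + oligo_len])
        i += step
    oligos.append(seq[i:])  # last fragment takes the remainder
    return oligos

target = "ATGC" * 100            # 400 bp toy "gene"
frags = design_oligos(target)
print(len(frags), "oligos")      # each ≤60 nt, well within synthesis limits
```

Each fragment stays comfortably under the ~200 nt synthesis ceiling, and a real design tool would additionally tune overlap melting temperatures and avoid repeats, per the constraints discussed in Professor Jacobson's Question 2.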

Professor Church’s Question (Choose ONE)

Question 1 (Selected): What are the 10 essential amino acids in all animals and how does this affect your view of the “Lysine Contingency”?

The 10 essential amino acids that animals cannot synthesize and must obtain from their diet are: histidine, isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, valine, and arginine (though arginine is semi-essential in some species and life stages). These amino acids are considered “essential” because animals lack the metabolic pathways to synthesize them de novo from other precursors.

Understanding this biochemistry fundamentally undermines the “Lysine Contingency” from Jurassic Park. In the film and novel, InGen scientists claimed they engineered the dinosaurs to be unable to produce lysine, forcing them to depend on dietary supplements and thereby preventing them from surviving outside the park. However, this premise is scientifically flawed because no animal can produce lysine naturally—it is an essential amino acid across all animal taxa, including dinosaurs.

The “contingency” would only work if dinosaurs (unlike all other known animals) had possessed the biological machinery to synthesize lysine, and the scientists then knocked out that ability through genetic engineering. Since animals lack the lysine biosynthesis pathway found in plants and microorganisms, the genetic modification described in Jurassic Park is meaningless. The dinosaurs would have been just as dependent on dietary lysine as any other animal, whether genetically modified or not.

In reality, the dinosaurs could easily obtain sufficient lysine from their diet. Herbivorous dinosaurs could get lysine from plants (especially legumes and high-protein plant matter), and carnivorous dinosaurs would obtain abundant lysine by eating other animals, since animal tissues are rich in all essential amino acids. The lysine content of prey animals would fully satisfy the dietary needs of predatory dinosaurs.

If InGen truly wanted to create a metabolic dependency, they would have needed to engineer a requirement for a non-natural nutrient that does not exist in the environment, or create a dependency on a synthetic supplement that could be tightly controlled. For example, they could have knocked out the ability to synthesize a normally non-essential amino acid and ensured that amino acid was not available in sufficient quantities in the island’s ecosystem. Alternatively, they could have engineered a requirement for an entirely artificial molecule not found in nature.

The scientific error in the “Lysine Contingency” actually serves as an important reminder for biosecurity: when designing biological containment strategies, we must have a deep understanding of the basic biochemistry and ecology involved. Superficial or flawed assumptions about metabolism can lead to containment strategies that fail in practice, with potentially dangerous consequences.


AI Assistance Citation

I used an AI assistant to help develop and structure my responses for this assignment. Specifically, I used AI assistance for:

  • Brainstorming my bioengineering application (dengue diagnostic)
  • Structuring the governance framework and developing the three governance actions
  • Researching the Week 2 lecture preparation questions about DNA polymerase error rates, codon usage, and oligonucleotide synthesis
  • Organizing and formatting the complete homework in markdown

The ideas, decisions, and final direction of the work represent my own thinking and choices, with AI serving as a research and organizational tool.