Part A: Conceptual Questions

For an average amino acid, the molecular weight is about 100 Daltons, which is equivalent to 100 g/mol. If I assume meat is about 20% protein, then 500 g of meat contains roughly 100 g of protein. The relationship is: number of moles = mass (g) / molar mass (g/mol). So, 100 g ÷ 100 g/mol ≈ 1 mole of amino acids. One mole corresponds to approximately 6×10²³ molecules. Therefore, consuming 500 g of meat corresponds to on the order of 10²³ amino acid molecules.
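The estimate above can be sanity-checked with a few lines of Python. The 20% protein fraction and the 100 g/mol average residue mass are the assumed values from the reasoning, not measured quantities:

```python
# Back-of-envelope estimate: amino acid molecules in 500 g of meat.
# Assumed values: 20% protein by mass, ~100 g/mol per average amino acid residue.
AVOGADRO = 6.022e23              # molecules per mole

meat_g = 500
protein_g = meat_g * 0.20        # ~100 g of protein
moles_of_residues = protein_g / 100.0  # ~1 mol of amino acid residues
molecules = moles_of_residues * AVOGADRO

print(f"{molecules:.1e} amino acid molecules")  # on the order of 10^23
```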
Image source: Genetic Lifehacks. https://www.geneticlifehacks.com/sod1-gene-polymorphisms/
Part 1: SOD1 Binder Peptide Design

This is the original SOD1 sequence:

“MATKAVCVLKGDGPVQGIINFEQKESNGPVKVWGSIKGLTEGLHGFHVHEFGDNTAGCTSAGPHFNPLSRKHGGPKDEERHVGDLGNVTADKDGVADVSIEDSVISLSGDHCIIGRTLVVHEKADDLGKGGNEESTKTGNAGSRLACGVIGIAQ”
The mutation A4V means:
Alanine (A) at position 4 → Valine (V)
But UniProt includes the starting M, so the alanine that changes is the 5th residue in the sequence.
1 M
2 A
3 T
4 K
5 A ← this is the one that changes
6 V

We change the start of the sequence from MATKAV to MATKVV.
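The substitution can be applied programmatically. This is a minimal sketch, using the sequence quoted above and treating position 5 as the changed residue because the initiator Met is counted:

```python
# Apply the A4V substitution to the SOD1 sequence above.
# Position 5 in the Met-containing sequence corresponds to A4 of the mature protein.
SOD1 = (
    "MATKAVCVLKGDGPVQGIINFEQKESNGPVKVWGSIKGLTEGLHGFHVHEFGDNTAGCTSAGPHFNPLSRK"
    "HGGPKDEERHVGDLGNVTADKDGVADVSIEDSVISLSGDHCIIGRTLVVHEKADDLGKGGNEESTKTGNAG"
    "SRLACGVIGIAQ"
)

def point_mutate(seq, pos, old, new):
    """Replace residue `old` at 1-based position `pos` with `new`."""
    assert seq[pos - 1] == old, f"expected {old} at position {pos}"
    return seq[:pos - 1] + new + seq[pos:]

sod1_a4v = point_mutate(SOD1, 5, "A", "V")
print(sod1_a4v[:6])  # MATKVV
```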
DNA Assembly

Phusion High-Fidelity PCR Master Mix contains DNA polymerase, dNTPs, buffer, and Mg²⁺. The polymerase has proofreading activity (3′→5′ exonuclease), which reduces errors during DNA amplification. The dNTPs act as building blocks, while the buffer and Mg²⁺ maintain optimal conditions for enzyme activity. This is important in cloning because even small mutations can affect protein function.
https://www.neb.com/en/products/e0553-phusion-high-fidelity-pcr-kit
Question 1:

I propose a digital, governance-aware health data platform designed to support a population-level understanding of cancer and tumor prevalence in Iraq.
At present, most medical records in Iraq are paper-based and fragmented across hospitals or retained by patients, making them vulnerable to loss and preventing the creation of a reliable national picture of cancer types, trends, and possible contributing factors. As a result, medical research, evidence-based policymaking, and long-term public health planning are severely limited.
This proposed tool would not collect full patient records, enable diagnosis, or identify individuals. Instead, it would focus on aggregated, de-identified clinical and contextual data that can be used to understand broader cancer patterns while respecting patient privacy, consent, and cultural sensitivities.
The primary goal of this platform is to address a critical infrastructure gap in Iraq’s health system by enabling ethical research and informed decision-making, while explicitly avoiding surveillance, stigmatization, or misuse of sensitive medical information.
While neurological and psychological conditions represent equally serious challenges in Iraq, they are intentionally excluded from the initial scope of this design due to heightened ethical, privacy, and stigma-related risks.
Question 2:
Goal 1: Protect patient dignity, privacy, and trust
The primary governance goal of this project is to protect patient dignity and privacy in a context where cancer remains highly stigmatized and medical ethics are inconsistently enforced. Given the absence of robust digital infrastructure and uneven adherence to confidentiality standards, there is a significant risk that sensitive health information could be misused, disclosed without consent, or lead to social harm.
This goal emphasizes minimizing data collection, enforcing de-identification by design, and ensuring that patients and communities can trust that participation will not expose them to discrimination, blame, or loss of dignity.
Goal 2: Enable ethical, feasible research under limited resources
A second key goal is to enable responsible cancer research in Iraq without creating governance barriers that make research impossible in practice. While strong safeguards are necessary, overly restrictive rules, lack of funding, limited governmental support, and dependence on expensive foreign technologies could unintentionally suppress research and innovation.
This goal therefore prioritizes governance structures that are realistic for a low-resource setting, support researcher autonomy within ethical boundaries, and allow gradual capacity building rather than imposing idealized systems that cannot be sustained locally.
Question 3:
Governance Action 1: Privacy-by-design transition from paper to aggregated digital reporting
(Technical + institutional action | Led by academic researchers and hospitals)
Purpose: Currently, cancer-related data in Iraq is largely paper-based, fragmented, and vulnerable to loss or unauthorized access. This governance action proposes a transition from individual paper records to aggregated, de-identified digital reporting, enabling population-level insight while minimizing privacy risks.
Design
• Hospitals and clinics report summary-level cancer data (e.g., tumor type, age range, region, and high-level risk factors where available).
• No personal identifiers such as names, national IDs, or addresses are collected.
• Data entry tools are designed to be low-cost, simple, and compatible with limited digital infrastructure.
• Training emphasizes what data should not be collected, reinforcing privacy-by-design principles.
Assumptions
• Aggregated data is sufficient to identify national cancer trends.
• Healthcare staff can be trained to follow simplified digital reporting protocols.
Risks of Failure & “Success”
• Failure risk: Limited technical capacity or staff resistance could result in incomplete or inconsistent data reporting.
• Success risk: Even aggregated data could be misinterpreted or misused if governance oversight is weak.
Governance Action 2: Strengthened ethics enforcement and hospital-level conduct standards
(Institutional rule | Led by hospitals, universities, and health authorities)
Purpose: Although ethical guidelines exist, they are not consistently enforced. This action aims to strengthen ethical conduct within hospitals, particularly around patient privacy, infection control, and respect for patient dignity.
Design
• Establish clear, enforceable standards for:
o patient confidentiality
o limits on hospital visitors for immuno-compromised cancer patients
o basic sterilization and infection-control practices
• Ethics training is integrated into routine hospital operations rather than optional workshops.
• Accountability mechanisms focus on institutional responsibility rather than individual blame.
Assumptions
• Institutional enforcement is more effective than relying solely on individual compliance.
• Hospitals have the authority to implement and monitor conduct standards.
Risks of Failure & “Success”
• Failure risk: Standards may exist only on paper without consistent enforcement.
• Success risk: Strict enforcement could be perceived as culturally insensitive if not accompanied by clear communication.
Governance Action 3: Community-centered cancer education and engagement strategy
(Social and educational action | Led by hospitals, NGOs, and public health educators)
Purpose: In many rural and underserved areas, cancer and tumors are misunderstood, sometimes viewed as contagious or caused by moral failure. This action treats education as a governance tool to reduce stigma, misinformation, and resistance to ethical data sharing.
Design
• Community education initiatives explaining:
o what cancer and tumors are and are not
o common risk factors (genetic factors, environmental pollution, viral causes, lifestyle and diet)
o the importance of limiting hospital visits to protect patient immunity
• Education delivered by trusted local healthcare workers and community figures.
• No requirement for digital literacy or individual data submission.
Assumptions
• Trust in local messengers increases cooperation and understanding.
• Education can reduce stigma and harmful practices.
Risks of Failure & “Success”
• Failure risk: Misinformation may spread faster than educational efforts.
• Success risk: Communities may expect direct medical treatment or financial support beyond the scope of the project.
Question 4:
Option 1: Privacy-by-design aggregated digital reporting
Option 2: Ethics enforcement and hospital-level conduct standards
Option 3: Community-centered cancer education and engagement
Does the option:                                   Option 1   Option 2   Option 3
Enhance Biosecurity
• By preventing incidents                             1          2         n/a
• By helping respond                                  1          2          3
Foster Lab Safety
• By preventing incidents                             2          1         n/a
• By helping respond                                  2          1         n/a
Protect the environment
• By preventing incidents                            n/a        n/a        n/a
• By helping respond                                 n/a        n/a        n/a
Other considerations
• Minimizing costs and burdens to stakeholders        2          3          1
• Feasibility                                         2          3          1
• Not impede research                                 1          2          1
• Promote constructive applications                   1          2          1
Question 5:
I prioritize a sequenced combination of governance actions, rather than treating all options as equally urgent.
Governance Action 1 (privacy-by-design aggregated digital reporting) is the primary priority. It directly addresses the central problem identified in this proposal—the absence of reliable, population-level cancer data due to fragmented, paper-based records—while also providing the strongest protection for patient privacy and dignity. Without this ethical data foundation, other governance efforts would lack a practical and legitimate basis.
Governance Action 3 (community-centered cancer education and engagement) is prioritized as a supporting and parallel action. Public understanding and trust are essential for any data-related initiative to be ethically acceptable and practically feasible in the Iraqi context, particularly in rural and underserved communities. Education helps reduce stigma, misinformation, and harmful practices, and supports voluntary participation without coercion.
Governance Action 2 (ethics enforcement and hospital-level conduct standards) is recognized as critically important but is treated as a longer-term priority. Although it has strong potential to improve patient safety and ethical compliance, it depends on sustained institutional capacity, enforcement mechanisms, and governmental support, which remain uncertain in the short term.
This prioritization reflects key trade-offs between ethical protection, feasibility under limited resources, and institutional readiness. It also acknowledges uncertainty regarding long-term enforcement and funding. These recommendations are intended for local hospitals, universities, and public health institutions in Iraq, as well as international academic and public-health collaborators supporting ethical research and capacity building.
Reflecting on this week’s class and assignment, one thing that really stood out to me was how easily health data initiatives can cause harm—even when the intentions are good—if governance is treated as something secondary rather than built in from the start. Before this week, I mostly thought of ethical risk as something tied to deliberate misuse. Working through this assignment made it clear to me that harm can also come from structural issues, such as paper-based systems, unclear responsibility, and weak enforcement, even when no one is acting maliciously.
I was also struck by the tension between protecting patient dignity and making research actually possible in low-resource settings. While strong safeguards are clearly necessary, overly idealized governance frameworks can unintentionally exclude researchers and communities that don’t have the funding or infrastructure to meet those standards. This reinforced for me the importance of privacy-by-design approaches, community engagement, and realistic, step-by-step governance strategies that build trust over time, rather than relying only on top-down rules.
AI Use Statement:
I used ChatGPT as a support tool to help clarify assignment instructions, organize my thinking, and refine the wording of my responses. All ideas, decisions, and final interpretations reflect my own understanding.
Week 2 HW: DNA read, write, and edit
Homework Questions from Professor Jacobson:
According to the Lecture 2 slides, the intrinsic error rate of biological DNA polymerase is approximately 1 error per 10⁶ base pairs. The slides also indicate that the human genome is approximately 3.2 × 10⁹ base pairs in length. At this error rate, replication of the human genome would result in thousands of errors per replication cycle if no additional correction mechanisms existed.
The slides explain that biology addresses this discrepancy through error-correcting mechanisms, including proofreading activity associated with DNA polymerase and post-replication mismatch repair systems, such as the MutS pathway. Together, these mechanisms reduce the effective mutation rate and allow large genomes to be stably maintained.
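The "thousands of errors" figure follows directly from multiplying the two slide values, which can be checked in a couple of lines:

```python
# Expected uncorrected errors per replication, using the slide values:
# polymerase error rate ~1e-6 per base pair, genome ~3.2e9 bp.
error_rate = 1e-6
genome_bp = 3.2e9
expected_errors = error_rate * genome_bp
print(round(expected_errors))  # ~3,200 errors per replication cycle
```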
The Lecture 2 slides indicate that the coding sequence for an average human protein is approximately 1036 base pairs long. Because DNA consists of four possible nucleotides, the total number of possible nucleotide sequences of this length is 4¹⁰³⁶, which follows directly from basic combinatorics (four choices at each position).
The slides further show that, in practice, only a small subset of these sequences are usable. Constraints illustrated in the slides include GC content effects, secondary structure formation, and sequence-dependent synthesis limitations, all of which can interfere with DNA synthesis, transcription, or downstream use. As a result, most theoretically possible sequences are not viable in biological or synthetic contexts.
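To get a feel for how astronomically large 4¹⁰³⁶ is, Python's arbitrary-precision integers can compute it exactly:

```python
# Size of sequence space for a 1036-bp coding sequence: four choices per position.
n = 1036
num_sequences = 4 ** n            # exact big integer in Python
print(len(str(num_sequences)))    # number of decimal digits: 624
```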
Homework Questions from Dr. LeProust:
According to the Lecture 2 slides, the most commonly used method for oligonucleotide synthesis is solid-phase phosphoramidite chemical synthesis. The slides describe this as a stepwise process in which nucleotides are added sequentially to a growing DNA strand attached to a solid support through repeated chemical cycles.
The historical overview in the slides also notes that the phosphoramidite method, developed by Caruthers (1981), remains the foundation of modern DNA synthesis technologies.
The Lecture 2 slides explain that direct oligonucleotide synthesis becomes difficult beyond approximately 200 nucleotides because errors accumulate with each synthesis cycle. Each nucleotide addition has a finite probability of failure, and as the number of synthesis steps increases, the yield of full-length, correct oligos decreases sharply.
The slides also show that longer oligos suffer from truncation products, base incorporation errors, and sequence-dependent effects, including high GC content and secondary structure formation. These factors reduce both yield and purity, making long oligos impractical to synthesize reliably in a single continuous process.
As shown in the Lecture 2 slides, a 2000 base-pair gene cannot be synthesized directly because the cumulative error rate of chemical synthesis over thousands of nucleotide additions would result in an extremely low fraction of correct, full-length molecules.
Instead, the slides describe classical gene synthesis, in which long genes are assembled from many shorter oligos using enzymatic methods such as PCR-based assembly and ligation. This hierarchical approach allows errors to be managed and corrected during assembly, making long gene construction feasible.
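The cumulative-error argument in the two answers above can be made concrete. Assuming an illustrative stepwise coupling efficiency of 99% (a representative figure, not a value from the slides), the fraction of full-length product falls off exponentially with length:

```python
# Fraction of strands in which every coupling step succeeded, assuming a
# hypothetical 99% per-step coupling efficiency (illustrative value only).
def full_length_fraction(n_bases, coupling_eff=0.99):
    # n_bases - 1 coupling steps follow the first support-bound base
    return coupling_eff ** (n_bases - 1)

print(f"{full_length_fraction(200):.3f}")   # ~0.135 at 200 nt
print(f"{full_length_fraction(2000):.1e}")  # ~1.9e-9 at 2000 nt
```

This is why 200 nt is a practical ceiling for direct synthesis and why a 2000 bp gene must be assembled from shorter oligos.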
Homework Question from George Church:
Animals require a conserved set of essential amino acids that must be obtained through diet. These include histidine, isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, valine, and arginine.
Prof. Church’s Lecture 2 slide #4 highlights the genetic code as a fixed mapping between DNA codons and amino acids. In this context, the Lysine Contingency reflects a fundamental biological constraint: lysine is essential and encoded by the genetic code, yet animals cannot synthesize it. This suggests that biological systems are historically and chemically constrained, and that changing or replacing core amino acids such as lysine would require large-scale re-engineering of both metabolism and the genetic code.
References & Use of Tools:
The primary sources for all homework answers are:

Jacobson, J. HTGAA Lecture 2: Gene Synthesis (MIT, 2026). Used for questions on DNA polymerase error rates, genome scale, protein length estimates, combinatorics of DNA sequences, GC content effects, secondary structure, and biological error correction.

LeProust, E. HTGAA Lecture 2: Oligonucleotide and Gene Synthesis (MIT, 2026). Used for questions on phosphoramidite oligonucleotide synthesis, limitations of long oligo synthesis, error accumulation, and classical gene assembly strategies.

Church, G. HTGAA Lecture 2: Reading & Writing Life (MIT, 2026), Slide #4. Used for conceptual framing of the genetic code (DNA → mRNA → amino acids) and interpretation of the “Lysine Contingency.”
All interpretations derived from lecture material are explicitly tied to the concepts presented in these slides.
(Used as background confirmation that lysine is classified as an essential amino acid.)
No external references were used for the Jacobson or LeProust questions beyond the lecture slides.
An AI-based writing tool (ChatGPT) was used solely to assist with wording, organization, and clarity. All factual content was derived from the cited lectures and explicitly listed external sources.
Part 0: Basics of Gel Electrophoresis
Gel electrophoresis is a technique used to separate DNA fragments based on their size by applying an electric field. I have performed gel electrophoresis multiple times during molecular biology laboratory work in Iraq, as well as during my internship at DGIST University in South Korea in the Molecular Neuroscience Department. Through these experiences, I became familiar with the practical workflow, while the HTGAA lectures helped reinforce the underlying concepts.
In this technique, DNA migrates through an agarose gel toward the positive electrode because DNA is negatively charged. The gel is prepared using agarose, poured into a casting tray, and fitted with a comb to form wells. DNA samples are mixed with a loading dye and carefully loaded into the wells along with a DNA ladder for size reference. After applying an electric current, DNA fragments separate within the gel and are visualized using a gel imaging system to capture and document the results.
In my previous lab work, gel electrophoresis was mainly used for genotyping and for verifying DNA samples prior to sequencing, helping confirm fragment size and sample quality before downstream analysis. Revisiting this technique in the context of HTGAA helped me better articulate how gel electrophoresis fits into broader molecular biology and sequencing workflows.
In many workflows, DNA samples are PCR-amplified prior to gel electrophoresis to ensure sufficient DNA quantity and to analyze specific target fragments, particularly in genotyping and sequencing preparation.
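Fragment sizes in a gel are typically read off against the ladder, since migration distance is approximately linear in log10(size) over the resolving range. A minimal sketch with hypothetical ladder measurements (the band distances here are made-up illustrative numbers):

```python
import math

# Hypothetical ladder measurements: (size in bp, migration distance in mm).
ladder = [(1000, 20.0), (750, 24.0), (500, 30.0), (250, 40.0)]

# Least-squares fit of distance = a + b * log10(bp).
xs = [math.log10(bp) for bp, _ in ladder]
ys = [d for _, d in ladder]
n = len(ladder)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

def estimate_bp(distance_mm):
    """Estimate fragment size from its migration distance via the fitted line."""
    return 10 ** ((distance_mm - a) / b)
```

With these example numbers, a band at 30 mm comes back as roughly 500 bp, matching the ladder entry it was fit to.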
Part 1: Benchling & In-silico Gel Art
Figure 1:
Figure 2:
Part 3: DNA Design Challenge
3.1. Choose your protein
Amyloid Beta (Aβ), specifically the Aβ(1–42) peptide derived from the human amyloid precursor protein (APP).
I chose Amyloid-β because I want to understand how a normal peptide becomes harmful when it misfolds and aggregates, and how that process can disrupt brain function, memory, and overall body function in Alzheimer’s disease. Alzheimer’s affects many older adults and can gradually remove their ability to access their memories and daily independence, so I’m personally motivated to understand the molecular pathway that leads to these changes. In addition, Aβ is strongly linked to the classic Alzheimer’s pathology of extracellular plaques, which makes it a clear and widely studied starting point for connecting sequence → structure → disease mechanism.
Amyloid-beta is not encoded as an independent gene. Instead, it is generated by proteolytic cleavage of the amyloid-beta precursor protein (APP). Therefore, the APP protein sequence was used as the source sequence, with specific focus on the region that produces the Aβ peptide.
I used an online reverse translation tool (bioinformatics.org SMS2) to convert the amino acid sequence of my chosen protein (human APP; UniProt P05067) into a corresponding DNA coding sequence. Reverse translation is not unique because the genetic code is degenerate, meaning multiple codons can encode the same amino acid. Therefore, the sequence below represents one valid nucleotide sequence that could encode the same protein.
Reverse-translated DNA sequence (coding DNA; A/T/G/C only):
reverse translation of sp|P05067|A4_HUMAN Amyloid-beta precursor protein OS=Homo sapiens OX=9606 GN=APP PE=1 SV=3 to a 2310 base sequence of most likely codons.
atgctgccgggcctggcgctgctgctgctggcggcgtggaccgcgcgcgcgctggaagtg
ccgaccgatggcaacgcgggcctgctggcggaaccgcagattgcgatgttttgcggccgc
ctgaacatgcatatgaacgtgcagaacggcaaatgggatagcgatccgagcggcaccaaa
acctgcattgataccaaagaaggcattctgcagtattgccaggaagtgtatccggaactg
cagattaccaacgtggtggaagcgaaccagccggtgaccattcagaactggtgcaaacgc
ggccgcaaacagtgcaaaacccatccgcattttgtgattccgtatcgctgcctggtgggc
gaatttgtgagcgatgcgctgctggtgccggataaatgcaaatttctgcatcaggaacgc
atggatgtgtgcgaaacccatctgcattggcataccgtggcgaaagaaacctgcagcgaa
aaaagcaccaacctgcatgattatggcatgctgctgccgtgcggcattgataaatttcgc
ggcgtggaatttgtgtgctgcccgctggcggaagaaagcgataacgtggatagcgcggat
gcggaagaagatgatagcgatgtgtggtggggcggcgcggataccgattatgcggatggc
agcgaagataaagtggtggaagtggcggaagaagaagaagtggcggaagtggaagaagaa
gaagcggatgatgatgaagatgatgaagatggcgatgaagtggaagaagaagcggaagaa
ccgtatgaagaagcgaccgaacgcaccaccagcattgcgaccaccaccaccaccaccacc
gaaagcgtggaagaagtggtgcgcgaagtgtgcagcgaacaggcggaaaccggcccgtgc
cgcgcgatgattagccgctggtattttgatgtgaccgaaggcaaatgcgcgccgtttttt
tatggcggctgcggcggcaaccgcaacaactttgataccgaagaatattgcatggcggtg
tgcggcagcgcgatgagccagagcctgctgaaaaccacccaggaaccgctggcgcgcgat
ccggtgaaactgccgaccaccgcggcgagcaccccggatgcggtggataaatatctggaa
accccgggcgatgaaaacgaacatgcgcattttcagaaagcgaaagaacgcctggaagcg
aaacatcgcgaacgcatgagccaggtgatgcgcgaatgggaagaagcggaacgccaggcg
aaaaacctgccgaaagcggataaaaaagcggtgattcagcattttcaggaaaaagtggaa
agcctggaacaggaagcggcgaacgaacgccagcagctggtggaaacccatatggcgcgc
gtggaagcgatgctgaacgatcgccgccgcctggcgctggaaaactatattaccgcgctg
caggcggtgccgccgcgcccgcgccatgtgtttaacatgctgaaaaaatatgtgcgcgcg
gaacagaaagatcgccagcataccctgaaacattttgaacatgtgcgcatggtggatccg
aaaaaagcggcgcagattcgcagccaggtgatgacccatctgcgcgtgatttatgaacgc
atgaaccagagcctgagcctgctgtataacgtgccggcggtggcggaagaaattcaggat
gaagtggatgaactgctgcagaaagaacagaactatagcgatgatgtgctggcgaacatg
attagcgaaccgcgcattagctatggcaacgatgcgctgatgccgagcctgaccgaaacc
aaaaccaccgtggaactgctgccggtgaacggcgaatttagcctggatgatctgcagccg
tggcatagctttggcgcggatagcgtgccggcgaacaccgaaaacgaagtggaaccggtg
gatgcgcgcccggcggcggatcgcggcctgaccacccgcccgggcagcggcctgaccaac
attaaaaccgaagaaattagcgaagtgaaaatggatgcggaatttcgccatgatagcggc
tatgaagtgcatcatcagaaactggtgttttttgcggaagatgtgggcagcaacaaaggc
gcgattattggcctgatggtgggcggcgtggtgattgcgaccgtgattgtgattaccctg
gtgatgctgaaaaaaaaacagtataccagcattcatcatggcgtggtggaagtggatgcg
gcggtgaccccggaagaacgccatctgagcaaaatgcagcagaacggctatgaaaacccg
acctataaattttttgaacagatgcagaac
3.3 Codon optimization
To optimize the codon usage of the nucleotide sequence obtained in the previous step, I used the VectorBuilder online codon optimization tool. The reverse-translated DNA sequence was entered into the tool and Mus musculus (mouse) was selected as the target organism.
Codon optimization is necessary because, although the genetic code is universal, different organisms preferentially use specific codons due to differences in tRNA abundance and translation efficiency. If a gene contains codons that are rarely used in the host organism, translation can be inefficient and protein expression levels may be reduced. Codon optimization replaces rare codons with synonymous codons that are more frequently used by the host, while preserving the amino acid sequence of the protein.
After optimization, improvements were observed in sequence quality metrics. The GC content increased slightly from 55.37% to 57.62%, remaining within an optimal range for stability and transcription. In addition, the Codon Adaptation Index (CAI) increased from 0.74 to 0.92, indicating a substantially improved match between the codon usage of the sequence and the translational machinery of the mouse host. These changes suggest a higher likelihood of efficient translation and protein expression in mouse-based experimental systems.
Mouse was chosen as the target organism because Alzheimer’s disease research is commonly conducted using mouse models, including APP-related transgenic and knock-in models. Optimizing the codon usage for mouse therefore increases the biological relevance of the designed sequence.
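Of the two metrics reported above, GC content can be recomputed directly from a sequence; CAI additionally requires a host codon-usage table, so only GC is sketched here:

```python
def gc_content(seq):
    """Percent of bases in a DNA sequence that are G or C."""
    seq = seq.upper()
    return 100.0 * sum(seq.count(base) for base in "GC") / len(seq)

print(gc_content("ATGC"))  # 50.0
```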
Codon-optimized DNA sequence:
Improved DNA[1]: GC=57.62%, CAI=0.92
ATGCTGCCAGGCCTGGCCCTGCTGCTGCTCGCCGCCTGGACAGCCCGGGCCCTGGAAGTGCCAACCGACGGCAACGCTGGACTGCTGGCTGAGCCTCAGATCGCCATGTTTTGTGGGCGGCTGAATATGCACATGAATGTGCAGAACGGAAAGTGGGACTCTGACCCCTCCGGCACCAAAACCTGTATCGATACAAAGGAAGGCATTCTGCAGTACTGTCAGGAGGTGTATCCCGAGCTGCAGATCACCAACGTGGTGGAGGCCAACCAGCCTGTGACCATCCAAAATTGGTGCAAAAGGGGTAGAAAGCAGTGTAAGACACACCCACACTTTGTGATCCCATATAGATGTCTGGTGGGGGAGTTCGTGTCCGACGCCCTGCTGGTGCCCGACAAGTGCAAGTTTCTGCACCAGGAGAGAATGGACGTGTGCGAGACACACCTGCACTGGCACACAGTGGCTAAGGAGACCTGTAGTGAGAAGAGCACCAACCTGCACGACTACGGGATGCTGCTGCCCTGCGGTATCGACAAGTTTAGAGGTGTGGAATTCGTGTGCTGTCCTCTGGCCGAGGAGTCCGACAATGTGGATAGCGCCGACGCCGAGGAGGACGACAGCGACGTGTGGTGGGGCGGCGCCGATACAGACTACGCCGATGGCTCCGAAGACAAGGTGGTGGAGGTGGCCGAGGAAGAGGAAGTGGCCGAGGTGGAGGAGGAGGAGGCTGACGACGACGAGGACGATGAGGACGGCGATGAGGTTGAGGAGGAGGCCGAGGAGCCTTACGAGGAAGCCACCGAGCGGACTACTTCCATTGCTACCACCACCACCACCACTACCGAGAGCGTGGAGGAGGTGGTGAGAGAGGTGTGCAGCGAGCAGGCCGAGACCGGCCCTTGTAGAGCCATGATCTCCCGGTGGTATTTCGATGTGACCGAGGGAAAGTGCGCCCCTTTCTTCTACGGAGGCTGTGGAGGCAACAGGAACAATTTTGACACTGAGGAGTACTGTATGGCCGTGTGTGGCTCCGCCATGAGCCAGTCCCTGCTGAAGACCACTCAGGAGCCCCTGGCACGGGACCCTGTGAAGCTGCCCACCACCGCCGCTAGCACACCCGACGCCGTGGACAAGTATTTGGAGACCCCAGGAGACGAGAATGAGCACGCACACTTTCAGAAGGCTAAGGAGCGCCTGGAGGCTAAGCACCGAGAAAGGATGTCTCAGGTGATGCGCGAGTGGGAGGAAGCCGAGAGGCAGGCTAAGAACCTGCCTAAAGCTGACAAAAAAGCCGTGATCCAGCATTTCCAGGAGAAGGTGGAGAGCCTGGAACAGGAGGCTGCCAACGAGAGACAGCAGCTGGTGGAGACTCACATGGCTCGAGTGGAGGCCATGCTGAACGACAGGAGGAGGCTGGCCCTGGAGAACTACATCACCGCTCTGCAGGCCGTGCCTCCCAGGCCAAGGCATGTGTTTAACATGCTGAAGAAGTACGTGAGGGCAGAACAGAAGGACCGGCAACACACCCTGAAACACTTCGAGCACGTTAGAATGGTGGATCCTAAGAAAGCCGCTCAGATTAGAAGCCAGGTGATGACCCACCTGAGAGTGATTTACGAGAGAATGAACCAAAGCCTGTCTCTGCTGTATAATGTGCCCGCCGTCGCCGAGGAGATCCAGGACGAGGTGGACGAACTGCTGCAGAAGGAGCAAAATTACTCAGATGACGTGCTGGCAAACATGATCAGCGAACCACGCATCTCCTACGGCAACGACGCCCTGATGCCTTCCCTGACCGAAACTAAGACCACTGTGGAGCTGCTCCCAGTGAACGGCGAATTCTCCCTCGACGACCTGCAGCCTTGGCACAGCTTCGGGGCCGACTCCGTGCCTGCAAACACTGAAAACGAGGTGGAGCCTGTGGACGCAAGACCTGCCGCCGATAGAGGACTGACAACAAGACCTGGCAGCGGACTGACCAACATCAAGACCGAGGAGATTAG
TGAGGTGAAGATGGATGCCGAGTTCAGGCACGATAGCGGGTACGAGGTACACCACCAGAAGCTGGTGTTCTTCGCTGAGGATGTGGGCAGCAATAAAGGAGCCATTATCGGCCTGATGGTGGGAGGGGTGGTGATCGCCACAGTGATCGTTATCACCCTGGTGATGCTGAAGAAGAAGCAGTACACCTCCATTCACCATGGGGTCGTCGAAGTGGATGCCGCCGTGACTCCAGAGGAGAGACACCTGAGCAAGATGCAGCAGAACGGGTATGAGAACCCAACCTATAAGTTCTTCGAGCAGATGCAGAAC
Once a codon-optimized DNA sequence is obtained, the protein can be produced using either cell-dependent or cell-free expression technologies.
Cell-dependent expression
In a cell-dependent system, the codon-optimized DNA sequence is first inserted into an expression vector that contains a promoter and other regulatory elements required for transcription. This vector is then introduced into host cells, such as mouse or mammalian cells. Inside the cell, the DNA sequence is transcribed into messenger RNA (mRNA) by RNA polymerase. The mRNA is subsequently translated by ribosomes, which read the nucleotide codons and use transfer RNAs (tRNAs) to assemble the corresponding amino acids into the protein. This approach allows protein production in a biologically relevant cellular environment and is commonly used in disease-related research.
Cell-free expression
Alternatively, the DNA sequence (or the corresponding mRNA) can be used in a cell-free expression system. These systems contain purified ribosomes, enzymes, and translation factors, allowing transcription and translation to occur in vitro without living cells. Cell-free expression enables rapid protein production and precise experimental control, although it may not fully replicate cellular processes such as protein trafficking or degradation.
3.5 (Optional) How does it work in nature / biological systems?
How a single gene can code for multiple proteins
In biological systems, a single gene can give rise to multiple proteins through transcriptional and post-transcriptional mechanisms. One major mechanism is alternative splicing, where different combinations of exons are joined from the same pre-mRNA to produce multiple mRNA transcripts. Additional diversity can arise from alternative transcription start sites or alternative polyadenylation. In the case of the amyloid precursor protein (APP), different processing pathways generate distinct fragments, including amyloid-beta peptides, demonstrating how one gene can produce multiple biologically relevant protein products.
DNA → RNA → Protein alignment
To visualize the flow of genetic information, the DNA sequence was translated using the ExPASy Translate tool, which displays all possible reading frames. The biologically relevant open reading frame was identified in the 5’→3’ Frame 1, which begins with a start codon (ATG) and produces a continuous amino acid sequence.
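To illustrate what the Frame 1 translation step does, here is a minimal Python sketch. The codon table is deliberately truncated to a few entries and the input is a toy sequence, so this is only an illustration of the logic behind the ExPASy Translate tool, not a replacement for it.

```python
# Minimal sketch of 5'->3' Frame 1 translation. Only a handful of codons
# are included for illustration; the real tool uses the full 64-codon
# standard table.
CODON_TABLE = {
    "ATG": "M", "GCT": "A", "AAA": "K", "GAA": "E",
    "TGG": "W", "TAA": "*", "TAG": "*", "TGA": "*",
}

def translate_frame1(dna: str) -> str:
    """Translate Frame 1 of a DNA sequence, stopping at the first stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "X")  # "X" = codon not in this toy table
        if aa == "*":  # a stop codon ends the open reading frame
            break
        protein.append(aa)
    return "".join(protein)

print(translate_frame1("ATGGCTAAAGAATGGTAA"))  # → MAKEW
```

The biologically relevant open reading frame is the one that begins with ATG and runs uninterrupted to a stop codon, which is exactly what this loop checks.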
Week 3 HW: lab-automation
Post-Lab Questions
Find and describe a published paper that utilizes the Opentrons or an automation tool to achieve novel biological applications.
Article: Jacob B. DeRoo, Alec A. Jones, Caroline K. Slaughter, Tim W. Ahr, Sam M. Stroup, Grace B. Thompson, and Christopher D. Snow, “Automation of protein crystallization scaleup via Opentrons-2 liquid handling,” SLAS Technology, vol. 32, 2025, 100268, ISSN 2472-6303.
General overview: Protein crystallization is a complex and time-consuming process that is essential for determining protein structures in structural biology. Producing well-formed protein crystals requires careful optimization of multiple conditions, including protein concentration, precipitant composition, and mixing accuracy. Because these parameters cannot be predicted in advance, crystallization is largely a trial-and-error process that demands repeated setup of crystallization plates. Traditionally, this process is performed manually, making it labor-intensive and susceptible to human error and variability. In addition, viscous protein solutions are difficult to handle consistently, which further complicates crystallization experiments.
In this study, the authors demonstrate how an Opentrons OT-2 liquid-handling robot can be adapted to automate protein crystallization plate setup. The robot was programmed using Python scripts, allowing precise control over aspirating, dispensing, and positioning steps. The researchers used Hampton Research Cryschem 24-well plates, which are larger than standard microplates and not directly compatible with the OT-2 deck. To address this limitation, the team designed a custom 3D-printed adapter made from polylactic acid (PLA) that securely clips into two deck slots and holds the crystallization plate in place. This setup enabled accurate and reproducible preparation of sitting-drop crystallization experiments using an affordable, open-source automation platform.
Findings: The authors validated the automated workflow using multiple experimental approaches. First, food dyes (red, blue, and yellow) were dispensed into colorless water to visually confirm accurate gradient formation across the crystallization plate, showing no significant difference between automated and manual pipetting. The system was then tested using hen egg white lysozyme (HEWL), a protein known to crystallize reliably under suitable conditions. During testing, the authors identified that the GEN1 P10 pipette had difficulty consistently dispensing very small volumes (2 µL) onto the sitting-drop pedestal. To overcome this limitation, they increased the total drop volume to 4 µL, which improved consistency and reliability. Finally, the automated protocol was used to reproduce crystallization of a protein previously studied by the authors, demonstrating that the Opentrons-based workflow could successfully replicate known crystallization outcomes with reduced manual effort.
Figure 1: Crystallization results from OT-2–prepared Cryschem 24-well sitting-drop experiments.
Write a description about what you intend to do with automation tools for your final project. You may include example pseudocode, Python scripts, 3D printed holders, a plan for how to use Ginkgo Nebula, and more. You may reference this week’s recitation slide deck for lab automation details.
For my final project, I want to use lab automation tools to explore biological stress responses, with a focus on the hormone cortisol and its long-term effects on mental health. My motivation comes from the Iraqi context, where years of war, instability, environmental stress, and constant exposure to technology have contributed to high levels of anxiety, attention deficits, and stress-related disorders across the population.
According to the review article at https://pmc.ncbi.nlm.nih.gov/articles/PMC5619133/, chronic elevation of cortisol disrupts normal physiological balance and keeps the body in a prolonged fight-or-flight state. Long-term cortisol exposure affects the brain, particularly regions involved in attention, emotional regulation, and cognitive control, and is strongly associated with anxiety, impaired focus, and declining mental health. The article explains how sustained stress alters hypothalamic–pituitary–adrenal (HPA) axis regulation, leading to maladaptive stress responses rather than short-term protective ones.
In this project, I aim to simulate stress-related conditions in a controlled and automated way, rather than measuring stress directly in humans. Using automation tools such as the Opentrons liquid-handling robot, I would design workflows that represent different stress states (for example: baseline, moderate stress, chronic stress) through reproducible experimental conditions. Automation allows precise control of timing, volumes, and repetition, which is essential when modeling biological stress responses.
I would document the workflow using Python scripts or pseudocode, similar to what we learned in recitation, even if the protocol is not yet tested on the robot. Automation is critical here because stress biology depends on consistency and repetition, which manual handling cannot guarantee.
By replicating stress-associated conditions in vitro through automated workflows, this project aims to better understand how chronic stress environments such as those experienced by many Iraqi individuals may contribute to long-term cognitive and emotional effects. Understanding these mechanisms is an important step toward improved diagnosis, prevention, and treatment of stress-related disorders.
For now, this is my pseudocode, which will be developed further before the end of this course.
for condition in stress_conditions:
    dispense_reagents(condition)
    incubate_for_defined_time(condition)
    prepare_samples_for_analysis()
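To make the pseudocode above concrete, here is a plain-Python sketch of the same loop. The condition names, volumes, and incubation times are hypothetical placeholders, and the functions only log what they would do; a real run would issue these steps through the Opentrons Python API instead.

```python
# Hypothetical stress-condition workflow, logged rather than executed.
# All parameter values below are illustrative placeholders.
stress_conditions = {
    "baseline":        {"cortisol_uL": 0,  "incubate_min": 30},
    "moderate_stress": {"cortisol_uL": 10, "incubate_min": 60},
    "chronic_stress":  {"cortisol_uL": 25, "incubate_min": 240},
}

log = []

def dispense_reagents(name, params):
    log.append(f"dispense {params['cortisol_uL']} uL cortisol for {name}")

def incubate_for_defined_time(name, params):
    log.append(f"incubate {name} for {params['incubate_min']} min")

def prepare_samples_for_analysis():
    log.append("prepare all samples for analysis")

for name, params in stress_conditions.items():
    dispense_reagents(name, params)
    incubate_for_defined_time(name, params)
prepare_samples_for_analysis()

print("\n".join(log))
```

Keeping each condition as a parameter dictionary means the same loop can be reused as conditions are added or refined, which is the main reason automation helps with reproducibility here.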
Week 04 – Protein Design Part I
Part A: Conceptual Questions
For an average amino acid, the molecular weight is about 100 Daltons, which is equivalent to 100 g/mol. If I assume meat is about 20% protein, then 500 g of meat contains roughly 100 g of protein. The relationship is:
number of moles = mass (g) / molar mass (g/mol)
So, 100 g ÷ 100 g/mol ≈ 1 mole of amino acids. One mole corresponds to approximately 6×10²³ molecules. Therefore, consuming 500 g of meat corresponds to on the order of 10²³ amino acid molecules.
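The estimate above can be checked with a few lines of Python (the 20% protein fraction and 100 g/mol average residue mass are the same assumptions used in the text):

```python
# Back-of-the-envelope check of the amino-acid estimate above.
AVOGADRO = 6.022e23      # molecules per mole
meat_g = 500
protein_fraction = 0.20  # assume meat is ~20% protein
avg_aa_mass = 100        # g/mol per average amino acid residue

protein_g = meat_g * protein_fraction  # 100 g of protein
moles_aa = protein_g / avg_aa_mass     # ~1 mole of amino acids
molecules = moles_aa * AVOGADRO        # ~6e23 molecules

print(f"{moles_aa:.1f} mol ≈ {molecules:.1e} amino acid molecules")
```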
Proteins from food are first digested in the gastrointestinal tract into smaller peptides and eventually into individual amino acids. These amino acids are absorbed into the bloodstream and become part of the body’s amino acid pool. Human cells then reuse these amino acids to synthesize new proteins according to human gene expression and the cellular molecular machinery. Because each organism has its own genes and regulatory systems, the same amino acids can be assembled into completely different proteins. This is why eating fish or chicken does not make us become those organisms. The amino acids are simply recycled by our own cells to build human proteins.
According to the article from FEBS Press, the standard set of 20 amino acids was selected early in evolution because it provides a near-ideal mix of charge, size, and hydrophobicity that enables proteins to fold into soluble, stable, close-packed structures with functional binding pockets. This selection happened after the RNA World had cofactors that already performed most catalysis, so amino acids were chosen mainly to support folding and structural stability, not just chemistry. While other amino acids are chemically possible, these 20 were favored because they collectively cover the chemical properties needed for diverse protein structures.
(FEBS Press – Andrew J. Doig, 2016)
Yes, it is possible to design non-natural amino acids. All amino acids share the same backbone (amino + carboxyl), and the side chain (R group) is what varies. By changing or substituting the R group, many new amino acids can be created.
Non-natural amino acids are used in synthetic biology and drug design to explore chemistry beyond the 20 natural ones, but they are not part of the standard genetic code.
(Supported by general protein chemistry and synthetic biology literature)
The astrobiology article explains that amino acids could form abiotically through chemical reactions in the early Solar System and on early Earth. Processes like Strecker synthesis under prebiotic conditions, reactions in hydrothermal vents, and organic chemistry on planetesimals delivered by meteorites could produce amino acids before life existed. These amino acids do not require enzymes; they form through random but plausible chemical reactions in environments with water, heat, and simple carbon compounds.
In biology, proteins are made of L-amino acids, and these form right-handed α-helices because of the stereochemistry of the backbone. If the same sequence were built entirely from D-amino acids, the mirror image geometry would favor left-handed α-helices instead.
Yes. In addition to the common α-helix, proteins also contain other helical structures such as 3₁₀ helices and π-helices that are observed in real proteins. These alternative helices differ in hydrogen bonding patterns and geometry. More exotic or engineered helices could also be possible if non-natural amino acids or non-canonical backbones are used, expanding the structural possibilities.
Most protein helices are right-handed because proteins use L-amino acids almost exclusively in nature. The stereochemistry of L-amino acids puts constraints on backbone torsion angles that favor right-handed helices energetically and sterically. Left-handed helices are possible in principle but are unstable with L-amino acids due to steric clashes and unfavorable geometry.
β-sheets are secondary structures formed by extended polypeptide chains connected by backbone hydrogen bonds. The edges of β-sheets expose hydrogen-bond donors and acceptors. If these edges are not satisfied internally, they tend to pair with similar edges on other β-strands or sheets, leading to aggregation.
The main driving forces behind β-sheet aggregation are:
A. Backbone hydrogen bonding between β-strands in different molecules.
B. Hydrophobic interactions between nonpolar side chains, which reduce exposure to water.
C. Entropy gain from releasing ordered water as hydrophobic surfaces come together and as hydrogen bonds form.
This explains why β-sheet-rich structures such as amyloids and fibrils tend to form in aggregation-prone conditions.
Questions 10 and 11 are skipped.
Part B: Protein Analysis and Visualization
I chose the human glucocorticoid receptor because it is directly involved in sensing cortisol, which is one of the main stress hormones in the body. I was drawn to this protein because, in my own life and work in Iraq, I have seen how prolonged stress and anxiety can affect people deeply, especially students, through poor focus, memory problems, sleep disturbance, digestive complaints, and other health issues. This made me interested in a protein that helps the body detect and respond to stress at the molecular level. I also chose it because it connects neuroscience, hormones, and real-life health challenges in a very meaningful way.
The amino acid sequence of the human glucocorticoid receptor (NR3C1) was obtained from the UniProt database (P04150).
Figure 1: NR3C1 human glucocorticoid receptor amino acid sequence
The protein consists of 777 amino acids. Using the Google Colab amino acid counting notebook, the most frequent amino acid in the sequence is serine (S), which appears 85 times.
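The counting step can be reproduced with a few lines of Python. A short toy sequence stands in here for the full 777-residue P04150 entry, so this is a sketch of what the Colab notebook does rather than the notebook itself.

```python
# Minimal sketch of the amino-acid counting step from the Colab notebook.
from collections import Counter

sequence = "SSSAKLMSQS"  # toy stand-in; paste the full UniProt sequence here

counts = Counter(sequence)
aa, n = counts.most_common(1)[0]
print(f"Most frequent residue: {aa} ({n} occurrences)")
```

Running the same code on the full P04150 sequence is what gives serine (S) as the most frequent residue.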
A UniProt BLAST search for the glucocorticoid receptor returned 250 homologous protein sequences in UniProtKB. These homologs are found across several vertebrate species, including humans, chimpanzees, bonobos, and orangutans, indicating that the protein is evolutionarily conserved.
Figure 2: Protein sequence homologs.
The glucocorticoid receptor belongs to the nuclear receptor protein family, specifically the steroid hormone receptor family, which functions as ligand-activated transcription factors that regulate gene expression.
Figure 3: Protein family
The structure was solved in 2017 and determined using X-ray diffraction at a resolution of 2.12 Å, which indicates a high-quality crystal structure. The structure contains additional molecules besides the protein: triamcinolone acetonide, a glucocorticoid ligand bound in the receptor’s binding pocket, and a coregulator peptide fragment (SHP) that interacts with the receptor.
According to the SCOP structural classification database, the glucocorticoid receptor DNA-binding domain belongs to the glucocorticoid receptor-like DNA-binding domain superfamily. This superfamily adopts a GATA zinc-finger-like fold, which enables the receptor to bind specific DNA sequences and regulate gene transcription.
Figure 4: Protein Structure
The protein structure was visualized in PyMOL and on the RCSB website using different representations, including cartoon, ribbon, and ball-and-stick models, as shown in Figures 5–8. The secondary structure of the protein was examined by coloring the structure according to helices and sheets, which is presented in Figure 9. In addition, the distribution of hydrophobic and hydrophilic residues across the protein is illustrated in Figures 10, 11, and 12. Finally, the surface representation of the protein was generated to observe possible cavities or binding pockets, as shown in Figure 13.
Figure 5: Cartoon representation of the protein structure using RCSB
Figure 6: Cartoon representation of the protein structure using PyMOL
Figure 7: Ribbon representation of the protein structure using PyMOL
Figure 8: Ball-and-stick representation of the protein structure using PyMOL
The protein contains more α-helices than β-pleated sheets.
When the protein surface is colored by residue type, hydrophilic residues are primarily distributed on the outer surface of the protein where they can interact with water. Hydrophobic residues appear in patches and are more commonly buried within the protein interior or located in pockets, helping stabilize the protein structure and potentially forming ligand-binding regions.
Figure 9: Secondary structure of the protein, colored using PyMOL
Figure 10: Distribution of hydrophobic and hydrophilic residues across the protein
Figure 11: Distribution of hydrophobic residues across the protein
Figure 12: Distribution of hydrophilic residues across the protein
The surface representation reveals several cavities and grooves on the protein surface. One noticeable pocket appears near the center between the two domains, which could function as a ligand-binding site.
Figure 13: The surface representation of the protein
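For reference, the coloring and surface views in Figures 9–13 can be reproduced with a few standard PyMOL commands. The PDB ID below is a placeholder for the structure used in this part, and the residue list is one common choice of hydrophobic residues:

```
# Placeholder PDB ID -- replace with the structure analyzed above
fetch XXXX
show cartoon
color red, ss H                                  # helices
color yellow, ss S                               # sheets
color white, resn ALA+VAL+LEU+ILE+MET+PHE+TRP    # hydrophobic residues
show surface                                     # reveals cavities and pockets
```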
Part C: Using ML-Based Protein Design Tools
When looking at the mutation heatmap I noticed that position 58 shows a strong negative score for some substitutions. For example, mutating that residue to tryptophan (W) gives a score of about −2.62, which is one of the darkest values in the heatmap. This suggests that the original amino acid at that position is important for the protein structure and that introducing a bulky residue like tryptophan is not tolerated.
Figure 14: Mutation Scan HeatMap
In the latent space visualization, proteins that appear closer together in the t-SNE space share more similar sequence embeddings generated by the ESM model. The clustering pattern suggests that proteins with related sequence features are grouped together. The color scale corresponds to the value of the third t-SNE dimension and mainly helps visualize the structure of the embedding space.
Figure 15: Protein Folding by ESM
The structure predicted by ESMFold shows a similar overall fold compared to the experimentally determined PDB structure. Both structures contain multiple α-helices arranged in a compact configuration. However, the structures are not identical. The PDB structure represents a multimeric assembly with multiple chains, while the ESMFold model predicts a single-chain structure. Additionally, experimental structures often contain ligands or small molecules that are not predicted by ESMFold. Despite these differences, the general secondary structure arrangement appears consistent between the two models.
Figure 16: Protein Structure Predicted by ESMFold
I first introduced a single mutation by changing one amino acid in the sequence and then folded the modified sequence using ESMFold. The predicted structure remained very similar to the original structure, with the alpha-helical regions still preserved, as shown in Figure 17.
I then introduced larger mutations by replacing several residues with alanine (AAAAAA) and also tested additional mutations such as proline substitutions. Despite these modifications, the predicted structures remained largely similar to the original model, with only minor local differences, as shown in Figure 18. Overall, these results suggest that the protein fold is relatively resilient to moderate sequence mutations.
Figure 17: Making a mutation by changing one letter from P to A in the amino acid sequence.
Figure 18: Making a mutation by changing a large segment of the amino acid sequence.
The heatmap shows the probability of different amino acids at each position in the protein. The yellow regions indicate higher probabilities, meaning the model strongly prefers those residues at those positions. For example, at position 58 leucine (L) has a probability of about 0.91, and at position 36 proline (P) has a probability of about 0.92. In contrast, the green regions show lower probabilities, meaning those positions are more flexible and can tolerate different amino acids. This suggests that some residues are more important for maintaining the protein structure while others are less constrained.
Figure 19: The heatmap shows the probability of different amino acids at each position in the protein.
The sequence generated by ProteinMPNN was folded using ESMFold and compared to the original structure. The predicted structure shows a very similar overall fold, with helices appearing in similar positions. Although there are small differences in some regions, the general shape of the protein remains the same. This suggests that the designed sequence is compatible with the same backbone structure.
Figure 20: The Protein sequence generated by ProteinMPNN
Note that UniProt includes the starting M (the initiator methionine), so the alanine that changes in the A4V mutant is the 5th residue in the sequence.
1 M
2 A
3 T
4 K
5 A ← this is the one that changes
6 V
We therefore change MATKAV to MATKVV.
The new sequence: MATKVVCVLKGDGPVQGIINFEQKESNGPVKVWGSIKGLTEGLHGFHVHEFGDNTAGCTSAGPHFNPLSRKHGGPKDEERHVGDLGNVTADKDGVADVSIEDSVISLSGDHCIIGRTLVVHEKADDLGKGGNEESTKTGNAGSRLACGVIGIAQ
Figure 1: The original length of amino acids.
Figure 2: The length 12 amino acids conditioned on the mutant SOD1 sequence.
Figure 3: Generation of four peptides of length 12 amino acids conditioned on the mutant SOD1 sequence.
Perplexity reflects the model’s confidence in the generated peptide sequences. Lower perplexity values indicate that the sequence is more probable according to the language model. Among the generated candidates, WRPAVVVAHWE had the lowest perplexity (11.49), suggesting that it is the most confident peptide predicted by PepMLM.
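Perplexity is just the exponential of the mean negative log-likelihood of the tokens, which a short sketch makes concrete. The probabilities below are made-up illustrative values, not real PepMLM outputs:

```python
# Perplexity from per-token probabilities: exp of the mean negative
# log-likelihood. Lower values mean the model finds the sequence more probable.
import math

def perplexity(token_probs):
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

print(perplexity([0.5, 0.5, 0.5]))  # → 2.0 (coin-flip uncertainty per token)
```

This is why a peptide with higher per-residue probabilities under the model, such as WRPAVVVAHWE here, comes out with the lowest perplexity.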
Part 2: Evaluate Binders with AlphaFold3
To evaluate the predicted binders, the mutant SOD1 sequence containing the A4V mutation was submitted to the AlphaFold server together with each peptide as separate chains to model the protein–peptide complex. The predicted interaction confidence was assessed using the ipTM score. The peptides generated showed ipTM values between about 0.30 and 0.36. When visualizing the structures, the peptides appear to bind on the surface of the SOD1 protein, mainly around the β-barrel region, and some approach the N-terminal area where the A4V mutation is located. The interactions appear surface-bound rather than buried inside the protein. The known SOD1-binding peptide produced an ipTM of about 0.31, while one PepMLM-generated peptide showed a slightly higher value (~0.36), suggesting a potentially stronger predicted interaction.
Part 3: Evaluate Properties of Generated Peptides in the PeptiVerse
Annealing temperature depends mainly on primer Tm (melting temperature), GC content, and primer length. Tm is the temperature at which half of the DNA strands separate. Higher GC content increases Tm because GC bonds are stronger. If the temperature is too low, primers bind nonspecifically, and if it is too high, they fail to bind.
https://academic.oup.com/nar/article/18/21/6409/2388653?login=false
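A quick way to see the GC-content effect is the Wallace rule, Tm = 4(G+C) + 2(A+T), a rough approximation valid only for short primers; real primer design tools use nearest-neighbor thermodynamic models instead.

```python
# Wallace-rule Tm estimate: each G/C pair contributes 4 °C, each A/T pair 2 °C.
# A rough approximation for short primers, not a substitute for
# nearest-neighbor models used by real primer design software.
def wallace_tm(primer: str) -> int:
    gc = primer.count("G") + primer.count("C")
    at = primer.count("A") + primer.count("T")
    return 4 * gc + 2 * at

print(wallace_tm("ATGCGCATGCGC"))  # → 40
```

The annealing temperature is then typically set a few degrees below the estimated Tm, balancing the too-low (nonspecific binding) and too-high (no binding) failure modes described above.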
PCR and restriction digestion both generate linear DNA fragments, but they differ in approach. PCR amplifies a specific DNA region using primers and allows modification of sequences, such as adding overlaps or mutations. Restriction digestion uses enzymes to cut DNA at specific recognition sites, producing defined fragments without amplification.
PCR is more flexible and useful when designing constructs or preparing fragments for Gibson assembly. Restriction digestion is more straightforward but depends on the availability of suitable restriction sites in the DNA sequence.
To make sure the DNA fragments work for Gibson assembly, the main requirement is that they share overlapping sequences, usually around 20–40 base pairs. These overlaps are not random; they need to match exactly between adjacent fragments so they can base-pair during the reaction. In practice, these overlaps are usually added through primer design during PCR.
Another important point is using a high-fidelity polymerase, because even small mutations in the overlap region can prevent proper assembly. It’s also a good idea to check fragment size on a gel and confirm sequence accuracy before moving forward. If the overlaps are not correct or the DNA quality is poor, the assembly simply won’t work, since Gibson relies entirely on sequence homology rather than restriction sites.
https://www.nature.com/articles/nmeth.1318
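The exact-match requirement on overlaps can be checked programmatically before ordering primers. The function, fragment sequences, and 20-nt overlap length below are illustrative choices, not part of the Gibson protocol itself:

```python
# Sanity check for Gibson assembly fragments: the 3' end of each fragment
# must exactly match the 5' start of the next one, since Gibson relies
# entirely on sequence homology rather than restriction sites.
def overlaps_match(fragments, overlap=20):
    for a, b in zip(fragments, fragments[1:]):
        if a[-overlap:] != b[:overlap]:
            return False
    return True

frag1 = "AAATTTCCC" + "GATTACAGATTACAGATTAC"  # ends with the shared 20-nt overlap
frag2 = "GATTACAGATTACAGATTAC" + "GGGCCCTTT"  # starts with the same overlap
print(overlaps_match([frag1, frag2]))          # → True
```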
From what I understood in the protocol, the plasmid doesn’t enter the cells through any active mechanism. The cells are first made competent using calcium chloride, which helps reduce the repulsion between the negatively charged DNA and the membrane. Then, during heat shock, the sudden temperature change transiently destabilizes the membrane and creates temporary openings, so the DNA can slip inside.
So it’s really more of a physical process rather than something controlled by the cell. After that, the cells recover and start expressing the plasmid. This also explains why transformation efficiency depends a lot on how well the cells were prepared.
https://www.sciencedirect.com/science/article/abs/pii/S0022283683802848?via%3Dihub
Golden Gate Assembly is different from Gibson because it depends on restriction enzymes instead of sequence homology. It uses Type IIS enzymes like BsaI, which cut outside of their recognition site. This is useful because it lets you design specific overhangs, so different fragments can attach to each other in a defined order.
What I found interesting is that digestion and ligation happen in the same reaction, so the system basically cycles between cutting and joining until the correct construct forms. Also, the recognition sites are removed during the process, so the final DNA doesn’t contain extra unwanted sequences.
This makes it especially useful when assembling multiple fragments at once, since you can control how everything connects.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0003647
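The "cut outside the recognition site" idea can be sketched in a few lines. BsaI recognizes GGTCTC and cuts one nucleotide downstream on the top strand (GGTCTC N / NNNN), so the resulting 4-nt overhang is user-chosen sequence. The example fragment below is illustrative, and only top-strand sites are considered:

```python
# Sketch of how a Type IIS enzyme like BsaI yields programmable overhangs.
# BsaI cuts outside its GGTCTC recognition site, so the 4-nt overhang is
# designer-chosen sequence rather than part of the site itself.
def bsai_overhangs(seq: str):
    """Return the 4-nt 5' overhangs downstream of each top-strand GGTCTC site."""
    site = "GGTCTC"
    overhangs = []
    i = seq.find(site)
    while i != -1:
        overhangs.append(seq[i + 7 : i + 11])  # skip the 6-nt site + 1 spacer nt
        i = seq.find(site, i + 1)
    return overhangs

# illustrative fragment: recognition site, one spacer base, then the AATG overhang
print(bsai_overhangs("GGTCTCAAATGCCCGGG"))  # → ['AATG']
```

Because adjacent fragments are designed with complementary overhangs, this is what lets Golden Gate assemble many fragments in a defined order within a single cut-and-ligate reaction.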