Week 2 HW: Pre-lecture

  1. Nature’s machinery for copying DNA is called polymerase. What is the error rate of polymerase? How does this compare to the length of the human genome? How does biology deal with that discrepancy?

response:

The raw error rate of DNA polymerase is roughly one error per million bases incorporated. Compared to the human genome, which totals about 3.2 Gbp, that works out to roughly 3,200 errors per genome copy. Biology deals with this discrepancy in a few ways: 1) Polymerase proofreading: a 3’-5’ exonuclease activity excises the mismatched nucleotide from the 3’ end of the growing strand. This happens while DNA replication is still active. 2) Mismatch repair: after replication is finished, proteins such as MutS scan the new strand for incorrect pairings that escaped the earlier proofreading, cut out the incorrect nucleotide, and resynthesize that section. Together these mechanisms bring the net error rate down to roughly one error per billion bases or better, leaving only a handful of new mutations per genome copy.
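A quick back-of-the-envelope check of the numbers above (a sketch, assuming the ~1-in-10^6 raw polymerase error rate and 3.2 Gbp genome size cited here):

```python
# Rough error count per genome replication, using the figures above.
error_rate = 1e-6        # assumed raw polymerase errors per base (pre-proofreading)
genome_size = 3.2e9      # human genome length in base pairs
errors_per_copy = error_rate * genome_size
print(f"~{errors_per_copy:.0f} errors per genome copy")  # ~3200
```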
  2. How many different ways are there to code (DNA nucleotide code) for an average human protein? In practice what are some of the reasons that all of these different codes don’t work to code for the protein of interest?

    The average human protein-coding gene is 1,036 bp according to Professor Jacobson, and four nucleotides make up the genome (A, T, C, G). So, to count all the possible ways to code for a single gene of that length, we raise 4 to the power of 1036, which gives roughly 10^623 possibilities. There are multiple reasons why not all of these codes work equally well for a given protein. Codon bias, for example: rare codons are translated more slowly because their matching tRNAs are less abundant. The tRNA is still present, but a rare codon takes longer to decode, which stalls translation. Strong secondary structures in the mRNA can also make it slower for the ribosome to recognize the start site.
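The combinatorics above can be sketched in a couple of lines (assuming the 1,036 bp average gene length from the lecture; the number itself is far too large to print, so we work with its base-10 logarithm):

```python
import math

gene_length = 1036  # average protein-coding gene length (bp), per lecture
# log10(4^1036) = 1036 * log10(4); take the floor to get the order of magnitude
exponent = math.floor(gene_length * math.log10(4))
print(f"4^{gene_length} is roughly 10^{exponent}")  # roughly 10^623
```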

  3. What’s the most commonly used method for oligo synthesis currently?

    As described in Professor Leproust’s slides, the most common method is solid-phase phosphoramidite oligonucleotide synthesis.

  4. Why is it difficult to make oligos longer than 200 nt via direct synthesis?

    Because each coupling cycle runs at less than 100% efficiency, errors and truncated products accumulate, so the yield of full-length product drops exponentially as the sequence gets longer. Why can’t you make a 2,000 bp gene via direct oligo synthesis? For the same reason: at the current stage of oligo synthesis technology, the practical limit is around 700 nucleotides.
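The exponential yield loss can be illustrated with a rough sketch. The 99.5% stepwise coupling efficiency used here is an assumed, typical figure, not one taken from the slides:

```python
coupling = 0.995  # assumed per-cycle coupling efficiency (hypothetical figure)
for length in (200, 700, 2000):
    # Fraction of strands that are full length after `length` coupling cycles
    full_length = coupling ** length
    print(f"{length:>4} nt: {full_length:.2%} full-length")
```

Even at this optimistic efficiency, only about a third of strands reach 200 nt intact, and essentially none reach 2,000 nt, which is why long genes are assembled from shorter oligos instead.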

  5. [Using Google & Prof. Church’s slide #4] What are the 10 essential amino acids in all animals and how does this affect your view of the “Lysine Contingency”?

    Lysine, leucine, isoleucine, valine, threonine, methionine, phenylalanine, tryptophan, histidine, and arginine. The lysine contingency leverages animals’ reliance on dietary lysine, and the premise is plausible in that animals do need it to survive. But it is highly unlikely that the dinosaurs (animals) wouldn’t find lysine in their environment, given how readily available it is in the ecosystem. My biggest issue with the idea is that even a genuine lysine deficiency (again, far-fetched) would take weeks to kill the animal.