Week 1 HW: Principles and Practices


Question 1

First, describe a biological engineering application or tool you want to develop and why. This could be inspired by an idea for your HTGAA class project, something you are already doing in your research, or something you are just curious about.

Answer 1

I would like to expand on my project, an Unreal Engine API for brain-on-a-chip platforms, which was presented at NeurIPS 2025 (https://openreview.net/forum?id=BroaBkQAGa). The project builds an API between living neurons interfaced with microelectrode arrays (MEAs) and virtual gaming environments, so that researchers and designers can visualize spiking behavior across MEA channels and use reinforcement learning algorithms within the game environment to train neuronal cultures as game agents.

I’m currently collaborating with Cortical Labs, connecting to the CL1 via UDP to design closed-loop, real-time visualization systems at the National Communication Museum in Melbourne. To start the loop, I send blob-tracking data to the CL1 to process; the resulting spikes are then streamed to Unreal Engine, where the neuronal activity is used to transform agent parameters. https://ncm.org.au/exhibitions/cortical-labs https://jennleung.xyz/corticallabs
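As a rough sketch of the receiving end of this loop: spike data arrives as UDP datagrams and is decoded before being handed to the game engine. The port number and the JSON packet layout here are assumptions for illustration only, not the actual CL1 wire format, which is defined by Cortical Labs' own API.

```python
import json
import socket

def parse_spike_frame(datagram: bytes) -> dict:
    """Decode one UDP datagram into a spike frame dict.

    Assumed (illustrative) layout: {"t": seconds, "spikes": {channel: count}}.
    """
    frame = json.loads(datagram)
    # Guard against malformed packets before they reach the game engine.
    assert "t" in frame and "spikes" in frame
    return frame

def listen(port=9000, n_frames=10):
    """Collect n_frames spike frames from the UDP stream."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    try:
        return [parse_spike_frame(sock.recvfrom(65536)[0]) for _ in range(n_frames)]
    finally:
        sock.close()
```

In Unreal Engine, each decoded frame would then drive agent parameters (e.g. mapping per-channel spike counts to movement or material properties).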

Question 2

Next, describe one or more governance/policy goals related to ensuring that this application or tool contributes to an “ethical” future, like ensuring non-malfeasance (preventing harm). Break big goals down into two or more specific sub-goals. Below is one example framework (developed in the context of synthetic genomics) you can choose to use or adapt, or you can develop your own. The example was developed to consider policy goals of ensuring safety and security alongside other goals, like promoting constructive uses, but you could propose other goals, for example those relating to equity or autonomy.

Answer 2

One of the main objectives of this project is to provide an open playground for benchmarking open-source and non-standardized brain-on-a-chip platforms. As these systems become more democratized and decentralized, many different configurations of physical/neural assemblies will emerge alongside advances in MEA designs, bioprinting technologies, and microfluidic platforms. It is therefore important to support:

  1. Benchmarking integrity and reproducibility. For example: how do we measure spiking activity across different systems? How do we make sure experiments are scientifically meaningful? How do we translate and deliver virtual environments to channels on different MEA geometries?
  2. Accessibility for independent researchers. For example, writing software environments not only for proprietary technologies such as Cortical Labs’ CL1 or FinalSpark’s Neuroplatform. Governance here means committing to abstraction layers that treat CL1 as one implementation among many.
  3. Responsible scalability across new substrates. New substrates include increasingly complex organoids or assembloids, which should go through rigorous bioethical review.
  4. Sustainability and longevity of the substrates. There should be rate limits so that cells are not overstimulated and at risk of rapid death.

Question 3

Next, describe at least three different potential governance “actions” by considering the four aspects below (Purpose, Design, Assumptions, Risks of Failure & “Success”). Try to outline a mix of actions (e.g. a new requirement/rule, incentive, or technical strategy) pursued by different “actors” (e.g. academic researchers, companies, federal regulators, law enforcement, etc). Draw upon your existing knowledge and a little additional digging, and feel free to use analogies to other domains (e.g. 3D printing, drones, financial systems, etc.).

For each governance action, address:

  • Purpose: What is done now and what changes are you proposing?
  • Design: What is needed to make it “work”? (including the actor(s) involved - who must opt-in, fund, approve, or implement, etc)
  • Assumptions: What could you have wrong (incorrect assumptions, uncertainties)?
  • Risks of Failure & “Success”: How might this fail, including any unintended consequences of the “success” of your proposed actions?

Answer 3

  1. Benchmarking metadata across different brain-on-a-chip platforms

Purpose: Currently there are multiple commercial/proprietary brain-on-a-chip platforms, such as Cortical Labs’ CL1 and FinalSpark’s Neuroplatform, but there is no standardized or comparative metadata for these systems. I am proposing to survey existing platforms and develop an open-access metadata standard that documents each system’s MEA geometry, channel count, and substrate.
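A minimal sketch of what one record in such a metadata standard could look like. The field names are my own proposal, and the numeric values below are placeholders for illustration, not published specifications of any platform.

```python
from dataclasses import dataclass, asdict

@dataclass
class PlatformRecord:
    """One entry in a hypothetical open registry of brain-on-a-chip platforms."""
    name: str                   # platform/product name
    vendor: str                 # manufacturer or lab
    channel_count: int          # number of MEA electrodes (placeholder values below)
    electrode_pitch_um: float   # electrode spacing in micrometres (placeholder)
    substrate: str              # e.g. "2D dissociated culture", "organoid"

records = [
    PlatformRecord("CL1", "Cortical Labs", 64, 42.0, "2D dissociated culture"),
    PlatformRecord("Neuroplatform", "FinalSpark", 32, 50.0, "organoid"),
]

# Records serialize trivially to JSON-compatible dicts for an open registry.
registry = [asdict(r) for r in records]
```

Keeping the schema as plain dataclasses/JSON would make it easy for both vendors and community labs to contribute entries without adopting any particular toolchain.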

Design: Map out a group of academic researchers working on organoid intelligence / synthetic bioengineered intelligence standardization, together with manufacturers such as MaxWell Biosystems and Cortical Labs, and join community labs or open-source groups doing open-source research. In terms of implementation, I would need to consult all these groups to create a UE plugin that responds to their needs. It would also be worth applying for AHRC/UKRI grants.

Assumptions: This action assumes that all parties are willing to share their manuals or manufacturing details; however, some of this data may be protected under NDA.

Risks of Failure and Success: If open-source brain-on-a-chip projects grow rapidly, the metadata standard may become impossible to maintain at scale.

  2. Developing stimulation protocols at the API layer

Purpose: Since there are many different types of brain-on-a-chip platforms, each company or lab has its own protocols for stimulating and recording these systems. I propose a stimulation protocol that is initiated by the API/game environment.

Design: Study the stimulation protocols across different systems and apply appropriate time scales, rate limits, response rates, and stimulation/ discretization patterns so we can formalize communication with living neurons.
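One concrete piece of such a protocol, tied to sub-goal 4 above, is a rate limit enforced at the API layer so the game environment cannot overstimulate a culture. This is a minimal sketch; the actual limit values would come from the protocol study described here, and the numbers below are placeholders.

```python
import time

class StimulationGate:
    """Reject stimulation commands that exceed a per-culture rate budget."""

    def __init__(self, max_pulses_per_second=10.0):
        self.min_interval = 1.0 / max_pulses_per_second
        self.last_pulse = float("-inf")

    def allow(self, now=None):
        """Return True if a pulse may be delivered now, else drop it."""
        now = time.monotonic() if now is None else now
        if now - self.last_pulse >= self.min_interval:
            self.last_pulse = now
            return True
        return False  # drop the pulse instead of over-stimulating the culture
```

Every stimulation request from the game would pass through `gate.allow()` before reaching the MEA driver, regardless of which vendor backend is connected.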

Assumptions: The biggest assumption is that standardization may not be applicable or scientifically meaningful across different biological systems because of biological variability: cultures vary by batch, by physical assembly, and by MEA type.

Risks of Failure and Success: Over-standardization might lead to less meaningful scientific experiments. Fixed rate limits and standards might fail to account for neuronal plasticity and implicitly assume the technology will not evolve. Constant review and negotiation are needed to make this option work.

  3. Developing a wide range of benchmarking gaming environments/templates

Purpose: Cortical Labs has compared living neurons against RL algorithms in Pong. I would like to expand on this to develop something adjacent to OpenAI Gym, so that we can create environments for synthetic bioengineered intelligence.

Design: These might include standardized task environments that allow researchers to compare RL agent performance on identical tasks, or have multiplayer/ team battles between two systems for performance evaluations. Standardized environments ensure that experimental results are reproducible and comparable across institutions.
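The standardized task environments above could follow the familiar Gym-style reset()/step() contract, so that biological and in-silico agents are evaluated through the same interface. The toy paddle-tracking task below is illustrative only and is not an actual Cortical Labs environment.

```python
import random

class BioPongEnv:
    """Toy 1D paddle-tracking task with a Gym-like API."""

    def reset(self, seed=0):
        self.rng = random.Random(seed)
        self.ball, self.paddle = self.rng.random(), 0.5
        return (self.ball, self.paddle)  # observation

    def step(self, action):
        # action: -1 (down), 0 (stay), +1 (up), e.g. decoded from spike rates
        self.paddle = min(1.0, max(0.0, self.paddle + 0.1 * action))
        self.ball = self.rng.random()
        reward = 1.0 if abs(self.ball - self.paddle) < 0.2 else -1.0
        done = False  # a real benchmark would define episode termination
        return (self.ball, self.paddle), reward, done, {}
```

Because the interface is agent-agnostic, the same environment could score an RL baseline, a neuronal culture, or a human player, which is exactly what makes cross-institution comparison possible.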

Assumptions: The templates assume that biological variability can be characterized statistically across many runs; if variability is too high, the benchmarks may not be informative.

Risks of Failure and Success: Templates might restrict certain experimental designs, so it is important to balance standardization and benchmarking against openness.

Question 4

Next, score (from 1-3, with 1 as the best, or n/a) each of your governance actions against your rubric of policy goals. The following is one framework but feel free to make your own:

Answer 4


| Does the option: | Option 1 | Option 2 | Option 3 |
| --- | --- | --- | --- |
| Enhance Biosecurity | | | |
| • By preventing incidents | 2 | 1 | 3 |
| • By helping respond | 1 | 3 | 2 |
| Foster Lab Safety | | | |
| • By preventing incidents | 3 | 1 | 2 |
| • By helping respond | 1 | 2 | 3 |
| Protect the environment | | | |
| • By preventing incidents | 2 | 1 | 3 |
| • By helping respond | 2 | 1 | 3 |
| Other considerations | | | |
| • Minimizing costs and burdens to stakeholders | 2 | 3 | 1 |
| • Feasibility | 2 | 1 | 3 |
| • Not impede research | 1 | 2 | 3 |
| • Promote constructive applications | 3 | 1 | 2 |

Question 5

Last, drawing upon this scoring, describe which governance option, or combination of options, you would prioritize, and why. Outline any trade-offs you considered as well as assumptions and uncertainties. For this, you can choose one or more relevant audiences for your recommendation, which could range from the very local (e.g. to MIT leadership or Cambridge Mayoral Office) to the national (e.g. to President Biden or the head of a Federal Agency) to the international (e.g. to the United Nations Office of the Secretary-General, or the leadership of a multinational firm or industry consortia). These could also be one of the “actor” groups in your matrix.

Answer 5

Option 2 seems the strongest choice because it builds on knowledge of other research institutions’ practices and existing start-up solutions. It is also the governance action that most directly addresses the biological welfare and safety concerns unique to this field. Since we can’t retroactively un-damage a neuronal culture, embedding safety protocols at the API layer is the most impactful intervention point.

Question 6

Reflecting on what you learned and did in class this week, outline any ethical concerns that arose, especially any that were new to you. Then propose any governance actions you think might be appropriate to address those issues. This should be included on your class page for this week.

Answer 6

I am interested in the concept of the pharmakon: the idea that truly successful research also comes at the cost of creating new problems, such as bioweapons or the unregulated synthesis of controlled substances (biosecurity). The governance actions I am interested in sit on the cloud/API side of things, around how we might apply trust-based connectivity models from software design to bio-design. For example, cloud infrastructure already uses trust models, and I think we could learn from internet architecture when regulating or modeling remote access to living biological systems.

Homework Questions from Professor Jacobson (Lecture 2 slides)

Question 7

Nature’s machinery for copying DNA is called polymerase. What is the error rate of polymerase? How does this compare to the length of the human genome? How does biology deal with that discrepancy?

Answer 7

Error rate: roughly 1 in 10^6 per base (improving to on the order of 1 in 10^8 with proofreading). The human genome is 3.2 billion letters long, so at 1 in 10^6 a single copy would accumulate roughly 3,200 mistakes. Biology reduces this discrepancy through proofreading and mismatch repair: the polymerase excises the mismatched pair and tries again with the correct nucleotide.
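The arithmetic above can be checked directly:

```python
# Expected number of mistakes per genome copy at the raw polymerase error rate.
error_rate = 1e-6        # ~1 mistake per 10^6 bases
genome_length = 3.2e9    # human genome, base pairs
expected_errors = genome_length * error_rate  # ~3,200 mistakes per copy
```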

Question 8

How many different ways are there to code (DNA nucleotide code) for an average human protein? In practice what are some of the reasons that all of these different codes don’t work to code for the protein of interest?

Answer 8

There is an astronomical number of ways to code for an average human protein. Each amino acid is encoded by about three synonymous codons on average (ranging from one to six), and an average human protein is more than 300 amino acids long, giving on the order of 3^300 possible DNA sequences. In practice, not all of these codes work equally well: some codons have few matching tRNAs, so rare codons can cause ribosomes to stall, fall off, or misread, which leads to less protein being produced.
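An order-of-magnitude check of the argument above. The 300-residue length and the uniform "~3 codons per amino acid" figure are simplifying assumptions.

```python
# How many DNA sequences could encode a 300-residue protein, assuming
# an average of 3 synonymous codons per position?
codons_per_aa = 3
protein_length = 300
n_codings = codons_per_aa ** protein_length
digits = len(str(n_codings))  # 3**300 has 144 decimal digits
```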

Homework Questions from Dr. LeProust (Lecture 2 slides)

Question 9

What’s the most commonly used method for oligo synthesis currently?

Answer 9

The phosphoramidite method, developed by Marvin Caruthers.

Question 10

Why is it difficult to make oligos longer than 200nt via direct synthesis?

Answer 10

Each coupling step is slightly less than 100% efficient, so chemical damage and truncations accumulate with every added base; the yield of correct full-length product collapses around 200 nucleotides.
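The "wall" can be illustrated with simple arithmetic. The 99.5% per-step coupling efficiency used here is a typical textbook figure, not a vendor specification.

```python
# Fraction of strands that are full-length after n coupling steps,
# assuming each step succeeds with the same probability.
coupling_efficiency = 0.995
full_length_yield_200 = coupling_efficiency ** 200    # ~0.37 at 200 nt
full_length_yield_1000 = coupling_efficiency ** 1000  # <1% at 1000 nt
```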

Question 11

Why can’t you make a 2000bp gene via direct oligo synthesis?

Answer 11

Direct synthesis has an error rate of roughly 1 in 3,000 bp, so across 2,000 bp nearly every molecule carries at least one error, and because the errors are randomly distributed, the correct full-length product cannot be purified out. Building a gene therefore requires assembling shorter oligos, with good sequencing and fragment analysis as well as uniform representation across all oligos.

Homework Question from George Church (Lecture 2 slides)

Choose ONE of the following three questions to answer; and please cite AI prompts or paper citations used, if any.

Option A – Question 12

[Using Google & Prof. Church’s slide #4] What are the 10 essential amino acids in all animals and how does this affect your view of the “Lysine Contingency”?

Answer 12 (if you choose Option A)

Arginine, Histidine, Isoleucine, Leucine, Lysine, Methionine, Phenylalanine, Threonine, Tryptophan, Valine.

The lysine contingency from Jurassic Park is ineffective as a containment strategy: all animals already cannot synthesize lysine and must obtain it from food, so the engineered dinosaurs would simply need a lysine-containing diet like any other animal.

Option B – Question 13

[Given slides #2 & 4 (AA:NA and NA:NA codes)] What code would you suggest for AA:AA interactions?

Answer 13 (if you choose Option B)

(Write your answer here.)

Option C – Question 14 (Advanced students)

[(Advanced students)] Given the one paragraph abstracts for these real 2026 grant programs sketch a response to one of them or devise one of your own: https://arpa-h.gov/explore-funding/programs/boss https://www.darpa.mil/research/programs/smart-rbc https://www.darpa.mil/research/programs/go

Answer 14 (if you choose Option C)

(Write your answer here.)