Table of Contents

1) Biological engineering application / tool
2) Governance / policy goals
3) Governance actions
4) Risk analysis
5) Scoring governance actions
6) Prioritization recommendation
7) Reflection
8) Project + governance overview
Homework Questions from Professor Jacobson
Homework Questions from Dr. LeProust
Homework Question from George Church
Table of Contents

0. Basics of Gel Electrophoresis
1. Benchling and In-silico Gel Art
2. Gel Art Restriction Digests and Electrophoresis
3.1 Choose Your Protein
    - Protein sequence (from Supplementary Data 1)
    - Structural micro-analysis: copper coordination and catalysis
    - Why this protein matters
3.2 Reverse translate
3.3 Codon optimization
3.4 You have a sequence, now what?
Assignment: Python Script for Opentrons Artwork https://opentrons-art.rcdonovan.com/?id=sux110hip535fnx
Statement of Intent — Why Reaction–Diffusion I have been playing with reaction–diffusion algorithms for a long time. I keep returning to them out of curiosity — both for their history (their direct link to Alan Turing’s work on emergent patterns) and for what they suggest in terms of morphogenesis.
1) Biological engineering application / tool I want to develop (and why)
Application / tool: Growing interactive surfaces from bacterial cellulose
I already know how to grow 3D artifacts in bacterial cellulose (BC). In this project, I want to develop a biological functionalization workflow that turns grown BC artifacts into interactive surfaces, measured through impedance-based sensing (e.g., tactile volumetric response, sensitivity to pressure), with minimal embedded electronics.
The goal is not to integrate electronic components into the object, but to make the material itself behave as an interface. Electrodes are used only for readout and characterization, not as active elements.
Concretely, this toolchain would include:
Biological functionalization workflow Modulating the extracellular matrix during growth (for example through protein secretion into the BC network) to tune impedance-based electrical behavior and enable material-level interaction.
Measurement interface (readout only) A simple electrode setup and impedance readout used to validate, compare, and characterize material responses across different growth and functionalization conditions.
Optional integration (if time allows) Integrating functionalization into my existing 3D growth workflow (molds, scaffolds, and growth conditions), so objects can be grown already functionalized rather than treated after fabrication.
Why this matters
Most interactive devices are built by assembling electronics into synthetic materials. This project proposes a different paradigm:
Instead of fabricating objects and then adding interactivity, objects are cultivated so that interactivity emerges from biological growth and organization.
This matters for several reasons:
Sustainable interfaces: biodegradable material base, low-energy fabrication, and the possibility of metabolic degradation or recycling.
New interface paradigms: artifacts become processual and time-based, where growth history and material state shape interaction.
Research tool: impedance acts as a lens to study how morphology, hydration, and growth dynamics become measurable signals.
2) Governance / policy goals for an ethical future
Inspired by the iGEM Safety framework, the ethical challenges of this project are framed around biosafety, environmental responsibility, and responsible use of engineered living materials. Because this work lowers the barrier to growing functional living artifacts, governance should guide not only what is built, but how materials are grown, handled, and disposed of.
Goal A — Ensure biosafety and non-malfeasance during cultivation and use
Although bacterial cellulose–producing organisms are generally considered low-risk, improper handling or uncontrolled experimentation could lead to contamination or unintended exposure.
Sub-goals
A1. Safe handling and containment Ensure cultivation, functionalization, and testing are conducted using appropriate hygiene, containment, and separation practices, especially when combining multiple biological agents.
A2. Transparency about biological modification Maintain a clear distinction between engineered biological components and non-biological elements (e.g., electrodes used only for readout).
Goal B — Prevent environmental harm and unintended release
A key motivation of this work is sustainability: growing interactive surfaces from living matter that can eventually be degraded or recycled. This benefit only holds if environmental risks are managed.
Sub-goals
B1. Controlled disposal and deactivation
Ensure BC artifacts, growth media, and byproducts are deactivated or degraded before disposal, preventing accidental release into wastewater or compost systems.
B2. Avoid persistent or toxic additives
Favor biological or biodegradable functionalization methods over persistent chemicals or heavy metals to preserve viable end-of-life pathways.
Goal C — Promote responsible development of living interactive interfaces
Because this project treats living materials as sensing and computational substrates, governance should guide how such systems are framed and used, particularly outside the lab.
Sub-goals
C1. Prevent misleading claims Avoid presenting biologically functionalized materials as autonomous, intelligent, or fully controllable systems.
C2. Limit sensing to material interaction Encourage applications focused on material state (touch, pressure, hydration).
3) Governance actions (comparative overview)
The table below summarizes three governance actions addressing biosafety, environmental impact, and responsible use of grown bacterial cellulose artifacts. Each action is evaluated across its purpose, design, assumptions, and potential risks.
| Governance Action | Purpose (What changes?) | Design (How it works / Who acts?) | Key Assumptions | Risks of Failure & “Success” |
|---|---|---|---|---|
| **Option 1 — Biological process & provenance documentation** | Current documentation emphasizes performance over biological process. This action requires explicit documentation of organism source, biological modification, and separation between biological and technical components. | Actors: instructors, research labs, journals, funding agencies. Lightweight 1–2 page template integrated into coursework or reports. Descriptive, not approval-based. | Transparency reduces misuse, confusion, and overclaiming, and improves accountability if issues arise. | Failure: becomes a checkbox exercise with little reflection. “Success” risk: excessive formalization discourages exploratory research. Mitigation: keep requirements descriptive and allow uncertainty. |
| **Option 2 — Mandatory deactivation & disposal protocols** | Living materials are often treated as benign, leading to inconsistent disposal. This action establishes mandatory deactivation and disposal procedures for living artifacts and growth media. |  | Improper disposal is the most likely real-world risk, especially as production scales or decentralizes. | Failure: users bypass protocols due to inconvenience or lack of enforcement. “Success” risk: overly strict rules push work into informal or unregulated spaces. Mitigation: make compliance easy and well-supported. |
| **Option 3 — Readout-limited interface design norms** | Biological sensing systems can drift toward surveillance or control narratives. This action promotes norms framing living materials as readout-based material interfaces, not monitoring systems. | Actors: researchers, designers, companies. Explicit framing in documentation and demos. Focus on local, material-state signals. | Early design norms can influence long-term application trajectories and public expectations. | Failure: norms ignored in commercial or extractive contexts. “Success” risk: may limit some beneficial applications. Mitigation: scope norms to public, non-clinical, and creative deployments. |
4) Potential risks, use, misuse, and governance considerations
Option 2 addresses the most immediate and likely harm: environmental release and waste.
Option 1 supports accountability and response without limiting research.
Option 3 addresses longer-term risks if the technology scales.
7) Reflection
This exercise highlighted that the main risks of growing interactive living artifacts arise from success and scale, not malicious intent. Even biodegradable materials can become harmful when produced or discarded in large quantities. Another key insight is how easily biological sensing can be reframed beyond material interaction.
Governance should therefore focus on practice, documentation, and lifecycle management, rather than strict prohibition.
At this stage, the cultivation of interactive BC artifacts should remain confined to controlled research and educational contexts, and not be deployed in public or commercial environments without additional review.
Strategies to ensure an ethical biological future (project-integrated)
The table below summarizes the concrete strategies embedded in my final project to ensure it contributes to an ethical biological future. Rather than external rules, these strategies are implemented through design choices, constraints, and documentation practices.
| Strategy | Ethical Principle | What Risk It Addresses | How It Is Implemented in the Project | Why This Matters |
|---|---|---|---|---|
| Readout-first interface design | Observation over control | Instrumentalization of living systems; drift toward surveillance or behavioral monitoring | Electronics are used only for impedance readout; no actuation or feedback loops acting on the living material; signals framed as material-state indicators | Limits coercive or extractive uses of living matter and preserves biological agency |
| Scale and context limitation | Responsibility through constraint | Environmental harm and biosafety risks arising from success and scale | Project framed as a research/educational tool; growth and testing limited to controlled lab or studio environments; public or commercial deployment explicitly excluded without further review | Prevents premature deployment and unmanaged scaling of living interfaces |
| Lifecycle-aware design | End-of-life accountability | Ecological disruption from disposal or accumulation of living materials | Preference for biologically degradable functionalization; explicit deactivation and disposal protocols included in the workflow | Sustainability is addressed across the full material lifecycle, not only fabrication |
| Transparency about biological modification | Epistemic responsibility | Overclaiming, black-box narratives, and misleading representations of “living intelligence” | Clear distinction between engineered biological processes and technical readout systems; documentation of uncertainty and variability | Builds trust and prevents misuse driven by misunderstanding or hype |
| Reflexive governance practice | Ethics as an evolving process | Static rules becoming obsolete as techniques evolve | Periodic reassessment of risks; use of lightweight governance tools (documentation, norms, constraints) rather than fixed prohibitions | Allows ethical considerations to evolve alongside the project and its capabilities |
Homework Questions from Professor Jacobson:

Question 1 — DNA polymerase error rates and genome replication fidelity

DNA polymerase, the molecular machinery responsible for copying DNA, has a non-zero error rate. As presented in the lecture slides, the error rate of an error-correcting DNA polymerase is approximately 1 error per 10⁶ base incorporations. While this error rate is very low at the scale of individual nucleotides, it becomes significant when compared to the size of the human genome, which is on the order of 3.2 × 10⁹ base pairs. If replication relied solely on polymerase accuracy, this discrepancy would imply the accumulation of thousands of errors during each complete genome replication, which would be incompatible with stable inheritance and organismal viability.
Biology resolves this apparent mismatch through multiple layers of error correction rather than relying on perfect synthesis. First, many DNA polymerases include proofreading activity (3′→5′ exonuclease) that detects and removes incorrectly incorporated bases during replication. Second, additional post-replication repair systems, such as mismatch repair pathways (e.g., the MutS system shown in the slides), identify and correct errors that escape polymerase proofreading. Together, these layered mechanisms dramatically reduce the effective error rate of DNA replication, allowing large genomes like the human genome to be copied reliably despite the intrinsic imperfection of polymerase activity.
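The arithmetic behind this mismatch can be made concrete with a quick sketch. The polymerase error rate is the figure from the slides; the 100-fold mismatch-repair improvement is an assumed round number for illustration, not a measured value:

```python
# Back-of-the-envelope check of replication fidelity.
# Error rates are illustrative, not measured values.

GENOME_SIZE = 3.2e9  # human genome size in base pairs (from the text)

def errors_per_replication(error_rate: float, genome_size: float = GENOME_SIZE) -> float:
    """Expected number of misincorporations for one full genome copy."""
    return error_rate * genome_size

# Proofreading polymerase alone: ~1 error per 10^6 incorporations
print(errors_per_replication(1e-6))        # ≈ 3200 errors per replication

# With an assumed extra ~100-fold correction by mismatch repair (hypothetical factor)
print(errors_per_replication(1e-6 / 100))  # ≈ 32 errors per replication
```

The first number is the "thousands of errors" mentioned above; each additional correction layer divides it down toward a tolerable load.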
Question 2 — Coding redundancy and why most theoretical DNA codes do not work in practice
Due to the redundancy of the genetic code, an average human protein can, in theory, be encoded by an extremely large number of different DNA sequences. As shown in the slides, the coding sequence of an average human protein is approximately 1,036 base pairs long, corresponding to roughly 345 amino acids. Because many amino acids can be encoded by multiple synonymous codons, the number of possible nucleotide sequences that map to the same amino acid sequence grows combinatorially. In principle, this means there are astronomically many distinct DNA sequences that could encode the same protein sequence.
In practice, however, most of these theoretically valid codes do not function effectively in living systems. The lecture slides highlight several biological constraints that limit which sequences work. Certain DNA or RNA sequences form secondary structures with unfavorable free energy that interfere with transcription or translation. GC content bias can destabilize sequences or alter expression efficiency. Additionally, synthesis and assembly processes have their own error profiles, making some sequences more fragile than others. Finally, translation depends on interactions with cellular machinery such as ribosomes and tRNA pools, meaning that codon usage and sequence context matter beyond simple amino acid encoding. As a result, although the genetic code is redundant in theory, only a narrow subset of possible DNA sequences reliably produce functional proteins in practice.
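The combinatorial growth can be computed directly: multiply the number of synonymous codons per residue. This is a minimal sketch using the standard genetic code's degeneracy counts (the example peptide is arbitrary):

```python
# Count how many distinct DNA sequences encode the same peptide by
# multiplying per-residue codon degeneracies (standard genetic code).

from math import prod

# Number of synonymous codons per amino acid (one-letter symbols)
DEGENERACY = {
    "A": 4, "R": 6, "N": 2, "D": 2, "C": 2, "Q": 2, "E": 2, "G": 4,
    "H": 2, "I": 3, "L": 6, "K": 2, "M": 1, "F": 2, "P": 4, "S": 6,
    "T": 4, "W": 1, "Y": 2, "V": 4,
}

def synonymous_count(peptide: str) -> int:
    """Number of distinct coding sequences for a peptide (ignoring the stop codon)."""
    return prod(DEGENERACY[aa] for aa in peptide)

print(synonymous_count("MAGIC"))  # 1 * 4 * 4 * 3 * 2 = 96
```

Even a 5-residue toy peptide already has 96 encodings; at ~345 residues with an average degeneracy near 3, the count is on the order of 3^345, which is why "astronomically many" is no exaggeration.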
Homework Questions from Dr. LeProust:
1. What’s the most commonly used method for oligo synthesis currently?
The most commonly used method for oligonucleotide synthesis today is solid-phase phosphoramidite chemical synthesis, originally developed by Caruthers in the early 1980s and still the industry standard. In this approach, DNA is synthesized base by base on a solid support (typically controlled pore glass or functionalized silica). Each synthesis cycle consists of four repeated chemical steps: coupling of a phosphoramidite nucleotide, capping of unreacted chains, oxidation to stabilize the backbone, and deprotection (deblocking) to expose the next reactive site. This cycle is repeated sequentially to build the desired oligo.
This method dominates because it is highly automatable, scalable, and compatible with parallelization, especially in modern platforms such as silicon-based microarrays (e.g., Twist Bioscience). However, it is fundamentally a stepwise chemical process, meaning that each added nucleotide has a non-zero failure rate. Even with very high per-step efficiencies (>99.5%), errors accumulate with length, which directly constrains how long oligos can be synthesized reliably. This intrinsic accumulation of errors explains many of the downstream limitations discussed in the following questions.
2. Why is it difficult to make oligos longer than ~200 nt via direct synthesis?
The primary difficulty in synthesizing oligos longer than ~200 nucleotides arises from the cumulative error rate of stepwise chemical synthesis. Each coupling step has a small probability of failure (incomplete coupling, side reactions, or truncation). While a single error is unlikely at short lengths, the probability of obtaining a full-length, error-free molecule drops exponentially as the number of synthesis cycles increases. Beyond ~200 nt, the fraction of correct full-length molecules becomes very low, even if average synthesis efficiency remains high.
In addition to error accumulation, sequence-dependent effects further complicate long oligo synthesis. Regions with high or low GC content, homopolymers, inverted repeats, or strong secondary structures (e.g., hairpins) reduce coupling efficiency and increase truncation or deletion events. Purification also becomes more challenging: separating full-length products from truncated sequences is increasingly inefficient as length grows. As a result, while advances in chemistry have pushed this limit somewhat (e.g., validated synthesis of ~300–500 nt in controlled contexts), ~200 nt remains the practical and economical upper bound for routine, high-fidelity direct synthesis.
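The exponential drop-off described above is easy to see numerically. Assuming independent coupling steps with a fixed per-step efficiency (the 99.5% figure is the illustrative value from the answer, not a vendor specification):

```python
# Fraction of full-length, correct product after n coupling cycles,
# assuming independent steps with per-step efficiency p (illustrative).

def full_length_yield(p: float, n: int) -> float:
    """Probability that all n couplings succeed."""
    return p ** n

for n in (100, 200, 500, 2000):
    print(f"{n:>5} nt: {full_length_yield(0.995, n):.6f}")
# At 200 nt the full-length fraction is already ≈ 0.37,
# and at 2000 nt it is effectively zero.
```

This single formula explains both the ~200 nt practical limit and, in the next question, why a 2000 bp gene cannot be made in one chemical pass.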
3. Why can’t you make a 2000 bp gene via direct oligo synthesis?
A 2000 bp gene cannot be synthesized directly because chemical oligo synthesis does not scale linearly with length. At that size, the compounded error rate would make the probability of producing even a single correct full-length molecule essentially negligible. Even if synthesis chemistry allowed chain extension to 2000 nt, the resulting product pool would be dominated by truncated, mutated, or rearranged sequences, rendering it unusable without extensive correction.
Instead, long genes are produced through a hierarchical assembly strategy: shorter oligos (typically 40–200 nt) are first synthesized, then enzymatically assembled using methods such as PCR-based gene assembly or ligation. These assembled fragments are subsequently cloned and sequence-verified, allowing error correction through selection rather than chemistry. This separation of concerns—chemical synthesis for short, precise building blocks, and biological processes for long-range assembly and error correction—is fundamental. It reflects a broader principle emphasized in the course: chemistry writes locally, biology validates globally. Direct chemical synthesis alone cannot replace this division of labor for long DNA constructs.
Homework Question from George Church:
Given the examples in slides #2 and #4, where biological systems use structured “codes” to mediate interactions between polymers (NA:NA via base pairing and AA:NA via translation and binding domains), an AA:AA interaction code would most plausibly rely on side-chain chemistry and spatial complementarity rather than a discrete symbolic alphabet.
Unlike nucleic acids, amino acids do not pair through a uniform geometry or hydrogen-bonding scheme. Instead, proteins interact through combinations of hydrophobicity, charge, polarity, aromatic stacking, and steric fit. Therefore, an AA:AA code is inherently analog and multidimensional, not digital. A reasonable “code” would be based on classes of side-chain interactions—for example, positively charged residues pairing with negatively charged ones, hydrophobic patches aligning with hydrophobic patches, or aromatic residues engaging in π–π stacking. This resembles a physicochemical code rather than a base-pair code.
In practice, such a code already exists implicitly in protein–protein interaction domains, coiled-coil motifs, antibody–antigen interfaces, and self-assembling protein systems. From a design perspective, an explicit AA:AA code could be engineered by constraining proteins to limited alphabets or repeat motifs, where interaction specificity emerges from repeating patterns of side-chain chemistry and geometry. This mirrors how TALE proteins use a simple amino-acid pair code to recognize DNA, but adapted to protein–protein recognition. Thus, AA:AA coding is best understood not as a lookup table, but as a designed interaction grammar grounded in chemistry and structure.
PROMPT :
You can find here the slides of Reading & Writing Life from George Church. Normally I have to answer one of these two questions (only one of them, not both):
A. [Using Google & Prof. Church’s slide #4] What are the 10 essential amino acids in all animals and how does this affect your view of the “Lysine Contingency”?
B. [Given slides #2 & 4 (AA:NA and NA:NA codes)] What code would you suggest for AA:AA interactions?
So, I don’t know which one is easier to answer at the moment. We will use the PDF to determine which question is most likely to be covered within it and reduce the uncertainty. To do this, we will:
Break down the problem into sub-elements.
Treat each of them with an explicit confidence level (0.0 - 1.0).
Verify by checking the logic, facts, completeness, possible biases, what could be done, and how to achieve the objective (determine which question is easiest to answer without being wrong and give the answer).
Synthesize and combine using weighted confidence levels.
If confidence is below 0.8, identify weaknesses and how we could proceed to achieve a better level. Formulate clear answers, the level of confidence, and key points for vigilance.
Gel electrophoresis is a fascinating process that allows DNA fragments to be separated according to size. The migration of fragments through agarose reveals invisible molecular differences as visible banding patterns. What interests me most is how information encoded in DNA becomes a spatial structure that can be interpreted visually.
1. Benchling and In-silico Gel Art
Following the protocol Gel Art: Restriction Digests and Gel Electrophoresis, I explored Benchling to simulate restriction enzyme digestion.
I imported Lambda DNA and simulated restriction digests using:
EcoRI
HindIII
BamHI
KpnI
EcoRV
SacI
SalI
At first, I was not entirely sure what I was doing — I explored the platform experimentally, clicking through the interface to understand how the enzymes cut and how the DNA fragments were segmented. In particular, I wanted to see:
The origin of each fragment.
Which operons were targeted.
How the result could/should look on a gel.
After generating fragment sizes, I moved to Ronan Donovan’s gel art tool:
There, I entered fragment sizes from Benchling to simulate gel patterns. Interestingly, some of the same enzymes were already available in the tool. I experimented with fragment combinations, spacing, and contrast to create structured visual patterns. I tried composing a geometric “invader”-like form inspired by Paul Vanouse’s aesthetic approach.
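The band placement logic behind these simulations can be sketched with a simple rule of thumb: migration distance falls roughly linearly with log10(fragment size). The coefficients below are made-up illustration values, not calibrated to a real gel, and the fragment sizes are the familiar large bands of a Lambda/EcoRI digest:

```python
# Rough sketch of gel band ordering: migration distance decreases
# approximately linearly with log10(fragment length).
# Coefficients a and b are illustrative placeholders, not calibrated values.

import math

def migration_mm(size_bp: int, a: float = 70.0, b: float = 14.0) -> float:
    """Approximate migration distance in mm for a fragment of size_bp."""
    return a - b * math.log10(size_bp)

# Example fragment sizes (bp), as produced by a Lambda DNA EcoRI digest
for size in (21226, 7421, 5804, 3530):
    print(f"{size:>6} bp -> {migration_mm(size):.1f} mm")
```

Smaller fragments travel further down the gel, which is exactly what the gel art tool exploits when mapping fragment sizes to visual band positions.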
This exercise made clear how restriction logic can generate latent images embedded in genomic structure.
2. Gel Art Restriction Digests and Electrophoresis
Although I do not currently have access to a laboratory to run physical gels, I have previously seen Paul Vanouse’s Latent Figure Protocol artworks in person. A curator friend owns one and showed it to me during a bioart conference. Seeing the material presence of gel-based imagery deeply informed how I approached this in-silico exercise.
3.1 Choose Your Protein
Protein chosen: Tyrosinase (Tyr1) from Bacillus megaterium
I chose the tyrosinase Tyr1 enzyme from Bacillus megaterium because it directly enables the biosynthesis of eumelanin, a redox-active polymer with electronic and material properties.
This protein is particularly relevant to my project, which explores the biological functionalization of bacterial cellulose. By expressing Tyr1 in Komagataeibacter rhaeticus, the cellulose-producing bacterium can generate eumelanin within or around the extracellular matrix, thereby modifying the optical and potentially electroactive properties of the material.
Tyr1 can thus encode a macroscopic material property in the cells of Komagataeibacter rhaeticus.
This strategy has been experimentally validated in:
Walker et al., 2024 — Self-pigmenting textiles grown from cellulose-producing bacteria with engineered tyrosinase expression
The authors engineered K. rhaeticus strains expressing Tyr1 and demonstrated eumelanin production under buffered neutral pH conditions (Supplementary Data 1, 41587_2024_2194_MOESM1_ESM).
The paper is here 👉 https://www.nature.com/articles/s41587-024-02194-3
Details are in Supplementary information / Supplementary Figs. 1–5, Tables 1–3 and Supplementary Data 1 and 2.
DNA (tyr1)
↓ transcription
mRNA
↓ translation
Tyrosinase enzyme
↓ catalytic activity
Melanin polymer
↓ integration into cellulose matrix
Modified impedance
↓
Human interaction (touch sensing)
Rather than maximizing conductivity, the goal is to explore how living metabolism can reorganize electrochemical properties within a self-grown material matrix. A hybrid approach combining enzymatic melanin production with controlled ionic modulation of the growth medium may provide a balance between biological emergence and measurable electrical response.
Impedance vs Conduction (for BC-based “electronic” biofilms)
Quick intuition
Conduction (DC): current flows through a continuous conductive path (like a wire). → You usually need a percolating network (CNT/graphene/metal paths).
Impedance (AC): the material reacts to an alternating signal. → It mixes resistance + capacitance + ionic diffusion, and is extremely sensitive to: hydration, ions, porosity, and microstructure.
Since my goal is a tactile / capacitive surface, impedance is usually the most realistic target.
Concept map (what you measure defines what you build)
```mermaid
flowchart TB
    A[BC biofilm / pellicle] --> B{What do I measure?}
    B -->|DC or very low frequency| C[Conduction]
    B -->|AC multi-frequency| D[Impedance]
    C --> C1[Electronic transport]
    C --> C2[Needs a percolating path]
    C2 --> C3[CNT graphene Ag nanowires PEDOT]
    D --> D1[Resistance R]
    D --> D2[Capacitance C double layer]
    D --> D3[Ionic diffusion]
    D --> D4[Strongly affected by water ions porosity]
    D4 --> D5[Good for touch pressure hydration sensing]
```
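The "resistance + capacitance" mix in the impedance branch can be made concrete with a deliberately simplified model: treat the wet BC pellicle as a resistor R in parallel with a capacitor C and compute how the impedance magnitude |Z| varies with frequency. This is my own toy simplification with placeholder component values, not a measured model of bacterial cellulose:

```python
# Toy frequency sweep of a parallel RC model (placeholder values,
# not measured BC properties): |Z| is high at low frequency and
# drops as the capacitive path takes over.

import cmath
import math

def parallel_rc_impedance(r_ohm: float, c_farad: float, freq_hz: float) -> complex:
    """Complex impedance of a resistor R in parallel with a capacitor C."""
    omega = 2 * math.pi * freq_hz
    z_c = 1 / (1j * omega * c_farad)  # impedance of the capacitor
    return (r_ohm * z_c) / (r_ohm + z_c)

R, C = 100e3, 10e-9  # placeholders: 100 kOhm, 10 nF
for f in (10, 1e3, 1e5):
    z = parallel_rc_impedance(R, C, f)
    print(f"{f:>8.0f} Hz  |Z| = {abs(z):,.0f} Ohm  phase = {cmath.phase(z):.2f} rad")
```

Touch, pressure, and hydration all shift the effective R and C of the material, which is why a multi-frequency sweep carries more information than a single DC resistance reading.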
Structural Micro-Analysis: Copper Coordination and Catalysis
Tyrosinases are type-3 copper enzymes. Their catalytic activity depends on two copper-binding sites (CuA and CuB), each coordinated by conserved histidine residues. These histidine-rich motifs enable:
Binding of molecular oxygen
Oxidation of L-tyrosine to L-DOPA
Conversion to dopaquinone
Spontaneous polymerization into eumelanin
The presence of conserved histidines in the Tyr1 sequence reflects this copper-dependent catalytic architecture. The structure of the enzyme directly determines its ability to generate a redox-active polymer. Here, protein folding and metal coordination translate genetic code into material transformation.
Why This Protein Matters
1️⃣ DNA → enzyme → material property
The tyr1 gene encodes an enzyme.
The enzyme catalyzes a redox reaction.
The reaction produces a polymer.
The polymer modifies material properties.
Thus:
Genetic sequence
→ enzyme structure
→ catalytic activity
→ polymer formation
→ macroscopic material pigmentation and electroactivity
This directly embodies the theme of “Reading & Writing Life.”
2️⃣ Minimal synthetic biology intervention
Unlike multi-gene systems (e.g., the curli operon), Tyr1:
Requires only one coding sequence
Does not modify the cellulose biosynthesis operon (bcsABCD)
Adds a new functional layer to the extracellular matrix
This makes it an elegant example of additive biological functionalization, where a single gene expands the material phenotype of a living system.
3️⃣ Connection to biofabrication and living materials
Walker et al. demonstrate patterned melanin production using optogenetic control of tyr1 expression.
This shows that:
Gene expression can spatially control material coloration
Biological programming can encode textile-level patterning
Material properties can be written through genetic regulation
This aligns directly with my research interest in growing functionalized 3D artifacts and interactive living interfaces.
Beyond color: material programming
In this project, the goal is not simply to encode color in DNA. The visible pigmentation is only a surface effect. What is actually encoded is a redox-active polymer production pathway inside living matter.
Melanin modifies:
local redox properties,
hydration-dependent conductivity,
and potentially the impedance behavior of the cellulose matrix.
So pigmentation here is not aesthetic encoding, but electrochemical encoding. The gene does not just change appearance — it changes how the material behaves electrically. In that sense, DNA is not only encoding a protein, but indirectly programming a material state.
3.2 Reverse Translate
Protein (amino acid) sequence → DNA (nucleotide) sequence
Because of codon degeneracy, there is no single unique DNA sequence for a given protein sequence: many different nucleotide sequences can encode the exact same amino acid sequence. To reverse-translate Tyr1, I selected one plausible codon per amino acid (a “standard/high-frequency codon” strategy), equivalent to what common online reverse-translation tools do.
The protein sequence used is Tyr1 from Bacillus megaterium as reported in Walker et al. 2024 (Supplementary Data 1).
Reverse-translated nucleotide sequence (one valid coding DNA sequence)
During this process, I realized that reverse translation is not deterministic. The genetic code is degenerate: multiple DNA sequences can encode the same protein. Choosing a DNA sequence is therefore not neutral. Codon usage can influence expression level, folding efficiency, and metabolic burden in the host organism. I am only beginning to understand how codon bias shapes expression outcomes. What appears to be a simple reverse translation step is in reality a probabilistic optimization problem shaped by cellular context.
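The "one codon per amino acid" strategy can be sketched in a few lines. The codon picks below form a hypothetical high-frequency table in the style of E. coli preferences; they are not the table actually used for the Tyr1 order:

```python
# Sketch of one-codon-per-amino-acid reverse translation.
# PREFERRED_CODON is a hypothetical high-frequency codon table,
# not the table used for the real Tyr1 sequence.

PREFERRED_CODON = {
    "M": "ATG", "A": "GCG", "G": "GGC", "S": "AGC", "T": "ACC",
    "K": "AAA", "E": "GAA", "D": "GAT", "L": "CTG", "V": "GTG",
    "I": "ATT", "P": "CCG", "F": "TTT", "Y": "TAT", "H": "CAT",
    "N": "AAC", "Q": "CAG", "R": "CGT", "C": "TGC", "W": "TGG",
}

def reverse_translate(peptide: str) -> str:
    """Return one valid coding DNA sequence for a peptide (no stop codon)."""
    return "".join(PREFERRED_CODON[aa] for aa in peptide)

print(reverse_translate("MSH"))  # ATG + AGC + CAT -> "ATGAGCCAT"
```

Swapping any entry in the table for a synonymous codon yields a different but equally valid coding sequence, which is exactly the non-determinism described above.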
3.3 Codon Optimization
Why codon optimization is necessary
Although multiple DNA sequences can encode the same protein due to codon degeneracy, not all synonymous codons are used equally in different organisms.
If a gene uses codons that are rare in the host organism:
Translation can be slow
Ribosomes may stall
Protein yield can decrease
Misfolding may increase
Codon optimization adjusts the nucleotide sequence to the codon-usage preferences of the expression host, thereby improving expression of the protein in the chosen organism.
Organism chosen for optimization
I chose to optimize the tyr1 gene for expression in:
Komagataeibacter rhaeticus
This organism is the cellulose-producing bacterium used in Walker et al. (2024) and is central to my broader project on biologically functionalized bacterial cellulose.
Because K. rhaeticus has a relatively high GC content and a codon usage profile distinct from Bacillus megaterium, optimization is required to:
Improve translation efficiency
Ensure stable protein production
Support effective melanin biosynthesis
Additional design constraints
Following best practices (e.g., Twist Bioscience optimization tools), the optimized sequence was also designed to:
Avoid Type IIS restriction enzyme recognition sites.
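A naive version of this host-aware design loop can be sketched as follows: pick each amino acid's most frequent codon from a host usage table, then screen the result for forbidden sites. The usage fractions are invented illustration values, not real K. rhaeticus data; the screened site is the BsaI recognition sequence:

```python
# Hedged sketch of codon optimization: choose the host's most frequent
# synonymous codon, then screen for forbidden Type IIS sites.
# USAGE holds invented frequencies, not real K. rhaeticus data.

USAGE = {
    "A": {"GCC": 0.5, "GCG": 0.3, "GCA": 0.1, "GCT": 0.1},
    "G": {"GGC": 0.6, "GGG": 0.2, "GGA": 0.1, "GGT": 0.1},
    "M": {"ATG": 1.0},
}

FORBIDDEN_SITES = ("GGTCTC",)  # BsaI recognition sequence

def optimize(peptide: str) -> str:
    """Pick the highest-frequency codon per residue; reject forbidden sites."""
    seq = "".join(max(USAGE[aa], key=USAGE[aa].get) for aa in peptide)
    for site in FORBIDDEN_SITES:
        if site in seq:
            raise ValueError(f"forbidden site {site} in sequence; re-pick codons")
    return seq

print(optimize("MAG"))  # ATG + GCC + GGC -> "ATGGCCGGC"
```

A real optimizer (such as the Twist tools mentioned above) would re-pick synonymous codons to remove a forbidden site instead of simply failing, and would also balance GC content and avoid secondary structure.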
Now that we have a codon-optimized DNA sequence encoding Tyr1, the next step is to express the protein.
How can this protein be produced?
There are two main strategies:
A. Cell-dependent expression (in vivo)
In this approach, the DNA sequence is inserted into a plasmid vector containing:
A promoter (constitutive or inducible)
A ribosome binding site (RBS)
The coding sequence (our optimized tyr1)
A transcription terminator
The plasmid is then introduced into a host organism (e.g., Komagataeibacter rhaeticus).
What happens biologically?
1️⃣ Transcription RNA polymerase binds to the promoter and transcribes the DNA sequence into messenger RNA (mRNA).
2️⃣ Translation Ribosomes bind to the mRNA at the ribosome binding site.
The mRNA codons (triplets) are read.
tRNAs deliver amino acids corresponding to each codon.
The ribosome assembles the amino acids into the Tyr1 protein.
3️⃣ Protein folding and function The translated Tyr1 protein folds into its 3D structure.
Copper ions bind to the active site.
The enzyme becomes catalytically active.
In the case of Tyr1, the enzyme catalyzes the first oxidation steps of the melanin pathway:
L-tyrosine → L-DOPA → dopaquinone → eumelanin
(the final polymerization of dopaquinone into eumelanin proceeds largely spontaneously).
Thus, genetic information becomes a catalytic material transformation.
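The transcription and translation steps above can be sketched computationally. This is a toy model with a deliberately partial genetic code table, just large enough for the example sequence; real translation involves initiation factors, tRNA charging, and much more.

```python
# Toy model of the Central Dogma: transcribe a DNA coding strand into mRNA,
# then translate codons into amino acids using a (partial) genetic code table.

CODON_TABLE = {  # partial table, enough for this toy CDS
    'AUG': 'M', 'UAC': 'Y', 'CGU': 'R', 'GGC': 'G', 'UAA': '*',
}

def transcribe(dna_coding_strand):
    """mRNA has the coding strand's sequence with U in place of T."""
    return dna_coding_strand.replace('T', 'U')

def translate(mrna):
    """Read codons 5'->3' until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == '*':  # stop codon: release the peptide
            break
        protein.append(aa)
    return ''.join(protein)

mrna = transcribe('ATGTACCGTGGCTAA')
print(mrna)            # AUGUACCGUGGCUAA
print(translate(mrna)) # MYRG
```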
B. Cell-free expression (in vitro)
Alternatively, the DNA sequence can be expressed using a cell-free transcription-translation system.
These systems contain:
RNA polymerase
Ribosomes
tRNAs
Amino acids
Energy sources
The optimized DNA is added directly to the mixture.
Transcription and translation occur outside living cells.
Advantages:
Rapid prototyping
No need for transformation
Easier control of conditions
For exploratory material functionalization, cell-free systems can serve as a fast validation step before in vivo implementation.
From DNA to Protein: Summary
DNA (double-stranded) → transcription → mRNA (single-stranded) → translation → Protein (amino acid chain)
This is the operational flow of the Central Dogma.
From DNA to material interface
If we follow the chain of transformations:
DNA (tyr1 gene) → mRNA → Tyrosinase enzyme → Melanin polymer formation → Integration into the cellulose matrix → Modified impedance behavior → Human touch interaction
This illustrates that writing DNA is not only about modifying organisms. It is about programming how matter self-organizes over time.
My interest is not genetic novelty for its own sake, but how cellular metabolism reorganizes matter into functional biofilms.
4. Prepare a Twist DNA Synthesis Order
4.1. Create a Twist account and a Benchling account
To visually represent the genetic architecture, the construct can also be recreated using 🔗 https://sbolcanvas.org. SBOL Canvas enables graphical design of:
Promoter → RBS → CDS → Terminator
This helps communicate the logic of the construct clearly and aligns with synthetic biology design standards.
5. DNA Read/Write/Edit
5.1 DNA Read
(i) What DNA would I want to sequence and why?
I would sequence synthetic DNA used for digital data storage.
Instead of reading DNA from living organisms, I am interested in DNA that has been artificially synthesized to encode digital information (text, images, archives). In this case, DNA is not used as biology — it is used as a storage material.
Why this matters
DNA can store enormous amounts of information in a very small volume and can remain stable for long periods of time. Unlike hard drives or servers, it does not depend on electronic infrastructure.
Sequencing such DNA means reading information written into molecules. It turns a biological technology into a digital decoding tool.
This connects biology with computation and raises important questions about how information can be stored, preserved, and accessed in the future.
(ii) Which sequencing technology would I use and why?
I would use Illumina sequencing.
Why Illumina?
DNA used for digital storage is usually made of short fragments. Illumina sequencing works very well for short pieces of DNA and provides very high accuracy, which is essential when decoding stored data.
Small sequencing errors could corrupt the recovered file, so accuracy is more important than read length.
What generation is it?
Illumina is considered a second-generation sequencing technology because it reads millions of DNA fragments at the same time using amplified clusters and fluorescent signals.
What is the input and preparation?
Input: Synthetic double-stranded DNA fragments (around 100–200 base pairs).
Basic preparation steps:
Add short adapter sequences to the DNA fragments
Amplify the fragments
Load them onto the sequencing machine
How does Illumina read DNA?
The machine synthesizes a complementary copy of each fragment one base at a time. Each incorporated base (A, T, C, G) produces a specific fluorescent signal.
A camera records the signal after each cycle, and software converts these signals into a DNA sequence.
What is the output?
The output is a digital file containing:
The DNA sequences
A quality score for each base
In DNA data storage, these sequences are decoded back into binary information using error-correction algorithms.
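The decoding principle can be shown with a minimal two-bits-per-base mapping. This is an illustration only: real storage schemes (e.g., fountain-code-based encodings) add indexing and error correction and avoid homopolymer runs, none of which appear in this sketch.

```python
# Minimal illustration of DNA data storage: map every 2 bits to one base.
# Real encodings add error correction, addressing, and homopolymer avoidance.

B2N = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
N2B = {v: k for k, v in B2N.items()}

def encode(data: bytes) -> str:
    """Bytes -> bit string -> one base per 2 bits."""
    bits = ''.join(f'{byte:08b}' for byte in data)
    return ''.join(B2N[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Bases -> bit string -> bytes (8 bits = 4 bases per byte)."""
    bits = ''.join(N2B[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

dna = encode(b'Hi')
print(dna)          # CAGACGGC
print(decode(dna))  # b'Hi'
```

A single substitution error in the strand corrupts one byte of the output, which is exactly why high-accuracy reads (and error-correcting codes) matter more than read length here.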
Why not Nanopore?
Nanopore sequencing can read longer DNA fragments, but it generally has higher error rates. For digital data storage, accuracy is more important than read length, so Illumina is currently more suitable.
Conceptual note
In this case:
DNA is not a gene. It is a storage medium.
Sequencing is not diagnosing life. It is decoding information.
5.2 DNA Write
(i) What DNA would I want to synthesize and why?
I would synthesize a genetic construct enabling melanin production in cellulose-producing bacteria.
More specifically, I would synthesize a codon-optimized version of the tyrosinase gene (tyr1) from Bacillus megaterium, placed inside an expression cassette designed for bacterial hosts.
The goal is to enable engineered Komagataeibacter strains (cellulose-producing bacteria) to produce eumelanin, a dark redox-active polymer, directly within the growing cellulose matrix.
This would allow:
Pigmented living materials
Potentially electroactive cellulose composites
A direct link between gene expression and macroscopic material properties
In this case, DNA is used not only to encode a protein, but to program a material property.
Example genetic construct (simplified expression cassette)
Promoter → RBS → tyr1 CDS → Stop → Terminator
This DNA construct would enable constitutive expression of tyrosinase, leading to melanin formation when L-tyrosine is available.
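Such a cassette can be represented and sanity-checked in silico before ordering synthesis. In the sketch below, every sequence is a hypothetical placeholder (not the real tyr1 gene or any validated promoter, RBS, or terminator); only the checks themselves reflect standard ORF conventions.

```python
# Sketch: represent the expression cassette as ordered parts and run basic
# open-reading-frame checks on the CDS. All sequences are placeholders.

cassette = [
    ('promoter',   'TTGACAGCTAGCTCAGTCCTAGG'),       # placeholder promoter
    ('rbs',        'AGGAGG'),                        # placeholder RBS
    ('tyr1_cds',   'ATGAAACACCTGTAA'),               # placeholder stand-in CDS
    ('terminator', 'AAAAAAGGCCGCTTCGGCGGCCTTTTTT'),  # placeholder terminator
]

def cds_checks(cds):
    """Minimal ORF checks: start codon, reading frame, stop codon."""
    return {
        'starts_with_ATG': cds.startswith('ATG'),
        'in_frame': len(cds) % 3 == 0,
        'ends_with_stop': cds[-3:] in ('TAA', 'TAG', 'TGA'),
    }

full_construct = ''.join(seq for _, seq in cassette)
print(f"construct length: {len(full_construct)} bp")
print(cds_checks(dict(cassette)['tyr1_cds']))
# {'starts_with_ATG': True, 'in_frame': True, 'ends_with_stop': True}
```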
(ii) What DNA synthesis technology would I use and why?
I would use phosphoramidite-based chemical DNA synthesis, the standard method used by companies like Twist Bioscience.
This is currently the most reliable and scalable way to synthesize custom genes.
Essential steps of DNA synthesis
Chemical synthesis of short oligonucleotides
Assembly of oligos into full-length genes
Error correction and amplification
Cloning into a plasmid vector
Sequence verification
Limitations
Error rates increase with sequence length
GC-rich or repetitive sequences are harder to synthesize
Cost increases with size
Large constructs require assembly from smaller fragments
Scalability
Scaling biological material production is not a simple matter of increasing volume. In living systems, morphology, oxygen gradients, metabolic stress, and contamination risks can fundamentally alter structural outcomes. A construct that works at lab scale does not automatically behave the same way at industrial scale. Understanding small-scale control is therefore a necessary first step before considering scalability.
Despite these limits, gene-scale synthesis (1–3 kb) is highly robust today.
5.3 DNA Edit
My ambition is not maximal intervention, but controlled understanding. Before redesigning entire biosynthetic networks, I prefer mastering one enzymatic layer and observing its material consequences. Editing cellulose crystallinity, secretion pathways, or c-di-GMP regulation may be possible in the future. But at this stage, focusing on a single added function (tyr1 expression) allows a clearer understanding of cause and effect.
(i) What DNA would I want to edit and why?
I would want to edit the genome of a cellulose-producing bacterium (Komagataeibacter) to integrate a functional gene such as tyr1 directly into its chromosome.
Rather than keeping the gene on a plasmid, chromosomal integration would:
Increase genetic stability
Reduce reliance on antibiotic selection
Make the engineered trait more sustainable
More broadly, DNA editing could be used to:
Improve cellulose yield
Modify crystallinity or fiber structure
Control secretion of functional proteins
Enable responsive or patterned material growth
The objective is material programming through living systems such as Komagataeibacter, which raises technical and ecological questions regarding containment, stability, and unintended environmental interactions.
(ii) What technology would I use and why?
I would use CRISPR-Cas9 genome editing.
CRISPR allows precise insertion or modification of DNA at specific genomic locations.
How CRISPR edits DNA
Design a guide RNA (gRNA) targeting a specific genomic sequence
Deliver Cas9 protein and guide RNA into the cell
Cas9 creates a double-strand break at the target site
Provide a donor DNA template containing the new gene
The cell repairs the break using the donor template (homology-directed repair)
Required inputs
Guide RNA sequence
Cas9 enzyme (plasmid or ribonucleoprotein complex)
Donor DNA template
Competent cells
Transformation system
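The first required input, the guide RNA, is chosen by scanning the target locus for PAM sites. A toy sketch of that search (top strand only, SpCas9 'NGG' PAM; a real design tool also scans the reverse strand and scores off-targets, and the example sequence is invented):

```python
# Toy guide-RNA candidate finder: scan the top strand for SpCas9 'NGG' PAM
# sites and report the 20-nt protospacer immediately 5' of each PAM.

def find_guides(target, protospacer_len=20):
    """Return (protospacer, PAM) pairs found on the top strand."""
    guides = []
    for i in range(protospacer_len, len(target) - 2):
        if target[i + 1:i + 3] == 'GG':  # PAM = NGG at positions i..i+2
            guides.append((target[i - protospacer_len:i], target[i:i + 3]))
    return guides

site = 'ACGT' * 6 + 'TGG' + 'ACGT'  # invented example locus
for protospacer, pam in find_guides(site):
    print(protospacer, pam)  # ACGTACGTACGTACGTACGT TGG
```

Cas9 then cuts about 3 bp upstream of the PAM, inside the protospacer, which is where the donor template's homology arms must be centered.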
Limitations
Editing efficiency may be low in non-model organisms
Off-target edits are possible
Homology-directed repair is not always efficient
Delivery systems can be challenging
Despite these challenges, CRISPR remains the most precise and flexible editing tool currently available.
Conceptual note
DNA writing defines what is possible. DNA editing reshapes what already exists.
Together, they allow us to move from reading biology to programming living materials.
Personal note
Writing DNA is not merely modifying organisms; it is programming how matter self-organizes over time.
6. Exploration of other strategies for my project
Compare 4 Conductivity Strategies (What They Really Do)
| Strategy | Main Effect | What Carries the Signal | Typical Ingredients | What Changes in the Material | Difficulty (1–5) |
|---|---|---|---|---|---|
| 1) Graphene/CNT in-situ | Strong conduction | Electronic percolation network | CNT, graphene/rGO, dispersion aid | Large drop in resistance, wire-like behavior | 2/5 |
| 2) PEDOT:PSS / Ag nanowires (post or in-situ) | Strong conduction | Polymer film or metallic network | PEDOT:PSS, AgNWs | High conductivity, less biological emergence | 3/5 |
| 3) Tyr1 → melanin (bio-made dopant) | Impedance / redox / protonic | Redox polymer + water/ions | Tyr1 + Cu²⁺ + tyrosine (neutral pH development step) | Response depends on hydration and pH; useful for sensing | 3/5 |
| 4) Curli-like programmable fibers | Structure + hybrid conduction | Protein fibers + mineralization or binding | Curli operon + metal-binding peptides | Programmable and patternable but multi-gene | 4/5 |
Reading this table
If the goal is to create a wire-like conductive material, strategies 1–2 are the most effective.
If the goal is to create a living sensing surface, strategy 3 (and potentially 4 later) is more coherent.
Is Tyr1 Enough for a Tactile Sensor?
What I am actually building with Tyr1 + BC
With Tyr1/melanin integrated into bacterial cellulose, I am not creating a high-performance conductor.
Instead, I am most likely creating an impedance-based sensor.
Two realistic sensing behaviors:
1. Touch or pressure → impedance change. Mechanical compression modifies the microstructure and redistributes water and ions, so impedance changes, especially at specific AC frequencies.
2. Hydro-tactile response (touch + humidity). BC is highly sensitive to hydration, which gives strong signal variation, but humidity must be controlled to avoid false readings.
Does the material need to remain alive?
Not necessarily.
Two modes are possible:
Living mode: The material continues to grow and self-organize. Advantage: conceptual strength, biological emergence. Limitation: signal drift over time due to metabolism and morphology changes.
Stabilized mode: Washed and partially dried but rehydratable. Advantage: more stable electrical behavior and easier reproducibility.
For experimental validation, a stabilized but responsive material may be more reliable.
Are electrodes + an amplifier + an ADC sufficient?
Yes — provided that impedance (AC) is measured correctly.
Basic workflow:
Inject a small, known AC excitation signal
Measure amplitude (and ideally phase)
Test at 2–3 different frequencies
Possible hardware:
Dedicated impedance IC (e.g., AD5933 / AD5940)
Or a simple bridge with sinusoidal excitation and amplitude measurement
Electrode configurations:
2-electrode setup: simpler, but more sensitive to contact variability.
4-electrode setup: better separation of excitation and measurement, more stable results.
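The AC workflow above can be prototyped in software before touching hardware. The sketch below simulates a sensor response (the sampling rate, excitation frequency, amplitude, and noise level are all illustrative assumptions) and extracts amplitude and phase with a lock-in-style demodulation, the same math an AD5933-class chip performs internally.

```python
# Lock-in-style estimation of the amplitude and phase of a sensor's response
# to a known AC excitation. The "measured" signal is simulated here; on real
# hardware it would come from the ADC. Multiplying by sin/cos references at
# the excitation frequency and averaging rejects noise at other frequencies.
import numpy as np

fs = 50_000          # sampling rate (Hz) - illustrative
f0 = 1_000           # excitation frequency (Hz) - illustrative
t = np.arange(0, 0.1, 1 / fs)  # 0.1 s = exactly 100 excitation cycles

true_amp, true_phase = 0.42, 0.30                 # simulated material response
rng = np.random.default_rng(0)
signal = true_amp * np.sin(2 * np.pi * f0 * t + true_phase)
signal += 0.05 * rng.standard_normal(t.size)      # measurement noise

# Demodulate against in-phase and quadrature references at f0
i_comp = 2 * np.mean(signal * np.sin(2 * np.pi * f0 * t))
q_comp = 2 * np.mean(signal * np.cos(2 * np.pi * f0 * t))
amplitude = np.hypot(i_comp, q_comp)
phase = np.arctan2(q_comp, i_comp)

print(f"amplitude ~ {amplitude:.3f}, phase ~ {phase:.3f} rad")
```

Repeating this at 2–3 excitation frequencies, as suggested above, gives the small impedance spectrum needed to separate pressure effects from hydration effects.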
See what I have already done with bacterial cellulose and electrode signals:
Minimal validation experiment
To test whether Tyr1 truly contributes:
BC control (no Tyr1)
BC + Tyr1 + Cu²⁺/tyrosine development step
Compare impedance vs pressure and humidity
If Tyr1 produces a measurable difference compared to control, the biological contribution is validated.
Why Copper Matters and Hybrid Strategies
Do I need to add copper?
Yes.
Tyrosinase is copper-dependent. Without sufficient Cu²⁺, enzyme activity drops significantly.
However:
Too little copper → weak melanin synthesis
Too much copper → toxicity and growth inhibition
Copper availability must therefore be optimized, not maximized.
Can other dopants improve impedance?
Yes, but carefully.
Impedance in BC systems is strongly influenced by:
Hydration level
Ionic concentration
Microstructure and porosity
A mild hybrid strategy can be interesting:
Tyr1 provides a biologically synthesized redox layer
Assignment: Python Script for Opentrons Artwork
https://opentrons-art.rcdonovan.com/?id=sux110hip535fnx
Statement of Intent — Why Reaction–Diffusion
I have been playing with reaction–diffusion algorithms for a long time. I keep returning to them out of curiosity — both for their history (their direct link to Alan Turing’s work on emergent patterns) and for what they suggest in terms of morphogenesis.
What interests me is not the naïve idea that “everything reduces” to these equations, but rather the fact that a very simple model can produce rich structures that resemble certain biological patterns. In developmental biology, reaction–diffusion models are often invoked to explain parts of gradient formation, repetition, or textural organization (and, to a limited extent, aspects of differentiation). Of course, real biological systems are far more complex: mechanical constraints, multi-scale signaling, feedback loops, and energetic limitations all play crucial roles.
Precisely for this reason, in the context of this Opentrons experiment, I am interested in translating a dynamic of emergence into a very concrete material gesture — an image composed of discrete deposits, where a continuous phenomenon (reaction and diffusion) becomes a physical field of dots.
```python
from opentrons import types
import numpy as np

metadata = {
    # see https://docs.opentrons.com/v2/tutorial.html#tutorial-metadata
    'author': 'Vivien',
    'protocolName': 'Gray-Scott Reaction-Diffusion Pattern',
    'description': 'Generates a reaction-diffusion pattern on an agar plate '
                   'using the Gray-Scott model, pipetting where the B '
                   'concentration is high.',
    'source': 'HTGAA 2026 Opentrons Lab',
    'apiLevel': '2.20',
}

##############################################################################
# Gray-Scott model parameters and functions
##############################################################################
GS_DA = 1.0
GS_DB = 0.5
GS_F = 0.014  # Feed rate. Try 0.035 (spots), 0.014 (worms), 0.062 (moving spots)
GS_K = 0.045  # Kill rate. Try 0.065 (spots), 0.045 (worms), 0.061 (moving spots)
GS_DT = 1.0
# 1. Mitosis / spots : f=0.0367, k=0.0649
# 2. Solitons / worms: f=0.030,  k=0.062
# 3. Coral / dense   : f=0.0545, k=0.062
# 4. Vibe            : f=0.029,  k=0.057
# 5. Labyrinth       : f=0.060,  k=0.063

def laplace(Z):
    """Weighted 3x3 Laplacian (Karl Sims style) using numpy.roll."""
    return (-1.0 * Z
            + 0.2 * (np.roll(Z, 1, axis=0) + np.roll(Z, -1, axis=0)
                     + np.roll(Z, 1, axis=1) + np.roll(Z, -1, axis=1))
            + 0.05 * (np.roll(np.roll(Z, 1, axis=0), 1, axis=1)
                      + np.roll(np.roll(Z, 1, axis=0), -1, axis=1)
                      + np.roll(np.roll(Z, -1, axis=0), 1, axis=1)
                      + np.roll(np.roll(Z, -1, axis=0), -1, axis=1)))

def simulate_gray_scott(A, B, num_iterations):
    """Run the Gray-Scott update for num_iterations steps.
    Modifies A and B in place; returns them for clarity."""
    for _ in range(num_iterations):
        lapA = laplace(A)
        lapB = laplace(B)
        reaction = A * B * B
        A += (GS_DA * lapA - reaction + GS_F * (1 - A)) * GS_DT
        B += (GS_DB * lapB + reaction - (GS_K + GS_F) * B) * GS_DT
    return A, B

##############################################################################
# Robot deck setup constants - don't change these
##############################################################################
TIP_RACK_DECK_SLOT = 9
COLORS_DECK_SLOT = 6
AGAR_DECK_SLOT = 5
PIPETTE_STARTING_TIP_WELL = 'A1'
well_colors = {'A1': 'Red', 'B1': 'Green', 'C1': 'Orange'}

def run(protocol):
    # --- Load labware, modules and pipettes ---
    tips_20ul = protocol.load_labware('opentrons_96_tiprack_20ul',
                                      TIP_RACK_DECK_SLOT, 'Opentrons 20uL Tips')
    pipette_20ul = protocol.load_instrument("p20_single_gen2", "right", [tips_20ul])
    temperature_module = protocol.load_module('temperature module gen2', COLORS_DECK_SLOT)
    temperature_plate = temperature_module.load_labware(
        'opentrons_96_aluminumblock_generic_pcr_strip_200ul', 'Cold Plate')
    color_plate = temperature_plate  # where the colors are taken from
    agar_plate = protocol.load_labware('htgaa_agar_plate', AGAR_DECK_SLOT, 'Agar Plate')
    # TA MUST CALIBRATE EACH PLATE! Top-center of the (calibrated) plate:
    center_location = agar_plate['A1'].top()
    pipette_20ul.starting_tip = tips_20ul.well(PIPETTE_STARTING_TIP_WELL)

    def location_of_color(color_string):
        """Pass e.g. 'Red'; get back a Location usable with aspirate()."""
        for well, color in well_colors.items():
            if color.lower() == color_string.lower():
                return color_plate[well]
        raise ValueError(f"No well found with color {color_string}")

    def dispense_and_detach(pipette, volume, location):
        """Move laterally 5 mm above the plate (to avoid smearing a drop), drop
        down, dispense, then move back up 5 mm to detach the drop and stay high
        for the next lateral move. 5 mm because a 4 uL drop is ~2 mm in diameter
        and a 2-degree tilt in the agar pour is >3 mm across a plate."""
        assert isinstance(volume, (int, float))
        above_location = location.move(types.Point(z=location.point.z + 5))
        pipette.move_to(above_location)
        pipette.dispense(volume, location)
        pipette.move_to(above_location)

    # --- Gray-Scott simulation (YOUR CODE: design generation) ---
    size = 100  # smaller grid = faster simulation, denser pattern on the plate
    A = np.ones((size, size))
    B = np.zeros((size, size))
    r = 5  # central square seed for B
    A[size//2 - r:size//2 + r, size//2 - r:size//2 + r] = 0.50
    B[size//2 - r:size//2 + r, size//2 - r:size//2 + r] = 0.25
    # Small random noise breaks symmetry and encourages diverse patterns
    A += np.random.rand(size, size) * 0.05
    B += np.random.rand(size, size) * 0.05
    print("B stats:", float(B.min()), float(B.mean()), float(B.max()))
    A = np.clip(A, 0.0, 1.0)
    B = np.clip(B, 0.0, 1.0)

    print("Running Gray-Scott simulation...")
    SIMULATION_ITERATIONS = 8000
    A, B = simulate_gray_scott(A, B, SIMULATION_ITERATIONS)
    print(f"Gray-Scott simulation complete after {SIMULATION_ITERATIONS} iterations. Starting patterning.")
    print("To run more iterations, change the 'SIMULATION_ITERATIONS' variable "
          "in the code and re-run this cell and the visualization cell below.")

    # Compute thresholds from the POST-simulation B field. (Using B_max from the
    # initial seed, as in the run logged below, left every sampled cell under
    # the lowest threshold, so zero drops were dispensed.)
    B_max = float(B.max())

    # --- Patterning parameters ---
    PIPETTE_VOLUME = 1        # 1 uL per dot
    ASPIRATE_VOLUME = 16      # aspirate up to this much at a time
    MAX_DOTS_PER_COLOR = 300  # cap the number of dots per color
    COLOR_FOR_PRIMARY_PATTERN = 'Green'     # mid-B band ('thick lines')
    COLOR_FOR_SECONDARY_PATTERN = 'Orange'  # lower-B band
    COLOR_FOR_TERTIARY_PATTERN = 'Red'      # highest-B band
    PRIMARY_PATTERN_B_LOWER_THRESHOLD = 0.40 * B_max
    PRIMARY_PATTERN_B_UPPER_THRESHOLD = 0.70 * B_max  # also the lower bound for Red
    SECONDARY_PATTERN_B_THRESHOLD = 0.20 * B_max      # adjust as needed
    print("Using Red Pattern (Highest B) threshold: B >=", PRIMARY_PATTERN_B_UPPER_THRESHOLD)
    print("Using Green Pattern (Mid B - 'thick lines') B thresholds: (",
          PRIMARY_PATTERN_B_LOWER_THRESHOLD, ",", PRIMARY_PATTERN_B_UPPER_THRESHOLD, ")")
    print("Using Orange Pattern (Lower B) threshold: (",
          SECONDARY_PATTERN_B_THRESHOLD, ",", PRIMARY_PATTERN_B_LOWER_THRESHOLD, "]")

    # Map the grid onto a 55 x 55 mm area (fits the 40 mm-radius safe zone)
    PATTERN_AREA_WIDTH_MM = 55
    PATTERN_AREA_HEIGHT_MM = 55
    scale_x = PATTERN_AREA_WIDTH_MM / size
    scale_y = PATTERN_AREA_HEIGHT_MM / size
    SAMPLING_STEP = 4  # pipette every Nth pixel to create distinct dots
    DOT_SPACING_MM = SAMPLING_STEP * scale_x
    print(f"Desired dot spacing for distinct patterns: {DOT_SPACING_MM:.2f} mm (approx. 2mm drop diameter).")

    def pipette_band(label, color, in_band):
        """Dispense one dot of `color` at every sampled grid cell whose B value
        satisfies in_band(B), re-aspirating whenever the tip runs low."""
        pipette_20ul.pick_up_tip()
        pipette_20ul.aspirate(ASPIRATE_VOLUME, location_of_color(color))
        current_volume = ASPIRATE_VOLUME
        count = 0
        print(f"Starting patterning for {label} with {color}.")
        for y in range(0, size, SAMPLING_STEP):
            for x in range(0, size, SAMPLING_STEP):
                if count >= MAX_DOTS_PER_COLOR:
                    break
                if not in_band(B[y, x]):
                    continue
                if current_volume < PIPETTE_VOLUME:
                    pipette_20ul.aspirate(ASPIRATE_VOLUME, location_of_color(color))
                    current_volume += ASPIRATE_VOLUME
                x_offset_mm = (x - size / 2) * scale_x
                y_offset_mm = (y - size / 2) * scale_y
                adjusted_location = center_location.move(
                    types.Point(x=x_offset_mm, y=y_offset_mm))
                dispense_and_detach(pipette_20ul, PIPETTE_VOLUME, adjusted_location)
                current_volume -= PIPETTE_VOLUME
                count += 1
            if count >= MAX_DOTS_PER_COLOR:
                break
        pipette_20ul.drop_tip()
        print(f"Total {count} drops pipetted for {label} using {color}.")

    # Red: highest B; Green: mid band ('thick lines'); Orange: lower band only
    pipette_band('Tertiary Pattern', COLOR_FOR_TERTIARY_PATTERN,
                 lambda b: b >= PRIMARY_PATTERN_B_UPPER_THRESHOLD)
    pipette_band('Primary Pattern', COLOR_FOR_PRIMARY_PATTERN,
                 lambda b: PRIMARY_PATTERN_B_LOWER_THRESHOLD < b < PRIMARY_PATTERN_B_UPPER_THRESHOLD)
    pipette_band('Secondary Pattern', COLOR_FOR_SECONDARY_PATTERN,
                 lambda b: SECONDARY_PATTERN_B_THRESHOLD < b <= PRIMARY_PATTERN_B_LOWER_THRESHOLD)
# Execute Simulation / Visualization -- don't change this code block
protocol = OpentronsMock(well_colors)
run(protocol)
protocol.visualize()
B stats: 1.1769677182305039e-07 0.02738330028607403 0.2993330786497384
Running Gray-Scott simulation...
Gray-Scott simulation complete after 8000 iterations. Starting patterning.
To run more iterations, change the 'SIMULATION_ITERATIONS' variable in the code and re-run this cell and the visualization cell below.
Using Red Pattern (Highest B) threshold: B >= 0.20953315505481687
Using Green Pattern (Mid B - 'thick lines') B thresholds: ( 0.11973323145989537 , 0.20953315505481687 )
Using Orange Pattern (Lower B) threshold: ( 0.059866615729947684 , 0.11973323145989537 ]
Desired dot spacing for distinct patterns: 2.20 mm (approx. 2mm drop diameter).
Starting patterning for Tertiary Pattern with Red.
Total 0 drops pipetted for Tertiary Pattern using Red.
Starting patterning for Primary Pattern with Green.
Total 0 drops pipetted for Primary Pattern using Green.
Starting patterning for Secondary Pattern with Orange.
Total 0 drops pipetted for Secondary Pattern using Orange.
=== VOLUME TOTALS BY COLOR ===
Orange: aspirated 16 dispensed 0 ##### WASTING BIO-INK : more aspirated than dispensed!
Green: aspirated 16 dispensed 0 ##### WASTING BIO-INK : more aspirated than dispensed!
Red: aspirated 16 dispensed 0 ##### WASTING BIO-INK : more aspirated than dispensed!
[all colors]: [aspirated 48] [dispensed 0]
=== TIP COUNT ===
Used 3 tip(s) (ideally exactly one per unique color)
```
I used Gemini (2.5) to help translate a Gray-Scott reaction–diffusion model into a stable Opentrons protocol and to choose a robust rendering strategy (iso-contour band → dot sampling) that produces reliable aesthetic output under time/volume constraints.
Post-Lab Questions
Find and describe a published paper that utilizes the Opentrons or an automation tool to achieve novel biological applications.
Published Example: Automation in Synthetic Biology
Write a description about what you intend to do with automation tools for your final project.
Automation as a Tool to Explore the Behavioral Landscape of Living Materials
Automation tools such as Opentrons have been widely used for:
High-throughput DNA assembly
CRISPR editing workflows
Combinatorial genetic library screening
Automated protein expression testing
These systems increase reproducibility, reduce human variability, and enable scalable experimentation. In most cases, automation serves to optimize molecular workflows and accelerate genetic engineering cycles.
However, my interest in automation is different.
Moving Beyond Optimization
In classical bioprocess engineering, automation is used to:
Optimize growth conditions
Reduce experimental noise
Standardize reproducibility
Improve yield
But living materials — such as bacterial cellulose biofilms — do not behave like linear industrial systems.
They exhibit:
Non-linear responses
Narrow stability windows
Emergent morphologies
Phase transitions under small parameter shifts
Traditional experimental design (e.g., Taguchi matrices) assumes relatively smooth and predictable response surfaces. In living systems, this assumption often fails. Small changes in pH, oxygen availability, carbon source, or metal ions can lead to abrupt structural transitions.
Automation, in this context, is not merely an efficiency tool.
It becomes a way to systematically explore instability and emergence.
My Intended Use of Automation
1. Mapping Morphogenetic Regimes of Bacterial Cellulose
Instead of optimizing for maximum growth, I aim to use automation to map how living cellulose changes:
Thickness
Porosity
Impedance
Mechanical anisotropy
Conductive behavior
This transforms automation into a cartographic tool:
Not optimizing yield, but mapping the behavioral topology of a living material.
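Operationally, such a cartography starts from a condition grid rather than a single optimized recipe. A sketch of what an automated sweep could enumerate (the factor names and levels below are illustrative choices, not a validated experimental design):

```python
# Full-factorial condition grid for mapping (not optimizing) a living
# material's behavioral landscape. Factors and levels are illustrative.
from itertools import product

factors = {
    'pH': [4.5, 5.5, 6.5],
    'glucose_g_per_L': [10, 20],
    'Cu_uM': [0, 50, 200],
}

# Every combination of levels, in a stable, reproducible order
conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(conditions))   # 18 conditions = 3 * 2 * 3

# Assign each condition to a culture vessel on a 96-well-style grid
wells = [f"{row}{col}" for row in 'ABC' for col in range(1, 7)]
plate_map = dict(zip(wells, conditions))
print(plate_map['A1'])   # {'pH': 4.5, 'glucose_g_per_L': 10, 'Cu_uM': 0}
```

Each well's grown pellicle would then be measured (thickness, impedance, etc.), turning the plate into one sampled slice of the material's behavioral topology.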
2. Spatial Programming of Living Matter
A more ambitious direction is to use liquid handling automation to:
Deposit gradients of dopants
Create patterned functional zones
Introduce local conductivity modulation
Encode anisotropy into growing pellicles
Instead of post-processing materials (e.g., adding graphene or PEDOT in situ), this approach attempts to let functionality emerge during growth.
Automation allows spatial control.
Living matter performs the structuration.
3. Toward a Cybernetic Living Material System
A future extension would integrate measurement and feedback:
Grow bacterial cellulose
Measure impedance or electrical response
Adjust copper or nutrient concentration automatically
Iterate
This creates a cybernetic loop:
Living material → Measurement → Algorithmic adjustment → Modified growth
Automation becomes a mediator between biological behavior and computational control.
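A minimal sketch of the loop's control logic. The material "model," setpoint, and gain below are invented placeholders standing in for real impedance measurements of a growing pellicle; a real loop would drive pumps and an impedance analyzer instead.

```python
# Toy cybernetic loop: proportional control of copper dosing toward a target
# impedance. measure_impedance() is a placeholder model, not real data.

TARGET_IMPEDANCE = 50.0  # kOhm, illustrative setpoint
GAIN = 0.4               # uM of copper added per kOhm of error, illustrative

def measure_impedance(cu_uM):
    """Placeholder model: more copper -> more melanin -> lower impedance."""
    return 120.0 / (1.0 + 0.02 * cu_uM)

cu_uM = 10.0
for cycle in range(12):
    z = measure_impedance(cu_uM)
    error = z - TARGET_IMPEDANCE           # positive error -> dose more copper
    cu_uM = max(0.0, cu_uM + GAIN * error)
    print(f"cycle {cycle}: Z = {z:.1f} kOhm, next Cu = {cu_uM:.1f} uM")
```

The loop settles near the copper level at which the measured impedance meets the target; in a living system, drift in the material would continuously re-open that error, which is exactly what makes the loop cybernetic rather than a one-shot calibration.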
Why This Matters
This project shifts the role of automation from eliminating biological variability to engaging with it systematically.
Rather than forcing the living system into industrial predictability, automation is used to:
Detect bifurcations
Explore phase transitions
Reveal hidden regimes
Enable programmable morphogenesis
In this sense, automation becomes a bridge between:
Synthetic biology
Biofabrication
Morphogenesis
Cybernetic design
It allows living matter to be explored not as a static substrate, but as a dynamic, programmable system.