I work at the intersection of engineering, data science, and creativity, in service of people.
Currently, I'm exploring the world of adversarial models, AI safety and alignment, and technical governance (focusing on tech, energy and renewables, and biotech policy). Previously, I was a process-aligned engineering data scientist at the plasma pyrolysis company Monolith. Prior to that, I earned my B.S. in Chemical and Biological Engineering from CU Boulder, where my thesis covered experimental and computational microfluidics (facilitated in part by computer vision) under the advisorship of Dr. Gesse Roure and Prof. Rob Davis.
Open-Source, Community-Deployed Microfluidic & Smartphone Imaging Platform (Decentralized Health Science)

Table of Contents
1. Biological engineering tool
2. Governance and policy goals
3. Governance actions
4. Governance actions scoring matrix
5. Prioritization recommendation
Week 2 Lecture Prep

1. Biological engineering tool

Motivation

My senior year of high school, my sister was diagnosed with diabetes, and I switched my prospective major from CS to Chemical and Biological Engineering. Around the same time, I watched a TED talk by Dr. Manu Prakash on his paperfuge: an accessible, affordable, hand-powered centrifuge. The device was made from paper and string, yet could separate blood into its components comparably to multi-thousand-dollar equipment. Having grown up in a State Department family, moving around the global south (and in close proximity to USAID), I saw both how solutions and processes taken for granted in the West are inaccessible to a vast majority of the world, and the strength of local, context-specific solutions to technical problems (the Indian concept of "jugaad": frugal innovation).
My technical background thus far has its hands in the buckets of microfluidics, imaging, bioengineering, data science, and high-performance computation. My senior thesis in multiphase microfluidics studied microdroplet deformation as droplets traveled down a straight channel. I wrote Fortran simulations, and to compare them with a physical experiment I built a dimensionally scaled acrylic flow cell, observing the deformation with imaging software I wrote around my phone camera (leveraging a class I took on quantitative optical imaging). I'm quite terrible at wet-lab protocols, but having devices like flow cells made the process much more accessible for me (a computationalist). Our research could be applied to single-cell diagnostics, microreactors, and point-of-care assays. My senior design project for my degree in bioengineering involved designing a lentiviral vector production platform for CAR-T cell therapy: an extremely personalized medicine, but one made inaccessible by high cost and highly centralized, heavily regulated manufacturing.
The idea I’m tossing around generalizes concepts I’ve encountered in each of these different areas I’ve experienced so far.
The Platform: an Arduino for the Wet Lab

| Layer | Description | Notes |
| --- | --- | --- |
| Fabrication | Microfluidic chips | 3D resin-printed, open-source designs (potentially hosted on a shared platform). Different geometries for diagnostics, micro-reactors, dye synthesis, fermentation, cell culture, etc. Templates provided by researchers and the community. |
| Imaging | Phone camera + magnification add-on | A universally available and reasonably precise optical sensor. With AI-powered imaging algorithms (e.g. segmentation, undistortion), "ballpark" imaging yields a lot of information compared to having no access to any analysis at all. |
| Analysis | Node-based visual processing on any computer | Inspired by ComfyUI, TouchDesigner, Blender nodes, and DaVinci Resolve Fusion. Not everyone knows how to code, so users build pipelines by connecting visual blocks (with the option to drop into code for finer control). The platform that hosts chip designs can also hold AI models provided by researchers; workflows are shareable, auditable, and forkable. |
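The confidence-aware node idea can be sketched in a few lines. This is a minimal illustrative sketch, not an existing API: `Result`, `Node`, and `run_pipeline` are hypothetical names, and the blur check is a stand-in for a real image-quality metric.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch: each "node" wraps one processing step and carries a
# confidence score, so the pipeline itself surfaces where results become
# unreliable (safety embedded in design, not tacked on at the end).

@dataclass
class Result:
    data: object
    confidence: float                 # 0.0 (unusable) .. 1.0 (lab-grade)
    warnings: list = field(default_factory=list)

@dataclass
class Node:
    name: str
    fn: Callable[[Result], Result]

    def run(self, upstream: Result) -> Result:
        out = self.fn(upstream)
        # Confidence can only degrade as data flows through the graph.
        out.confidence = min(out.confidence, upstream.confidence)
        if out.confidence < 0.5:
            out.warnings.append(f"{self.name}: low-confidence output")
        return out

def run_pipeline(nodes: list, initial: Result) -> Result:
    for node in nodes:
        initial = node.run(initial)
    return initial

# Example graph: a blur check that halves confidence on a blurry image,
# followed by a (no-op) segmentation node.
blur_check = Node("blur_check", lambda r: Result(r.data, r.confidence * 0.5, list(r.warnings)))
segment = Node("segment", lambda r: Result(r.data, r.confidence, list(r.warnings)))

final = run_pipeline([blur_check, segment], Result(data=None, confidence=0.8))
print(final.confidence)   # 0.4
print(final.warnings)     # every node past the blur check flags low confidence
```

The design choice worth noting is that confidence is monotonically non-increasing: no downstream node can claim more certainty than its inputs had, which keeps the "screening, not diagnosis" framing honest.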
Last year, USAID was shut down, shuttering a huge chunk of global humanitarian funding. Last month (Jan 22, 2026), the US formally withdrew from the WHO. Much of the world depends on aid funding and support for health endeavors, and these domestic political decisions in the US have rippling ramifications worldwide. Science is going to advance regardless, but access doesn't seem to be matching pace.
The goal here isn't to decentralize fundamental research, but to decentralize the ability to apply it: to deploy, test, adapt, iterate, and manufacture. Fundamentally, this platform should educate and empower at the community level. Though the results may not be as precise as those from fancy equipment, a ballpark, order-of-magnitude estimate always beats no information at all.
2. Governance and policy goals
Goal A: Equity Access and Education
The individual, decentralized nature of the platform melds easily into the existing workflows of biohackers in the US, but it is essential that the platform also reach underserved communities who lack that infrastructure.
A1. Serve as a platform to allow DIY-ers around the globe (who may not have access to all that bio-hackers in the US enjoy) to develop their bioengineering chops. Minimize cost and technical barriers to fabrication, imaging, and analysis.
A2. Ensure knowledge transfer and education; tools without understanding just recreate dependency (e.g. building a well in South Africa and leaving vs. training the community to build and maintain wells).
Goal B: Safety Without Gatekeeping
Prevent harm without recreating the centralized approval bottleneck that blocks access and imposes non-universal standards universally. Cultural norms in the US differ from those in India, which differ from those in Korea, and so on. Beyond basic universal safety, safety involves nuance in the context of the application and the community.
B1. Users understand the confidence level and limitations of their results using the platform.
B2. The tool itself clearly communicates unreliable outputs; safety is embedded in the design, not just tacked on at the end.
Goal C: Community Sovereignty
Who deserves to decide what’s “cautious enough”?
C1. Local communities decide which applications and standards matter for their context. Embedded in the platform is a system for local/community governance to determine and regulate standards.
C2. Governance doesn’t require permission from external institutions, remains transparent (infrastructurally), can factor in cultural and geographical nuance
3. Governance actions

Option 1: Technical platform / community norms. Open-source (think Homebrew or iGEM).
Option 2: Policy change via national regulators.
Option 3: Technical strategy involving platform developers + bioengineering researchers + the open-source community.
Purpose

- Option 1: Microfluidic devices are extremely common in bioengineering labs, but unlike the iGEM registry for biological components or GitHub for code, there's no platform or standardized way to share microfluidic designs for community use. Since the processing workflow is node-based (like TouchDesigner), it improves the clarity of processing by orders of magnitude over reading huge blocks of out-of-order code.
- Option 2: All diagnostics currently face the same approval pipeline; this would create a new "community screening" category with a lower barrier than formal diagnostics but with mandatory labeling for transparency (e.g. home pregnancy test vs. clinical blood panel). The priority is to shift the framing from "is this analysis as accurate as possible?" to "does the user understand what's happening with their health?"
- Option 3: Results need a reliability indication for transparency: this builds confidence scoring and image-quality checks into the nodes themselves to flag problems.
Design

- Option 1: A community-driven repository with Wikipedia/open-source-style collective maintenance. Paired training datasets could be built to improve AI models (e.g. lab-grade images paired with phone-camera images of the same samples). The platform can hold chip designs, node-based imaging workflows, novel image-processing blocks, AI models, and protocols, with a way for the community to hold discourse around them (feedback, ratings, safety info, etc.).
- Option 2: Would involve regulatory action: perhaps a pilot program with mandatory labeling of screening vs. diagnosis, confidence levels, and recommended follow-ups. International adoption would likely come from a governing health body like the WHO (even though the US has left), or could be implemented by regional or local health bodies.
- Option 3: Paired, living (actively updated) datasets built collaboratively by research labs and the community. Models shared on Hugging Face can be imported as nodes. On-device or local inference, so that wifi is not a strict necessity for processing. Safety is built in as visible errors in the node graph, adding to the auditability of the processing pipeline.
Assumptions

- Option 1: Enough people will want to actively contribute to maintain quality; community review rigorously flags dangerous or poor-performing designs; the node-based interface is accessible enough for non-technical users.
- Option 2: Regulators won't see this as "lowering standards" but rather as creating a new, specific category; the political will exists to prioritize access and safety (not much evidence for this right now).
- Option 3: Users will pay heed to confidence warnings; collaborative datasets include a diversity of phone models, lighting conditions, etc.; confidence calibration is accurate enough to be genuinely useful.
Risks of Failure & "Success"

- Option 1 failure: the community repo stagnates or quality falls off a cliff. "Success": the community platform is dominated by Western biohackers, marginalizing the communities it's meant to serve.
- Option 2 failure: rejected by regulators, leaving the tool in legal limbo. "Success": could codify a two-tier system (similar to World Bank interest rates on loans to European vs. African countries) where wealthy countries get "real" diagnostics while the global south is stuck with "screenings".
- Option 3 failure: poorly calibrated models essentially make up analyses with false confidence. "Success": over-reliance on the confidence metric, and conflating model confidence in a screening with a formal diagnosis.
4. Governance Actions Scoring Matrix

| Does the option: | Option 1 | Option 2 | Option 3 |
| --- | --- | --- | --- |
| Enhance biosecurity: by preventing incidents | 2 | 1 | 1 |
| Enhance biosecurity: by helping respond | 1 | 2 | 2 |
| Foster lab safety: by preventing incidents | 2 | 1 | 1 |
| Foster lab safety: by helping respond | 1 | 2 | 1 |
| Protect the environment: by preventing incidents | 2 | 2 | 1 |
| Protect the environment: by helping respond | 1 | 2 | 2 |
| Other: minimize costs and burdens to stakeholders | 1 | 3 | 2 |
| Other: feasibility | 2 | 3 | 2 |
| Other: not impede research | 1 | 2 | 1 |
| Other: promote constructive applications | 1 | 2 | 2 |
Week 2 Lecture Prep
Homework Questions from Professor Jacobson:
Nature’s machinery for copying DNA is called polymerase. What is the error rate of polymerase? How does this compare to the length of the human genome. How does biology deal with that discrepancy?
Polymerase makes about 1 error every 10^6 base incorporations. The human genome is about 3 billion (3 × 10^9) base pairs, which would lead to on the order of 10^3 errors per replication of the genome. This would be catastrophic for finely tuned machines like biological systems, so biology provides several biochemical error-correction methods. During DNA synthesis, several enzymes (DNA polymerase included) have live proofreading functionality that can "backspace" over errors by reversing direction while writing out nucleotide sequences. After the DNA strand is synthesized, mismatch repair (and related pathways such as nucleotide excision repair) can chop out regions containing an incorrect nucleotide, and polymerase can come back in to fill the gap.
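A quick back-of-envelope check of the arithmetic above, using the error rate and genome size quoted in the answer:

```python
# Expected replication errors before error correction:
# ~1 error per 10^6 incorporated bases over a ~3 x 10^9 bp genome.
genome_size = 3_000_000_000      # base pairs in the human genome
bases_per_error = 1_000_000      # polymerase: ~1 error per 10^6 bases
expected_errors = genome_size // bases_per_error
print(expected_errors)           # 3000, i.e. on the order of 10^3 errors
```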
How many different ways are there to code (DNA nucleotide code) for an average human protein? In practice what are some of the reasons that all of these different codes don’t work to code for the protein of interest?
Take a look at a codon chart and you'll see that multiple base triplets encode the same amino acid, so for a protein containing hundreds of amino acids there is an exponentially growing combinatorial tree of possible codes. In practice, however, the minutiae of the kinetics controlling these processes restrict which sequences actually produce functional protein. As amino acids are appended to the chain, secondary structure (alpha helices, beta sheets) arises from steric effects. These secondary structures link directly to the functional geometric configuration of the protein, and certain codon choices may yield structures whose steric effects on geometry negatively impact functionality.
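The combinatorics above can be made concrete: the number of DNA sequences encoding a given peptide is the product of each residue's codon degeneracy. The degeneracy counts below are from the standard genetic code; the function name and example peptide are just for illustration.

```python
# Number of synonymous codons per amino acid in the standard genetic code.
CODON_DEGENERACY = {
    "L": 6, "S": 6, "R": 6, "A": 4, "G": 4, "P": 4, "T": 4, "V": 4,
    "I": 3, "F": 2, "Y": 2, "H": 2, "Q": 2, "N": 2, "K": 2, "D": 2,
    "E": 2, "C": 2, "M": 1, "W": 1,
}

def num_encodings(peptide: str) -> int:
    """Count the DNA sequences that encode this peptide (one-letter codes)."""
    n = 1
    for aa in peptide:
        n *= CODON_DEGENERACY[aa]
    return n

# Even a tiny 4-residue peptide has many encodings:
print(num_encodings("MKLV"))  # 1 * 2 * 6 * 4 = 48
```

For a realistic ~400-residue protein, this product is astronomically large, which is exactly why kinetic and structural constraints, not combinatorics, are the limiting factor.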
Homework Questions from Dr. LeProust:
What’s the most commonly used method for oligo synthesis currently?
Phosphoramidite DNA Synthesis
Why is it difficult to make oligos longer than 200nt via direct synthesis?
Each coupling step that appends a nucleotide succeeds with some probability less than one, and these per-step probabilities multiply. As the length of the chain increases, the fraction of correct full-length product therefore decays exponentially, drastically reducing the amount of usable product you're left with.
Why can’t you make a 2000bp gene via direct oligo synthesis?
Let’s take an example error rate: a coupling efficiency of 99.5% (new bases are tacked on correctly 99.5% of the time), you do this for 2000 base pairs, and you get 0.995^2000=0.004%. The errors scale so fast you essentially wouldn’t get any properly made product. It’s more effective to make smaller chains up to 200nt in length, and splice them together.
Homework Questions from Prof. Church:
[Using Google & Prof. Church’s slide #4] What are the 10 essential amino acids in all animals and how does this affect your view of the “Lysine Contingency”?
The 10 essential amino acids in animals (which we are unable to synthesize and thus must obtain through diet) are PVT TIM HALL:
Phenylalanine (Phe, F)
Valine (Val, V)
Threonine (Thr, T)
Tryptophan (Trp, W)
Isoleucine (Ile, I)
Methionine (Met, M)
Histidine (His, H)
Arginine (Arg, R)
Leucine (Leu, L)
Lysine (Lys, K)
The dinosaurs in Jurassic Park were engineered to be unable to produce lysine as a safety measure, but the thing is, they couldn't produce it to begin with, and got it through diet! So long as a lysine-containing organism (say, some herbivore) exists somewhere in the food chain, this is not a containment strategy at all: the dinosaur could simply eat that organism to get lysine. However, as noted on the slide, non-standard amino acids (NSAAs) that don't occur in nature do exist. If a non-standard amino acid were incorporated into the dinosaurs' essential proteins, that could be an effective method of containment. Engineering metabolic dependency as a way to structurally impose biosafety is a great idea, but it needs to be thought through correctly!
Prompts Used:
“Take this obsidian markdown and help me reformat it to the markdown format of the hw assignment. Ask any questions to help me fill out missing gaps in my ideas” (attached _index.md from hw template)
Fix the links in the table of contents
Week 2 HW: DNA Read Write and Edit
Currently moving from Denver to India, will get my work up as soon as I get wifi access out of the airport