With tales of Crispr babies in the news, I've been pondering the implications of programming human beings. This may or may not have been spurred by a crime novel I recently read, which hinges on this topic. Were I offered a job to edit the genome of humans, would I take it?
I understand the question was easier in China, where the morality issue is less problematic. The overt goal doesn't sound terrible either: guaranteeing the descendants of over a billion people will not be born with genetically inherited diseases. Who wouldn't want that? Our ethics boards, though, are unlikely to approve experimentation on humans -- at least not until we see China racing ahead.
For the sake of the thought experiment, let's assume the ethics boards have granted permission for this work to go forward, and I've been hired as a programmer. How does it start?
As a good programmer, I start with a user story, something that describes the people involved and what they'd like to accomplish with the software. When I get back from a consulting meeting in Beijing, I draw up this user story:
Xi, the wholly elected leader of a prosperous, populous nation, is facing challenging stability decisions. He has great concern for the citizens and does not sleep easy knowing they are suffering. He's worried about people succumbing to addiction -- drugs, alcohol, individualism. He's successfully launched monitoring programs to identify these people and alert neighbours to their plight. Now he wishes to go further. He's looking for a genetic solution that would eradicate addiction.
Great, that sounds like a noble cause. With this insight, I can now recommend a variety of possible solutions. Naturally, I start with the latest version of the CRISPR gene editing technology.
I install the toolkit.
Immediately, I'm not impressed. The documentation is a mess: outdated, with most of the API reference left blank. StackOverflow is filled with questions about programs randomly crashing, and smug answers belittling the poster. I'll have to poke around blindly looking for something that works.
You laugh, but this is the true state of affairs for the software that runs your phones, cars, medical devices, and military hardware. Do we really expect that we'd approach human programming more rigorously? We can't stop the development of technology just because we haven't figured out programming yet. Nor can we fall back on the argument, "but these are people!" That argument holds little weight in the decisions made by giants like Facebook and Google -- who essentially already control our lives through the software they write.
Alright, I have some code.
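What might "the code" even look like? As a toy illustration only: a naive find-and-replace over a genome string. Every name and sequence here is invented for the thought experiment; real CRISPR design involves guide-RNA selection, PAM sites, and delivery mechanisms far beyond this sketch.

```python
def edit_genome(genome: str, target: str, replacement: str) -> str:
    """Replace the first occurrence of `target` with `replacement`.

    Like the toolkit it parodies, this is blunt: it edits the first
    match it finds and ignores any identical copies elsewhere.
    """
    site = genome.find(target)
    if site == -1:
        raise ValueError("target sequence not found")
    return genome[:site] + replacement + genome[site + len(target):]

# A fictional fragment carrying our imaginary "addiction" marker.
genome = "ATGCCGTAACCGGTTAGCAATT"
edited = edit_genome(genome, "TAACCGGTT", "TAACAGGTT")
```

Even in this cartoon version, the hazards of the real thing show through: one-character edits, silent assumptions about uniqueness, and no undo.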
I can't just deploy this. I need to test it. I wonder how this will even work. Do we have some emulator? Maybe, but it looks buggy. I see there's an offer from India to outsource my testing. They've got a web form and the ability to upload code. I'm best off not thinking about what happens on the backend. As long as I get my results, it's their concern, not mine.
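Before shipping anything off, there's one sanity check even a buggy emulator ought to run: does the target sequence appear exactly once? More than one occurrence means the edit could land somewhere unintended. This is a toy model of that check, not real bioinformatics; the sequences and function names are invented:

```python
def count_sites(genome: str, target: str) -> int:
    """Count overlapping occurrences of `target` in `genome`."""
    count, start = 0, 0
    while True:
        site = genome.find(target, start)
        if site == -1:
            return count
        count += 1
        start = site + 1

def safe_to_edit(genome: str, target: str) -> bool:
    """In this toy model, an edit is 'safe' only if the target is unique."""
    return count_sites(genome, target) == 1

genome = "ATGCCGTTAGCCGTTAACCGTT"
safe_to_edit(genome, "CCGTT")   # False: three copies, ambiguous edit
safe_to_edit(genome, "TTAGCC")  # True: a unique site
```

Off-target matches are exactly the class of defect a contractor's black-box testing is least likely to catch.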
I pay the Indian contractors and give them direct access to my issue system so they can file any defects they find.
Issue #18: Results in random clucking like a chicken
> CLOSED: not reproducible
> COMMENT: Second case of clucking
> REOPENED: Confirmed
> SEVERITY: Cosmetic, PRIORITY: Low
Given the randomness of the API, I'd expect mostly wacky results, but some might be promising.
Issue #37: Side-effect: blocks cerebral palsy progression
> ASSIGNED: George
> COMMENT: Identified the issue, working on a patch
> FIXED: Removed unintended side-effects
Perhaps such valuable side-effects don't come up too often in programming. But all those medical researchers out there might have opinions about the amount of research that is buried, destroyed, or locked behind paywalls. If I'm on a deadline, or trying to save face, I'm likely to keep my head down and focus on getting the job at hand done.
This raises the question: what level of completeness is okay? It's currently impossible to commit to both a deadline and a planned feature set in software development. This isn't a lack of planning ability; it's a fundamental uncertainty in how the profession works. There's no reason to assume human programming will be any different.
What defects are we willing to accept when it comes to gene editing? If the program is to cure cerebral palsy, I imagine random clucking would be acceptable. But would moral judgements even allow the discussion of side-effects? There are some correlations between high IQ and other neurological disorders. Is it acceptable to edit genes that make a trade for higher IQ, at the risk of other genetic disorders?
The self-driving car debates show that we fundamentally lack the knowledge to reconcile morality and software. And the repercussions will play out at a high political level; they might never even involve the programmer -- despite the programmer being essential to the answer.
Nonetheless, while the debate plays out, I continue coding. I won't hold off on releasing the code; until I'm explicitly told not to, I continue with my job. Plus, I feel safe. It's not like I'll ever be held personally responsible for the defects. Nobody is held liable for defects now. Consider mass data breaches: how severely are those companies punished? Even in the medical world, there's a litany of drugs with questionable side effects, yet the pharma companies behind them are still around.
I may be painting a bleak picture of gene editing. There are all sorts of positive uses for it, including the ability to eradicate genetically inherited diseases. But, realistically, how do we answer the questions about testing and defects? Should we argue whether this is even programming? It'd be hard not to call it that, since we legitimately have a paradigm called "genetic programming" that has been in use for decades.
Nothing but questions.
How about you? Would you accept a position in human programming?