

Large language models can generate defensive code, but if you’ve never written defensively yourself and you learn to program primarily with AI assistance, your software will probably remain fragile.
This is the thesis of this argument, and it’s completely unfounded. “AI can’t create antifragile code”? Why not? Effective tests and debug-time checks, at this point, come straight from Claude without me even prompting for them. Even if you’re rolling the code yourself, you can use AI to throw a hundred prompts at it asking “does this make sense? are there any flaws here? what remains untested or out of scope that I’m not considering?”, like a juiced-up static analyzer.
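
To put “debug-time checks” in concrete terms, here’s a minimal sketch of the kind of thing I mean; the function and its invariant are made up for illustration, not taken from the article:

```python
import math

def moving_average(values: list[float], window: int) -> list[float]:
    """Simple moving average with defensive checks on entry."""
    # Defensive checks: fail fast on bad input instead of
    # silently producing garbage downstream.
    if window <= 0:
        raise ValueError(f"window must be positive, got {window}")
    if len(values) < window:
        raise ValueError(f"need at least {window} values, got {len(values)}")
    if any(math.isnan(v) for v in values):
        raise ValueError("values contain NaN")

    out = [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]

    # Debug-time check: every average must lie within the input's range.
    # Python strips asserts under -O, so this costs nothing in production.
    assert all(min(values) <= avg <= max(values) for avg in out)
    return out

print(moving_average([1.0, 2.0, 3.0, 4.0], window=2))  # [1.5, 2.5, 3.5]
```

Asking an assistant to add checks like these, or to point out which ones are missing, is exactly the static-analyzer-style loop I’m describing.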






I don’t even necessarily disagree, but how do you cite the exact name of the fallacy you’re invoking without seeing the problem with what you’re saying?
There can be clear start and stop points. Why would this ever extend to regional managers, as you describe? Why would it ever extend to people you simply disagree with? To argue in good faith, you need to take the point as it stands, which stops clearly at the level of someone who is “responsible for far more death.” That is the argument the above commenter posted, and there’s no good reason to extend it any further.
Now, I’m going to step away from the context of homicide, but this is at base an incredibly naive point. Virtually every civil rights movement has succeeded through breaking laws, a practice we call civil disobedience.
“an individual who breaks a law that conscience tells him is unjust, and who willingly accepts the penalty of imprisonment in order to arouse the conscience of the community over its injustice, is in reality expressing the highest respect for law.” - MLK