
Marianne


Certainty is a Programming Bug (featuring Hillel Wayne)

Summary

What kind of programming language is Marianne trying to write? Before we go any deeper into the guts of language design, Marianne and friend Hillel Wayne debate the shortcomings of various approaches to specifying and modeling program behavior. From first order logic verification to system visualizations, nothing Marianne has used before has quite fit the bill. She's beginning to get philosophical about the nature of abstraction and wants to rebel from the goal of certainty.

Links

Bonus Context

Become a patron to support this project.

Transcript

MB: The first thing every guest has asked at the beginning of every interview is: What exactly are you trying to do?

MB: So I figure before we get into the details of how to do it, I should spend some time answering that question in detail. Talking with me today is my friend Hillel Wayne. I met Hillel a few years ago at one of my all time favorite tech conferences, Strange Loop. Hillel is… well, I'll let him introduce himself:

HW: My name is Shlomo Hillel Wayne, but I go by Hillel Wayne, almost exclusively, I am an ex-physics major, ex-web dev, now I teach formal methods--the science of writing software blueprints in a way that can be verified directly as opposed to after you build the system--full time. I also do a lot of writing on tech, tech culture, tech history and sort of the journalism and sort of culture of tech. So I sort of at this point, my job is mostly a mix of teacher, historian, journalist and writer.

MB: The science of writing software blueprints. I like the way he puts that. But there's been something that's been bugging me about formal specification for a while now that fuels this project.

MB: How to keep people from misusing them?

MB: People gravitate towards models, much like statistics, because they feel like they reflect objective truth. If you're looking to build models because you want help reasoning about the world, your system, and the decisions we face with both, then just about any modeling language will do. Python will do.

MB: But that's not what a lot of people look for when they model things. They don't want an abstraction, they want a guarantee. For that reason, formal specification stays on the fringes of software engineering. Unit tests feel like they offer a guarantee. Models of software systems, on the other hand, can run correctly as specs and then be built very differently from the model. Or the spec itself can miss some critical factors and proving it just wastes a whole bunch of time.

MB: My interest isn't in offering that guarantee, but in creating a value add that diminishes the desire for a guarantee in the first place. I want to have a modeling language where the whole purpose of writing a spec is to generate a failure. Or this is the way I explained it to Hillel:

MB: What I'm really interested in is system resilience and this idea that systems have a certain amount of complexity and they have a certain amount of coupling and that it's not about optimizing for minimum complexity and minimum coupling, it's about having the right amount in the right places for the system. And so being able to sort of spec out a system so they could see where it's complex and where it is coupled, and then look at how much leeway you are giving yourself on your inputs before the system produces an output that is not what you want. Right?

MB: So this was sort of the reason why I gravitated towards formal specification, because it seemed to have those dynamics. And then it took me a long time to figure out 'Oh, no. The reason why it's frustrating is because this language is built on the idea of mathematical correctness. On this state never being true. And I actually don't care if it's ever true.' I just want to know what are the parameters, like, when will it get to that state? Like how brittle is it? And if we change it in this way, does it become more brittle or less brittle? Right?
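(To make that brittleness framing concrete, here is a minimal sketch in plain Python rather than in any spec language. The queue model, its capacity, and the arrival rates are all invented for illustration; the point is the sweep, not the system.)

```python
# A minimal sketch of the "brittleness" framing in plain Python rather than a
# spec language. The queue model, its capacity, and the arrival rates below are
# all invented for illustration.

def queue_overflows(arrival_rate, service_rate, capacity, steps=1000):
    """Simulate a fixed-rate queue and report whether it ever exceeds capacity."""
    backlog = 0.0
    for _ in range(steps):
        backlog = max(0.0, backlog + arrival_rate - service_rate)
        if backlog > capacity:
            return True
    return False

# Instead of asking "is overflow impossible?", sweep the input and ask how much
# arrival-rate leeway we have before the bad state becomes reachable.
service_rate, capacity = 10.0, 50.0
for arrival_rate in [x / 10 for x in range(95, 110)]:
    brittle = queue_overflows(arrival_rate, service_rate, capacity)
    print(f"arrival={arrival_rate:5.1f}  overflow reachable: {brittle}")
```

Instead of a yes/no proof that the bad state is unreachable, the sweep shows how much input leeway exists before it becomes reachable.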

MB: So that's sort of--there are languages that do this, but they're by and large proprietary or they're based entirely on being able to visualize systems.

MB: Wait... wait. We're throwing around a couple of different terms that may be very foreign to you…

MB: So now the first thing that I have always wanted to know is formal specification, formal methods, formal verification, are all these terms synonyms or is there like an umbrella hierarchy between them or....?

HW: Sort of, and this is where it would have been really nice if I managed to-- I just recently moved to a new place, to get my whiteboard up and running so I could draw the little charts for you. But--

MB: This is a podcast, those little charts wouldn't help anybody.

HW: Well, it would help you understand.

HW: So one thing to keep in mind is that, like sort of anything involved in tech, there's a lot of communities doing a lot of different things in the space... at different times. So you often see terms that sort of bubble up that certain groups use, that other groups use a different term for, and then sort of they get smushed together in weird ways. This is a highly broad overview, incredibly inaccurate, but it's a good enough lie to go on for now. Formal methods is just the broad field of anything involved that is called formal methods. Formal specification is the art of writing rigorous descriptions of what something should do, either what a design should do or what specific code should do. So you write a formal specification of a design using, say, TLA+ or you write a formal specification of code using, I don't know, SPARK or like Idris. Then formal verification is the art of taking the thing and taking the specification and showing they're connected.

MB: When he says "showing they're connected" the typical way to do that is to write a spec that is parsable and can be compiled into an executable model using First Order Logic.

MB: I've written about First Order Logic before, see my blog post The Subtle Power of Booleans to get up to speed, but basically you can think of First Order Logic as a puzzle using true/false statements. If A is true, and B is true, then C must be false--that sort of thing. The way this works in formal specification is that the spec defines what the system is doing and a set of assertions about states that should be impossible. So for example: here is how my system validates and approves transfers of money between accounts; check that there is no order of events that results in a negative account balance.
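(A minimal sketch of that idea in plain Python, assuming made-up accounts, balances, and transfers; a real model checker automates this kind of exhaustive search at much larger scale.)

```python
# A minimal sketch in plain Python rather than a spec language. The accounts,
# balances, and transfers are invented for illustration.
from itertools import permutations

def apply_transfer(balances, transfer):
    """Apply one (source, destination, amount) transfer and return new balances."""
    source, destination, amount = transfer
    updated = dict(balances)
    updated[source] -= amount
    updated[destination] += amount
    return updated

def violates_invariant(balances):
    """The 'impossible' state we assert against: any negative account balance."""
    return any(amount < 0 for amount in balances.values())

initial = {"alice": 50, "bob": 50}
transfers = [("alice", "bob", 30), ("alice", "bob", 30), ("bob", "alice", 40)]

# Exhaustively try every ordering of the events; any ordering that reaches a
# negative balance is a counterexample.
for order in permutations(transfers):
    balances = initial
    for step, transfer in enumerate(order, start=1):
        balances = apply_transfer(balances, transfer)
        if violates_invariant(balances):
            print(f"Counterexample after step {step}: {order[:step]} -> {balances}")
            break
```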

MB: If this sounds super impressive, I should point out that I'm actually really bad at it.

MB: ...No really, part of the reason Hillel and I became friends is because he spent so much time helping me fix silly problems with my specs when I started.

MB: I have a really hard time thinking of logic as being math. I'm exposed to it as a math concept. And then when I dig into it, I think to myself, this is kind of more like... I don't-- I don't know-- it's not fair to call it philosophy, but like it doesn't feel like math to me. And I'm wondering if I'm missing something there.

HW: Can you explain why it doesn't feel like math to you?

MB: I think because... I tend to see it as being about relationships, not about numbers, right?

MB: You know, like you have a set of certain things, do they all have the same properties, right?

HW: Are you familiar with the Seven Bridges of Königsberg?

MB: I am familiar with the Seven Bridges of Königsberg?

HW: Is that math?

MB: Uhhh...No?

HW: Ok, it's category theory math.

MB: What is Category theory?

HW: Ok, cool, we can skip that.

MB: (laughs)

HW: So this is something that I actually am very adamant about, because I think that there's this really weird divide in like programming where you have basically two camps essentially. You have the people who are like, we don't need math. Math is like pointless unless you work in specialist domains. The other people are like math is beautiful and shows you the underlying, like, abstractions of programming and how everything is secretly a type and then how there's like-- blah, blah, blah, blah, blah. And I think that there's the third camp, which is the camp that I belong to, which is that math is directly useful to programmers in day to day life. But the problem is that the math that is sort of presented as math by both camps is a very small slice of what math is, because math is fundamentally a language we've developed to explain things. And almost everything in math comes from one of two places, either trying to build tools to model reality better or to build tools to model the tools that we just built better.

MB: Yeah. So I think it's less 'do we need math' or 'how much math do you need to be a programmer' and more... There is this cultural and psychological and emotional context around math as a subject. Right?

MB: So what you were saying before about how some people find the specifying of design unsatisfying because it doesn't connect to the code, right? It is the design process. I think that's because people take a false sense of security from math. Right? They think like, well, it's math, ergo it's the truth. There are no opinions there. There is no subjectivity at all. It's math. Ergo, I don't have to defend myself.

HW: --I think it's... I think it's more just that you can't automatically translate it into code. You have to do that manually and then you can make mistakes.

MB: True. But like that goes back to it. Like, well, you could have made mistakes when you wrote the design in the first place. You probably did make mistakes when you wrote the design in the first place. But there's a feeling of like, I ran it and the math proved to me that it's correct.

HW: Right.

MB: And I think there's the other side of it that people are also super intimidated by the math. And so to take something like first order logic, that is a very different fish from, let's say, algebra, and putting it under the heading of math--

HW: We're talking algebra-- for the record, just because I'm a math head-- we're talking about algebra as in like high school algebra. Right?

MB: Yes.

HW: Not like algebra as in college algebra. Ok.

MB: Yeah. As in, as in, high school algebra.

HW: Polynomial rings, algebra of polynomial rings. FINITE polynomial rings!

MB: All right. I give up. You win. I think it creates-- people self select, right?

HW: Right.

MB: People for whom first order logic and some more concepts would be extremely valuable in the practice of their discipline avoid it because they're told that it's math and they assume that all math is like what they learned in high school that they found difficult or confusing or unpleasant. Right? And it's like sometimes I don't want to like-- if we continue this conversation much longer we're going to have to put it on TikTok, because this is apparently where people discuss the philosophy of what is math?

MB: (laughs) But, it's like, I did. I do feel very strongly like, you know, why are these concepts considered math versus philosophy, for example?

HW: Well, for a long time, science was considered philosophy.

MB: Really? When did it make the switch from being something other...

HW: I think it must have been the 1880s when it-- when it stopped being natural philosophy and became science. I forget the exact details, but for a long time, math was philosophy, too. And a lot of linguistics used to be philosophy of language, like philosophy essentially is just this big, like, miscellaneous bin where we shove everything that we don't know how to classify.

MB: Well doctor of philosophy, right?

HW: This is not nice to philosophy and like rude to philosophy, but like there is not as much difference between philosophy and the disciplines that spawned from philosophy as we tend to think.

MB: My mind's been lingering on the edge of this problem. A few weeks ago I rediscovered some literature I had collected on the cybernetics movement of the '40s and '50s. Cyberneticians were very into modeling systems. They believed that everything could be modeled as systems of feedback loops and potentially quantified. From a modern perspective, where we think of math and science as truth and human behavior as faulty, variable, inherently illogical, emotional, and biased, cybernetics seems like a pseudoscience. But this assumes that math is objective and that its precision comes from natural laws that are themselves mathematical. In other words: did humans invent math or did humans discover math?

MB: But... what if math is not infallible?

MB: For example: we cannot really accurately represent π but for most cases 10 digits is precise enough. All calculations done with π have some sort of rounding error built into them, but that never seems to matter much.
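(To put a rough number on that, here is a quick back-of-the-envelope check in Python. The choice of Earth's circumference, and the approximate 6,371 km radius, is my own, for illustration.)

```python
import math

# Assumed figure for illustration: Earth's mean radius, roughly 6,371 km.
EARTH_RADIUS_M = 6_371_000

pi_10_digits = 3.141592653  # pi truncated to ten digits
exact = 2 * math.pi * EARTH_RADIUS_M        # "exact" at double precision
approx = 2 * pi_10_digits * EARTH_RADIUS_M  # using the truncated value

# The rounding error is real, but at planetary scale it amounts to millimeters.
error_mm = abs(exact - approx) * 1000
print(f"Circumference: {exact / 1000:.3f} km, error from truncated pi: {error_mm:.1f} mm")
```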

MB: In 2010 at the World Science Festival, Astrophysicist Mario Livio gave a full set of other examples:

ML: There was this astronomer, Johannes Kepler, and he made observations 400 years ago, and they were not very accurate. They were accurate for his time, but not very accurate. They were accurate to within about four percent. Yet from those relatively scanty observations, Sir Isaac Newton managed to write a mathematical law of gravity that already by the 1950s was shown to be accurate to better than one part in a million. In fact, in an experiment done in 2008, they showed that the inverse square law of Newton holds down to a distance of fifty six microns. One micron is one millionth of a meter. These are distances where Newton could have had no idea his mathematical law should hold true. What is it that gives mathematics such powers? We have a theory of everything that's electric and magnetic; it's called quantum electrodynamics. It's a highly mathematical theory. You know, electrons that move in atoms, they are like little magnets. You can use this theory to calculate the strength of this magnet. We can calculate the strength to parts per trillion. In 2006, this magnetic strength of the electron was measured to parts per trillion, and the two results agreed to within eight parts per trillion.

MB: What I really love about that clip is that Dr Livio talks about those things to convey the awesome impressive accuracy of math, but people like me hear it and think "...wait a minute ... so if you go smaller than that does math just stop working?"

MB: What we like to think of as mathematical laws that govern the function of the universe are abstractions that eventually break down, but the point at which that breakdown occurs is at a scale so tiny that, for most practical applications of math, the probability of it mattering is insignificant. If you throw a dart at a dart board, it doesn't matter where you hit the bullseye, just as long as you are within the boundaries of the bullseye.

MB: Mathematical precision is a natural product of sciences maturing. As we continue to study, we make more observations and group them into patterns. Eventually we begin expressing these patterns in math. We continue to document more observations, refining the math until it reaches a level of accuracy at which the gap between the abstraction we've created and "reality" is irrelevant. What gives math its magical quality is that occasionally the mathematicians develop the model before the reality they are modeling has been observed.

MB: In my opinion, building models looking for correctness, proofs, and guarantees is the wrong way to model. Models should be thought experiments letting us test the relationships between various abstractions so that we know where to look for real world observations for further study. But I also believe that technology suggests to its user how it should be used. If users are using modeling languages as proofs it might be because the languages themselves encourage that behavior.

HW: The biggest barrier by far to formal methods adoption, at least on a design level, is UX.

MB: Yeah,

HW: If you could improve the UX--

MB: What I find really interesting--I'm going to interrupt you for a second, sorry. What I find really interesting about UX is that it is inherently subjective, right? Like I found TLA+'s syntax to be really, really confusing and hard to grasp what it was trying to get me to do.

MB: And then hard to remember, it was like it had to be just straightforward memorization on some of this stuff.

HW: Yeah.

MB: And then I started like-- in a different train of thought for a different purpose I started learning first order logic and as soon as I started learning first order logic I kind of went-- Oh OK, wait. Oh... I understand! I understand why the syntax is the way it is.

MB: It's like you have basically taken the user experience of first order logic and put it into a programming language. And that is why you want me to write these things the way they are. And now I understand all the errors that sort of came before.

MB: So I feel like if you were coming from the perspective of having a good foundation in first order logic and maybe some other aspects of formal computer science and/or math that you would look at TLA+ and go, oh, this is... I'm very comfortable in this medium. Whereas if you're coming from 'I just graduated from a boot camp or I'm a self-taught software engineer and like the stuff that I'm familiar with is Python and Ruby,' you would look at TLA+ and go, what the hell is this? And why doesn't it work the way I think it should?

HW: So your basic argument is that TLA+ syntax might not be an issue. It's more of just the background you have.

MB: I think I'm arguing that there is no such thing as good UX. It's very conditional.

HW: Here's my counterproposal.

MB: Yes?

HW: Running TLA+ from the command line and having the error output as JSON, you can pipe it to some other tool.

MB: And what would that other tool be?

HW: Well, people could build other tools, it could be something that made it easier to sort of explore. You could have it be a visualizer. You could have it be like pumped back into a language server provider. That way you can have the errors appear in your editor. Having the ability to just like step through it step by step or drop it into a table. Like that's the UX I'm talking about. Command line tooling, like better error messages--when you get a syntax error in like pretty much any formal method that I've used, the errors are always very, very, like, obtuse. That could be improved dramatically.
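(As a rough sketch of the kind of downstream tool being described: nothing here invokes a real formal methods tool, and the JSON layout below is entirely made up for illustration, not any actual output format.)

```python
# A rough sketch of a downstream tool that consumes a counterexample trace as
# JSON. The shape ("states", "action", "variables") is invented for
# illustration; no real checker's format is implied.
import json
import sys

def print_trace(counterexample):
    """Pretty-print a counterexample trace, one state per step."""
    for step, state in enumerate(counterexample.get("states", []), start=1):
        print(f"Step {step}: {state.get('action', '?')}")
        for name, value in sorted(state.get("variables", {}).items()):
            print(f"    {name} = {value}")

if __name__ == "__main__":
    # Usage sketch: hypothetical-checker spec | python print_trace.py
    print_trace(json.load(sys.stdin))
```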

MB: Right.

HW: When I say UX I don't mean like using mathematical symbols, which I personally strongly prefer to using programmatic symbols, which a lot of programmers find easier. I'm talking about things like: not having to use the IDE, being able to script out running multiple models, bash script, things like that.

MB: OK, cool. Yeah, no, that's fair.

MB: You've been listening to Marianne Writes a Programming Language. Just a quick reminder that patrons get access to all my extras. My conversation with Hillel Wayne, for example, covered so much more ground than what's in this episode.
