Notes I took during the r0ml talk:
James Burke – “The Day the Universe Changed”
People look down their noses at primitive folks. But what would it look like if the earth were flat? It would look the same. The difference is that we came up with better conceptual models
Same for computing paradigm shifts
Scientific revolutions happen in a punctuated equilibrium
First age – genesis – computers were new. What are they for? Are they useful?
Efficiency for basic tasks was the focus
Steven Levy’s “Hackers” talks about this
Second age – all about adoption.
Third age – proper functioning
Three truisms that may no longer be true:
(Under new paradigm)
1. Reuse is a bad idea
2. Portability is a silly idea
3. Testing is an uninteresting waste of time
We care more about proper functioning than adoption now
Under new paradigm these three all shift/reverse
Reuse would be a good idea if, when something broke, you could discard the old one and replace it with a new one – you don’t want to fix it
“Bastille molar” (sp?) law: smaller changes have a higher error density than larger ones
Figuring out what the old thing is doing is the bulk of the problem
If you make a small change, you probably spent less time thinking about what the whole thing did.
David Parnas – “On the Criteria To Be Used in Decomposing Systems into Modules” – awesome paper
Modular way 1: take the things the program does and make each of them a module
Modular way 2: decompose so that when you make a change to the program, you have to change the fewest number of modules
The second is the better criterion according to Parnas
Replace instead of reuse
If it worked properly, I wouldn’t need the source code. Hence open source is a bad idea – open source is premised on the idea that you will have to fix things in the first place
Immutability – don’t ever change this thing
Either it works right – so don’t change it
Or it doesn’t work right – so throw it out and get a better one
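A minimal Haskell sketch of this “never fix, only replace” stance (the `Config` type and `withRetries` are my own illustrative names, not from the talk): an update never mutates a value, it constructs a replacement, and the original survives untouched for anyone still holding it.

```haskell
-- Illustrative only: a value is never patched in place.
data Config = Config { retries :: Int, verbose :: Bool }
  deriving (Show, Eq)

-- "Fixing" a config constructs a replacement; record-update syntax
-- copies the old value with one field swapped. The old Config is
-- still there, unchanged, for anyone who held a reference to it.
withRetries :: Int -> Config -> Config
withRetries n c = c { retries = n }
```

e.g. `withRetries 5 (Config 3 False)` yields a fresh `Config 5 False` while the original `Config 3 False` is unchanged.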
Big Data – has nothing to do with bigness
Volume. Velocity. Veracity. Validity
Big data is about immutability.
Prior worries were adoption and efficiency
CRUD – create, read, update, delete
Never change anything and record everything
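A toy sketch of “never change anything and record everything” in Haskell (the bank-balance example and all names here are mine, not the speaker’s): current state is derived by folding over an append-only event log, never by overwriting a stored value.

```haskell
-- Illustrative append-only log: events are recorded, never updated
-- or deleted.
data Event = Deposited Int | Withdrew Int
  deriving (Show)

-- One step of state transition; pure, no mutation anywhere.
apply :: Int -> Event -> Int
apply bal (Deposited n) = bal + n
apply bal (Withdrew n)  = bal - n

-- Current state is a function of the whole history. "Updating" the
-- balance means appending an event, not overwriting a value.
balance :: [Event] -> Int
balance = foldl apply 0
```

“Deleting” or “correcting” is just another appended event, so the full history is always recoverable.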
Operational reasoning vs postulational reasoning
Most of our stuff is operational
“What would this do if…?”
Postulational reasoning says “I’m going to postulate that this works”
I have a theorem and then I prove it
What do I have to do to make that work in real life?
Haskell’s answer: stop changing your variables
No loops, etc.
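For example, where an imperative loop would reassign an accumulator variable, Haskell threads the accumulator through a fold or through recursion – the function below is my own minimal illustration:

```haskell
-- No loop, no reassignment: the accumulator is threaded through a fold.
sumSquares :: [Int] -> Int
sumSquares = foldr (\x acc -> x * x + acc) 0

-- The same thing with explicit recursion; every name is bound exactly once.
sumSquares' :: [Int] -> Int
sumSquares' []       = 0
sumSquares' (x : xs) = x * x + sumSquares' xs
```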
Why test a program that you’ve proven is going to work?
Question: What does a proof of a program’s correctness look like?
Answer: I don’t know. We’re on the cusp. Haskell fulfills the role in this evolution that Smalltalk filled
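One concrete flavor such a proof can take is equational reasoning, which immutability makes possible: since a definition can never change out from under you, you can substitute equals for equals. A toy example (the `double` function is my own illustration, not from the talk):

```haskell
double :: Int -> Int
double x = x + x

-- Claim: double (a + b) == double a + double b, for all a, b.
-- Proof by rewriting, valid only because `double` is a fixed equation:
--   double (a + b)
-- = (a + b) + (a + b)     -- unfold the definition of double
-- = (a + a) + (b + b)     -- associativity and commutativity of (+)
-- = double a + double b   -- fold the definition back up
```

With mutable state no such substitution is sound, because the “definition” of a value depends on when you look at it.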
“Given enough eyeballs all bugs are shallow” – this is untrue
Inspecting software finds 3 times the number of bugs for the same $ spent
Code review is better than testing
3 people is the sweet spot
Question: How will reaching the cusp of Moore’s law change the future of development?
Answer: Moore’s law was all about adoption
Some of the things we were worried about at the beginning of Moore’s law will become things we worry about now – in a new context
We’ll care about the hardware we’re running on, because reliability is the key desired trait
We’ll spend a lot of time on software, not worried about how fast it runs – we’ve passed that critical point. We’ll be more worried about reliability
Question: what do you mean by immutability in the code? And is that not version control?
Answer: TurboTax upgrades by installing 100 patches instead of download-delete-replace like most programs. This is the future of deployment, because it minimally interferes with the user and you don’t have to shut the program down.
“Can you update a program without restarting it”
We think about that process as a mutability problem.
Arbitrarily small events can cause arbitrarily giant change
We used to think of version control as this place where you keep track of the changes to your code
“An ordered series of patches to a program that does nothing”
Can be applied in any order, taking dependencies into consideration.
Question: We don’t want to live in a world where everyone writes their own software…?
You prevent the Shellshock bug by writing your own
Counterpoint: so all 10,000 tax companies should write their own software?
Answer: How hard should it be to write these things? And what tools should we bring to the equation? What you want is a huge program that does stuff correctly
He writes Haskell programs by calculating dependencies, taking the portions of those packages he needs, and pasting them into his own code. This makes the program independent. By removing the dependencies, you increase stability and protect yourself from upstream instability
Glue code takes 3x as long to write and contains 3x as many errors
Literacy – writing vs reading
We prioritize writing over reading now (in the non-computing world)
In coding we don’t read because we can just copy-paste. If we read the code we’d understand it better
In adoption phase, everything conformed to standards to increase portability
Object oriented was like software “chips”
Question: You drew Moore’s law – if we switch from electrons to photons and the curve doesn’t keep this shape, does that change anything?
Answer: the curve is the adoption curve. The arguments I’m making are independent of Moore’s Law, because it’s a direct product of adoption. Due to the paradigm shift, it’s no longer relevant. The problem isn’t cheaper/faster; the problem is reliability.
Question: what’s after big data?
Answer: i.e. what’s the 4th age of computing? If you look at that S-curve, there are only 3 sections of it. Everything is divided into 3 parts. Then the end.
Question: so what about operating systems. If everyone is writing their own stuff… ?
Answer: look up Clive. I call this phase of the process “sedimentary composition.” It’s all about putting parts together and building huge dinosaurs. Somewhere along the line the rats come in, and smart beats big.
In the modern paradigm you reduce the number of components for the sake of reliability (Occam’s razor)