Stuff written by other people (that I liked)
tearfully reminds one of the old days of the pre-September Internet.
Linus “doesn't do github pull requests” and thinks it doesn't add much to git aside from hosting. I agree.
“Quantum Cryptography is as awesome as it is pointless” – Bruce Schneier
I've been reading a bit about Elliptic Curve Cryptography lately, and stumbled across a fantastic set of slides by Joseph H. Silverman. These slides are the only introductory material I've been able to find that touches on the question of why it is safe to use smaller keys with ECC than with the multiplicative groups of modular arithmetic or finite fields. My own interpretation of the answer is that they simply needed a group for which the index calculus algorithm doesn't work. I'm left wondering whether ECC lacks efficient DLP algorithms simply because it hasn't been around as long as the other groups; apparently the crypto community has some sort of reason for thinking this isn't the case.
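To make the key-size point concrete, here is a sketch (my own, in Python) of baby-step giant-step, a *generic* DLP attack that uses only group multiplication and so applies to any group, elliptic curves included, in about √n operations. Index calculus does asymptotically better than √n in Z_p^*, which is why those groups need much larger keys for the same security level.

```python
import math

def pow_mod(b, e, p):
    """Fast modular exponentiation (equivalent to the built-in pow(b, e, p))."""
    r = 1
    b %= p
    while e > 0:
        if e & 1:
            r = r * b % p
        b = b * b % p
        e >>= 1
    return r

def bsgs(g, h, p, n):
    """Solve g^x = h (mod p) in O(sqrt(n)) steps, where n is the order of g.

    This is a generic attack: it never looks inside the group elements, so
    it works just as well on an elliptic curve group.  Index calculus, by
    contrast, exploits factoring group elements into small primes, a notion
    Z_p^* has but elliptic curve groups apparently lack.
    """
    m = math.isqrt(n - 1) + 1                       # m = ceil(sqrt(n))
    baby = {pow_mod(g, j, p): j for j in range(m)}  # baby steps: g^j
    giant = pow_mod(g, n - m, p)                    # g^(-m), since g^n = 1
    y = h % p
    for i in range(m):                              # giant steps: h * g^(-i*m)
        if y in baby:
            return i * m + baby[y]
        y = y * giant % p
    return None

print(bsgs(2, 55, 101, 100))   # 37, since 2^37 = 55 (mod 101)
```

The √n bound is what makes a 2k-bit elliptic curve key give roughly k bits of security against the best known (generic) attacks.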
Scott Aaronson's witty mockery of the academic publishing companies.
""" Wait, people are still using XML? How do they get any work done?
I am resolutely opposed to that barbarous platinum-iridium relic. Let the fiat kilogram float!
Facebook and Twitter are nifty, but please don't call them technology companies (unless you think that MTV, Warner Brothers, and EMI are tech companies too!)
Phil Wadler's excellent Proofs are Programs: 19th Century Logic and 21st Century Computing and Mike Gordon's From LCF to HOL should be required starting points for anybody interested in programming languages and type systems.
How to get Proof General to invoke coqtop with -impredicative-set: with newer versions of Proof General you need to put this at the end of your script:
*** Local Variables: ***
*** coq-prog-name: "coqtop" ***
*** coq-prog-args: ("-I" "." "-emacs" "-impredicative-set") ***
*** End: ***
William Wulf's Are We Scientists or Engineers? (or informaticians?).
Phil Wadler's law of language design
The Two Cultures of Mathematics explains the subtle divide between “combinatorialists” and logicians in some math cultures.
Yaron Minsky's thoughts on how to set up distributed version control and “build daemons”.
Should I go to Grad School? is required reading for anybody submitting a grad school application.
Andrej Bauer's excellent description of the Kleene Tree
Random Type Theory Links
An example of a practical use for weakly positive (positive but not strictly-positive) types in System F.
Frank Pfenning's research on subtyping and intersection types is essential reading; although he has many papers on this subject, it seems that only this recent one gives an in-depth explanation of the motivation for separating “sorts” from types.
Dave Menendez has a great example that nicely summarizes one of the key advantages of intersection types:
twice :: ((a -> b) ∧ (b -> c)) -> (a -> c)
twice f = f . f
In the Calculus of Constructions, there is a very delicate relationship between proof-irrelevance, excluded middle, and various weak axioms of choice. This relationship is most succinctly revealed by the Hurkens Paradox.
There are numerous papers on this topic, but I've found only one that's approachable for a beginner: Barbanera and Berardi's Proof-irrelevance out of Excluded-middle and Choice in the Calculus of Constructions.
In particular, this paper explains how excluded middle and a strong choice axiom force proof-irrelevance. The extraction principles in Set can be viewed as a weak form of choice (definite description), and excluded middle lets one case-discriminate on whether or not a type is inhabited; this gives some intuition for the reason why -impredicative-set is incompatible with classical reasoning.
PEGs also have difficulties with what is called “language hiding”. Essentially, a prioritized choice
A = B / C
will never attempt to match C against a given region of text if B matches any region with the same starting point – but perhaps a different ending point.
There are two common examples of this:
A = "a" / "ab"
B = "a"* ("ab" / "c")
In both cases, the “ab” alternative can never succeed: in the first, the prioritized choice commits to “a” before “ab” is ever tried; in the second, the greedy repetition has already consumed the leading “a”.
Unfortunately, problems of this sort become much more difficult to debug in larger, more complex grammars. When grammars are built out of modules written by different authors, the problem becomes nearly intractable: imagine a production A = B / C where B and C are written by different authors. Changing B in a manner which strictly expands the set of strings it can match can cause A to go from matching a given string to no longer matching it.
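The two examples can be checked mechanically. Here is a toy PEG interpreter (a sketch of my own, not any real library's API) that makes the hiding visible:

```python
# A toy PEG interpreter, just enough to demonstrate language hiding.
# A grammar expression is a tuple tagged with its operator.

def match(expr, text):
    """Try expr at the start of text; return the remaining text, or None."""
    op = expr[0]
    if op == "lit":                 # literal string
        _, s = expr
        return text[len(s):] if text.startswith(s) else None
    if op == "choice":              # prioritized: b is tried only if a fails
        _, a, b = expr
        rest = match(a, text)
        return rest if rest is not None else match(b, text)
    if op == "seq":                 # sequence: a, then b on the remainder
        _, a, b = expr
        rest = match(a, text)
        return match(b, rest) if rest is not None else None
    if op == "star":                # greedy repetition, no backtracking
        _, a = expr
        while True:
            rest = match(a, text)
            if rest is None or rest == text:
                return text
            text = rest
    raise ValueError(op)

# A = "a" / "ab": on "ab" the first alternative wins and leaves "b" unread,
# so A never consumes the whole string.
A = ("choice", ("lit", "a"), ("lit", "ab"))
print(match(A, "ab"))    # "b" remains; the "ab" branch was never tried

# B = "a"* ("ab" / "c"): on "aab" the star greedily eats every "a",
# leaving "b", on which neither "ab" nor "c" can match.
B = ("seq", ("star", ("lit", "a")), ("choice", ("lit", "ab"), ("lit", "c")))
print(match(B, "aab"))   # None: the parse fails outright
```

Note that `match(B, "aac")` succeeds, which is exactly why the failure on "aab" is so easy to miss in a larger grammar.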
Also, as far as I know, no packrat parser currently supports indirect left recursion via grammar transformation. Packrat parsers can be modified to handle left recursion without transformation, but only by sacrificing linear-time parsing.
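For comparison (my own illustration), direct left recursion has a standard rewrite: a rule such as
A = A "x" / "y"
becomes
A = "y" "x"*
and both describe "y" followed by any number of "x"s. The indirect case, e.g. A = B "x" / "y" together with B = A "z", is the one that needs a whole-grammar transformation, and that transformation is what no packrat parser seems to implement.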