Over Chinese New Year I rediscovered an old favorite that I want to share with you today. Feynman Lectures on Computation, while not a novel (I said I would share novels), is so good that I don’t want to leave it sitting on my bookshelf any longer.
From 1983 to 1986, Richard Feynman taught a course at Caltech called “Potentialities and Limitations of Computing Machines.” Although computing then was quite different from today, most of the material is timeless, and it is explained in the brilliant yet whimsical manner that Feynman is so famous for.
Take, for example, the passage where Feynman first introduces the concept of the Turing machine:
Turing’s idea was to make a machine that was kind of an analogue of a mathematician who has to follow a set of rules. The idea is that the mathematician has a long strip of paper broken up into squares, and what he sees puts him in some state of mind which determines what he writes in the next square. So imagine the guy’s brain having lots of different possible states which are mixed up and changed by looking at the strip of paper. After thinking along these lines and abstracting a bit, Turing came up with a kind of machine which is referred to as – surprise, surprise – a Turing machine. We will see that these machines are horribly inefficient and slow – so much so that no one would ever waste their time building one except for amusement – but that, if we are patient with them, they can do wonderful things.
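To make the idea concrete, here is a little sketch of my own (not from the book): a Turing machine really is just a table of rules mapping (state, symbol) to (symbol to write, direction to move, next state), plus a tape. This toy machine increments a binary number, with the head starting on the rightmost bit.

```python
# A minimal Turing machine sketch (my own illustration, not Feynman's).
# The "program" is a transition table:
#   (state, symbol) -> (symbol_to_write, move_direction, next_state)

def run_turing_machine(rules, tape, state="carry", blank="_"):
    """Run until the machine enters the 'halt' state; return the final tape."""
    cells = dict(enumerate(tape))   # tape as a sparse dict of squares
    head = len(tape) - 1            # head starts on the rightmost symbol
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Rules for binary increment: flip trailing 1s to 0 while carrying left,
# then write a 1 where the carry is finally absorbed.
increment_rules = {
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),   # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),   # carry past the leftmost bit
}

print(run_turing_machine(increment_rules, "1011"))  # prints 1100
```

Even this three-rule machine shows why Feynman calls them “horribly inefficient”: incrementing a number takes one full step per carried bit, yet nothing beyond this table-lookup loop is needed in principle.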
What can you expect from this book? Well, Feynman himself explains it best: “These lectures are about what we can and can’t do with machines today, and why.” They are told in the style of imagining “you are explaining your ideas to your former smart, but ignorant self, at the beginning of your studies!” Absolutely fun and refreshing!
If you would like to read this book, tell three people about my company’s latest project, WikiReader, and then send me an email. Before next week, I’ll choose a name at random and send the winner my copy.
Shipping, anywhere in the world, is on me.