My first ebook, “Practice Makes Python” — containing 50 exercises that will help to sharpen your Python skills — is now available for early-bird purchase!
The book is already about 130 pages (and 26,000 words) long, containing about 40 exercises on such subjects as basic data structures, working with files, functional programming, and object-oriented development. But it’s not quite done, and thus I’m calling this an “early-bird” purchase of the book: Not all of the exercises are ready, the formatting isn’t quite there yet, and PDF is the only format available for now. That said, even in this draft version, there is more than enough here to help many Python developers to gain fluency and improve their skills with the language.
Anyone who purchases the book now can use the coupon code EARLY to get a 10% discount. Perhaps it goes without saying, but anyone buying the book now will also get all updates and improvements, free of charge, as they occur over the coming weeks. And anyone who finds that they didn’t get value from the book is welcome to e-mail me and say so — and I’ll refund 100 percent of your purchase price.
The basic idea behind “Practice Makes Python” is that learning Python — or any language — is a long, slow process. Even the best courses cannot possibly give you enough practice with the language for it to feel natural. That only comes with practice. Most people end up practicing, as it were, on projects at work. My goal with this book is to give people who have taken Python courses a chance to become more familiar with the language.
My PhD studies in Learning Sciences taught me a great deal about how people learn, and one of the most important lessons was that of “constructionism” — that one of the best ways to learn is through the creation of things that are important to the individual. I have tried to make the exercises in “Practice Makes Python” interesting and fun, as well as relevant to what people do with Python on a day-to-day basis. Perhaps you won’t be creating Pig Latin translation programs in your day job, but the techniques that you learn from writing such programs in the book will undoubtedly help you out. Certainly, by working through the exercises — not by reading the answers and discussions! — you will learn a great deal about Python programming.
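To give a flavor of what such an exercise involves (this sketch is my own illustration for this post, not an excerpt from the book), a minimal Pig Latin translator in Python might look like this:

```python
def pig_latin(word):
    """Translate one lowercase word into Pig Latin: words starting
    with a vowel get "way" appended; otherwise the leading consonant
    moves to the end, followed by "ay"."""
    if word[0] in 'aeiou':
        return word + 'way'
    return word[1:] + word[0] + 'ay'

def translate(sentence):
    """Translate each whitespace-separated word in a sentence."""
    return ' '.join(pig_latin(word) for word in sentence.split())

print(translate('python is fun'))  # ythonpay isway unfay
```

It looks like a toy, but writing it exercises string indexing, slicing, conditionals, and comprehensions, which is precisely the sort of everyday fluency the exercises aim to build.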
If you recently took a course in Python, or even if you have been working with it for up to a year, I believe that “Practice Makes Python” will give you the knowledge and confidence you need to master this fun and interesting language. These exercises are based on the many Python courses I have taught in the United States, Europe, Israel, and China over the years, and have proven themselves to help programmers start to really “get” Python.
I’d be delighted to hear what you think about “Practice Makes Python,” and how it can help to improve people’s Python programming skills even more. Contact me at firstname.lastname@example.org if you have thoughts or ideas.
In order to get an undergraduate degree from MIT, at least when I was there, you needed to take a certain number of humanities and social-science courses. This was to stop you from coming out a complete one-dimensional student; the idea was that rounding out your education with knowledge from other fields was good for you as a person, and also good for you as an engineer or scientist. (And yes, I realize that not everyone at MIT studied science or engineering, but those were the overwhelming favorites.) One of the most popular social sciences that people took was economics — which is a social science, although MIT’s version actually included a great deal of math.
At the time, I tended to be quite dismissive of economics. I didn’t think that it could possibly be interesting, and I couldn’t understand why so many of my friends were taking so many courses in that field. What insights could they gain?
And then, just before I graduated, MIT Press had one of its amazing sales on overstock books. I bought “The Age of Diminished Expectations,” by Paul Krugman, then a professor at MIT. Reading this book was quite a revelation for me; I suddenly realized that economics was complex, fascinating, and described the world in all sorts of interesting ways. For years, I read and followed Krugman’s writing, in Slate and then (of course) the New York Times, gleaning what I could about economics. (I also happen to subscribe to many of his political views, but that’s secondary.) Whenever I could find an interesting and well-written book about economics, I would get it, because I found the subject so compelling.
Several years ago, a friend asked if I had read “The Undercover Economist,” by Tim Harford. I hadn’t, but decided that perhaps it was worth a read, and found it to be delightful, but in a different way from Krugman. Harford isn’t an economics researcher, but he knows just how to put economics research into words and a perspective that everyone can understand. His examples are often drawn from pop culture, and he’s able to distill the academic debates and intrigue to their essence. The fact that he’s very funny only adds to his appeal. I’ve since become quite a fan of Harford’s, listening (among other things) to the “More or Less” podcast from the BBC that he hosts, a sort of Mythbusters of statistics (Mathbusters?).
So it should come as no surprise that I ordered his latest book, “The Undercover Economist Strikes Back,” almost as soon as it came out earlier this year. I just read it cover to cover over the weekend, and came away delighted. As someone who has been reading Krugman’s work for years, and who also listens to NPR’s excellent Planet Money podcast, I can’t say that there was a huge amount of new information in this book. But it was written so well, and put things into such nice context, that this doesn’t matter.
Harford has a gift for making economics not only understandable, but also interesting and relevant to our own lives. In “The Undercover Economist,” he covered microeconomics, which describes how businesses and individuals respond to incentives. In this new book, he takes on macroeconomics, which is a different kettle of fish altogether — it’s about how governments and economies work. If you think of macroeconomics as a complex system, then it’s no surprise that the aggregate behaves differently from its individual, constituent agents. (This, it took me many years to learn, is a much better explanation than what economics instructors tell their students, which is simply that “macro is different from micro.”)
The book talks about all sorts of great stuff, starting with recessions, moving onto unemployment, and covering a large number of topics that are in the newspaper each day, that affect each and every one of us, and which probably seem strange or detached from our reality, but which are actually quite important — particularly if you’re in a democracy, and need to separate real economics from crazy talk.
Harford includes a great definition and discussion of what money is, and brings up the famous story of the island of Yap, which used huge, largely immovable stones as money. He also introduces the different schools of thought on the subject, and where (and how) they differ — and how much of what politicians in the US and Europe have been saying and doing over the last five years has been foolish or misplaced.
The question-and-answer format in which he wrote the book is a little tedious, but much less than I expected it to be. Really? Yes, really.
In my mind, perhaps the topic that was most obviously missing from the book was a discussion of currency, and how that can affect an economy. If you live in the US, or even in England or Europe, you can largely ignore currency issues. Sure, there are exchange rates, and yes, they affect you to some degree, but it’s not a huge deal.
In Israel, by contrast, the exchange rate is a huge deal, because Israel imports and exports so much. The dollar’s rise and fall affects everyone, from high-tech software companies to people shopping at the supermarket. The ways in which the Bank of Israel played with exchange rates and buying dollars, just to keep things relatively stable (while claiming that they were doing no such thing) are impressive, and point to the sorts of challenges that small, trade-oriented economies have but that large ones don’t. I’m not sure if this was an omission due to time or space constraints, or if as someone living in England, Harford hasn’t had to think or worry much about currency issues.
I’ve changed my tune 100 percent since I was an undergrad; I now find economics to be totally fascinating, and very much enjoy reading the sorts of books that Harford has put out. If you’ve always wondered what macroeconomics is, or what the newspapers are talking about when they mention recessions, or whether the politicians suggesting budget cuts during the latest recession were saying the most obviously brilliant thing or the most foolish thing imaginable, Harford’s book is a fun, interesting read, and is highly recommended.
Friends of mine, who are not software developers, have a small, retail Internet business. The original developers created the application in Python, and my friends are looking for a full-stack Web/Python developer to help them. Frustrated with their inability to find someone who can commit to their project, my friends have decided to hire offshore developers, which is another way of saying, “cheap programmers in Eastern Europe or India.”
Earlier this week, these friends e-mailed me the resumes of three Ukrainian programmers, asking me which seemed most appropriate, and what questions they should be asking.
But here’s the thing: Technical skill isn’t the primary consideration when hiring a developer. This is doubly true when hiring an offshore developer, because the problems that I’ve seen with offshore programmers aren’t technical, but managerial. As I told my friends, you would much rather have a so-so Python programmer who is reliable and communicative than a genius programmer who is unreliable or uncommunicative. The sad fact is that many of the offshore outsourcing companies have talented programmers but poor management and leadership; their failures are failures of communication, transparency, and scheduling, rather than of technology.
Sure, a developer might know the latest object-oriented techniques, or know how to create a RESTful JSON API in his or her sleep. But the programmer’s job isn’t to do those things. Rather, the programmer’s job is to do whatever the business needs to grow and improve. If that requires fancy-shmancy programming techniques and algorithms, then great. But most of the time, it just requires someone willing to pay attention to the project’s needs and schedule, writing simple and reliable code that’s necessary for the business to succeed.
The questions that you should be asking an offshore developer aren’t that different from the ones that you should be asking a developer in your own country, who speaks your language, and lives in your time zone. Specifically, you should be asking about their communication patterns and processes. Of course, you don’t want a dunce working on your programming project — but good communication and processes will smoke out such a person very quickly.
If there are no plans or expectations for communication, then you’re basically hoping that the developer knows what you want, that he or she will do it immediately, and that things won’t change — a situation that is pretty much impossible.
Good processes and a good developer will lead to a successful project. Good processes and a bad developer will make it clear that the developer needs to go, and soon. Bad processes and a developer of any sort will make it hard to measure performance, leading to frustration on everyone’s part — and probably missed deadlines, overspent budgets, and more.
So I told my friends that they should get back to these Ukrainian programmers, and ask them the following questions:
The answers to these questions are far, far more important than the technical skills of the person you’re hiring. Moreover, these are things that we can test empirically: If the developer doesn’t do one or more of them, we’ll know right away, and can find out what is going wrong.
If the developer is good, then he or she will encourage you to set up a task tracker, and will want to meet every day (or at least every other day) to review where things stand. You’ll hear that automated testing is part of the development process, and that of course it’s possible to download, install, and run the application on any compatible computer.
If the developer hedges on these things, or asks you simply to trust him or her, then that’s a bad sign. Truth be told, the developer might be fantastic, brilliant, and do everything you want. But do you want to take that risk?
If the developer has regular communication with you, tests their code, and allows you to download and run the application on your own, then you’re in a position to either praise them and keep the relationship going — or discover that things aren’t good, and shut it down right away.
Which brings me to my final point: With these sorts of communication practices in place, you’ll very quickly discover if the developers are doing what they promised. If so, then that’s great for everyone. But if not, then you’ll know this within a week or less — and then you can get rid of them.
There are plenty of talented software developers in the world, but there are many fewer who both understand your business and make its success a priority. A developer who values your business will want to demonstrate value and progress on a very regular basis. Someone who cannot demonstrate value and progress probably isn’t deserving of your attention or money, regardless of where they live or what language they speak. But if you can find someone excellent, who values you and your business, and who wants to help you succeed? Then by all means, hire them — and it doesn’t matter whether they’re in Ukraine, or anywhere else.
What questions do you ask offshore developers before hiring them?
It’s always fun to start a new project. I should know; I’ve been a consultant since 1995, and have started hundreds of projects of various shapes and sizes. It’s tempting, when I first meet a new client and come to an agreement, to dive right into the code, and start trying to solve their problems.
But that would be a mistake.
More important than code, more important than servers, more important than even finding out what problems I’m supposed to be solving, is the issue of communication. How will the client communicate their questions and problems to me? How will I tell them what I am doing? Even more importantly, how will I tell them where I’m having problems, or need help?
Before you begin to code, you need to set up two things: First, a time and frequency of meeting. Will it be every day at 8 a.m.? Every Monday at 2 p.m.? Tuesdays and Thursdays at 12 noon? It doesn’t matter that much, although I have found that daily morning meetings are a good way to start the day. (When you work on an international team, though, someone’s “morning” meeting is someone else’s evening meeting.) These meetings, whether you want to call them standups, weekly reviews, or something else, are to make sure that everyone is on the same page. Are there problems? Issues? Bugs? New feature requests? Is someone stuck, and needs help? All of that can be discussed in the meeting. And by setting a regular time for the meeting, you raise the chances that when something goes wrong (and it will), there will be a convenient time and place to discuss the problems.
I’m actually of the opinion that it’s often good to have both a daily meeting (for daily updates) and a weekly one (for review and planning). Whatever works for you, stick with it. But you want it to be on everyone’s schedule.
The second thing that you should do is set up a task tracker. Whether it’s Redmine, Trello, GitHub issues, or even Pivotal Tracker, every software project should have such a task tracker. They come in all shapes, sizes, and price points, including free. A task tracker allows you to know, at a glance, what tasks are finished, which are being worked on right now, and which are next in line. A task tracker lets you prioritize tasks for the coming days. And it allows you to keep track of who is doing what.
Once you have set up the tracker and meeting times, you can meet to discuss initial priorities, putting these tasks (or “stories,” as the cool agile kids like to say) in the tracker. Now, when a developer isn’t sure what to work on next, he or she can go to the task tracker and simply pick the top things off of the list.
This isn’t actually all that hard to do. But it makes a world of difference when working on a project.
Several months ago, I was teaching an introductory Python course, and I happened to mention the fact that I use Git for all of my version-control needs. I think that I would have gotten a more positive response if I had told them that my hobby is kicking puppies.
The reactions were roughly — and I’m not exaggerating here — something like, “What? You use Git?!? That so-called version control system whose main feature is eating our files?!?” And I got this not just from one person, but from all 20-something people who were taking my Python course. The more experience they had with Git, the more violently negative their reactions were.
I managed to calm them down a bit, and tried to tell them that Git is a wonderful system, except for one little problem, namely the fact that its interface is very hard to understand. But, I promised them, once you understand how Git works, and once you start to work with it within the context of understanding what it’s doing, things start to make sense, and you can really enjoy and appreciate the system.
I should note that since that Python class, I’ve returned to the same company to give two day-long Git classes. Based on the feedback I received, the Git class was very helpful, and I’m guessing that this is because I concentrated on what Git is really doing, and how the commands map to those actions. I’m pretty sure that people from that class are starting to appreciate the power and flexibility of Git, rather than focusing only on their frustrations with it.
However, my experience working with and teaching Git has taught me a great deal about designing both software and UIs. We love to say and think that excellent products with terrible marketing never get anywhere. And in the commercial world, that might well be true. Everyone loves to quote the movie “Field of Dreams” (which I never really liked anyway), in which the main character builds a baseball field after repeatedly hearing, “If you build it, they will come.” As numerous other people have said, this is not the case for businesses: If you build it, they probably won’t come, unless you’ve invested time and money in marketing your product.
However, in the open-source world, we expect to invest time in learning a technology, and are generally more technical folks in any event. Thus, we tend to be more forgiving of bad UIs, focusing on features rather than design. It’s thus possible for something brilliant, efficient, flexible, and profoundly frustrating for new users to become popular. Git is a perfect example of this.
Now, I happen to think that Git is one of the most brilliant pieces of software I’ve ever seen. Really, it’s impressively designed. However, the commands are counter-intuitive for many people who have used other version-control systems, and it’s possible to get yourself into a situation from which an expert can extract himself or herself, but in which a novice is completely befuddled. Once you understand how Git works (brilliantly described in this video), things start to make sense. But getting to that point can take a great deal of time, and not everyone has that time.
In open source, then, “If you build it, they will come” might sometimes work. However, even if they do come, and even if they use the software that you have written, you might end up in a particularly unenviable situation: People will use the software, but will hate you for the way in which you designed it.
The upshot, then, is that it’s worth taking a bit of time to think about your users, and how they will use your system. It’s worth taking the time to create an interface (including commands) that will make sense for people. Look at WordPress, for example: It packs in a great deal of functionality, but also pays attention to the UI… and as a result, has become a hugely dominant part of the Web ecosystem.
Sure, Git is famous and popular, and I’m one of its biggest fans, at least in terms of functionality. But if Linus had spent just a bit more time thinking about command names, or behaviors, I think that we would have had an equally powerful tool, but with fewer people in need of courses to understand why their files are getting trampled.
If there’s anything that software people know, it’s that changing one part of a program can result in a change in a seemingly unrelated part of the program. That’s why automated testing is so powerful; it can show you when you have made a mistake that you not only didn’t intend, but that you didn’t expect.
If unexpected results can happen in a system that you control and supposedly understand, it’s not hard to imagine what happens when the results of your changes involve many pieces of software other than yours, running on computers other than yours, being used by customers who aren’t yours.
This would appear to be the situation with one of the latest anti-spam and security features for e-mail, known as DMARC.
I’m not intimately familiar with this standard, but I’ve seen enough other e-mail-related standards in the past to know that anything having to do with e-mail will be frustrating for some of the people involved. E-mail is in use by so many people, on so many computers, and by so many different programs, that you can’t possibly make changes without someone getting upset. Nevertheless, the DMARC implementation and rollout by a number of large e-mail providers over the last few weeks has been causing trouble.
Let me explain: DMARC promises, to some degree, to reduce the amount of spam that we get by verifying that the sender’s e-mail address (in the “From” field) matches the server from which the e-mail was sent. So if you get e-mail from me, with a “From” address of “email@example.com”, DMARC will verify that the e-mail was really sent from the lerner.co.il server. To anyone who has received spam, or fake messages, or illegal “phishing” messages, this sounds like a great thing: No longer will you get messages from your friend with a hotmail.com address, asking for money now that they’re stranded in London. It really, admirably aims to reduce the number of such messages.
How? Very simply, by checking that the “From” address in the message matches the server from which the message was sent. If your DMARC-compliant server receives e-mail from “firstname.lastname@example.org”, but the message was actually sent from some anonymous IP address in Mongolia, your server will refuse to receive it.
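In grossly simplified form, the check amounts to comparing two domains. The sketch below is a toy illustration, not the real algorithm: actual DMARC relies on SPF and DKIM authentication results and on policies that domain owners publish in DNS, and the addresses here are just placeholders.

```python
def from_domain(from_header):
    """Extract the domain of the address in a From: header."""
    address = from_header.split('<')[-1].rstrip('>')
    return address.split('@')[-1].lower()

def dmarc_aligned(from_header, sending_domain):
    """A toy version of DMARC's alignment test: does the From:
    domain match the domain that actually sent the message?
    (Real DMARC checks SPF/DKIM results and a DNS policy record.)"""
    return from_domain(from_header) == sending_domain.lower()

# A message claiming to be from example.com, sent by example.com: fine.
print(dmarc_aligned('Reuven <email@example.com>', 'example.com'))        # True
# The same From: address, but relayed by a mailing-list server: rejected.
print(dmarc_aligned('Reuven <email@example.com>', 'lists.example.org'))  # False
```

The second call is exactly the mailing-list scenario described below: a perfectly legitimate message fails the check because the relaying server’s domain doesn’t match the “From” address.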
So far, so good. But of course, for every rule, there are exceptions. Consider, for example, e-mail lists: When someone posts to a list, the “From” address is preserved, so that the message appears to be coming from the sender. But in fact, the message isn’t coming from the sender. Rather, it’s coming from the e-mail program running on a server.
For example, if I (email@example.com) send e-mail to a mailing list (firstname.lastname@example.org), the e-mail that subscribers receive will really be coming from the list’s server. But it’ll have a “From” address of email@example.com. So now, if a receiver is using DMARC, they’ll see the discrepancy, and refuse to receive the e-mail message.
If lerner.co.il is using DMARC in the strictest way possible, then firstname.lastname@example.org sending to email@example.com will have especially unpleasant consequences: lerner.co.il will refuse to receive its own subscriber’s message to the list, because DMARC will show it to be a fake. These refusals will count as a “bounce” on the mailing list, meaning a message that failed to get to the recipient’s inbox. Enough such bounces, and everyone at lerner.co.il will be unsubscribed.
Yes, this means that if your e-mail provider uses DMARC, and if you subscribe to an e-mail list, then posting to such a list may result (eventually) in every other user of your provider being unsubscribed from the list!
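The mechanics behind those mass unsubscriptions are easy to sketch. List managers such as Mailman keep a per-subscriber bounce score and drop subscribers who cross a threshold; the class name and threshold below are hypothetical, but the logic is similar in spirit:

```python
class Subscriber:
    """A hypothetical mailing-list subscriber with bounce tracking."""

    def __init__(self, email, threshold=5):
        self.email = email
        self.bounce_score = 0
        self.threshold = threshold
        self.subscribed = True

    def record_bounce(self):
        """Each refused delivery counts against the subscriber --
        even when, as with DMARC rejections, the subscriber's own
        inbox is working perfectly well."""
        self.bounce_score += 1
        if self.bounce_score >= self.threshold:
            self.subscribed = False

sub = Subscriber('someone@hotmail.example')
for _ in range(5):       # five list posts, all rejected by DMARC checks
    sub.record_bounce()
print(sub.subscribed)    # False -- unsubscribed through no fault of their own
```

The point of the sketch is that the bounce processor has no way to distinguish a dead mailbox from a DMARC rejection; both look like failed deliveries.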
I’ve witnessed this myself over the last few weeks, as members of a large e-mail list I maintain for residents of my city have slowly but surely been unsubscribed. Simply put, any time that a Hotmail, Yahoo, or AOL user posts to the list for Modi’in residents, all of these companies (and perhaps more) refuse the message. Each refusal increases the number of bounces attributed to the would-be recipients, and eventually results in mass auto-unsubscriptions.
As if that weren’t bad enough (and yes, it’s pretty bad), people who have been passively reading the e-mail list for years (i.e., not participating) are now getting cryptic messages from the list-management software, saying that they have been unsubscribed because of excessive bounces. Most people have no idea what this means, which in turn leads to the list managers (such as me) having to explain intricate e-mail policy issues.
There are some solutions to this problem, of course. But they’re all bad, so far as I can tell, and came without any serious warning or notification. And when it comes to e-mail, you really don’t want to start rejecting messages en masse without warning. The potential solutions are:
And by the way, it’s not just little guys like me who are suffering. The IETF, which writes the standards that make the Internet work, recently discovered that their e-mail lists are failing, too.
E-mail lists are incredibly useful tools, used by many millions (and perhaps billions) of people around the world. You really don’t want to mess with how they work unless there’s a very good reason to do so. Yes, spam and fraud are big problems, and I welcome attempts to fight them.
But really, would it have been so hard to contact all of the list-management software makers (how many can there be?) and work out some sort of deal? Or at least get the message out to those of us running lists that this is going to happen? I have personally spent many hours now researching this problem, and trying to find a solution for my list subscribers, with little or no success.
This all brings me back to my original point: The intentions here were good, and DMARC sounds like a good idea overall. But it is affecting, in a very negative way, a very large number of people who are now suddenly, and to their surprise, cut off from their friends, colleagues, workplaces, and organizations. The fact that AOL and other e-mail providers are saying, “Well, you’ll just need to reconfigure your list software,” without considering whether we want to do this, or whether e-mail lists really need to change after more than two decades (!) of working in a certain way, is rather surprising to me. I’m not sure if there’s any way back, but I certainly hope that this is the last time such a drastic, negative solution is foisted on the public in this way.
Several weeks ago, my wife and I saw a wonderful play at our local theater in Modi’in (“Mother Courage and Her Children“). At the end, the actors came out to receive their richly deserved applause. Three times, the actors came out, took their bows, and were warmly applauded by the audience. We loved their performance — but just as importantly, they loved performing, and they loved to see and hear the reactions from the audience, both during and after the play.
I’m sure that some or all of these actors have worked in television and the movies; Israel is a small country, and it’s hard for me to believe that actors can decide only to work in a single medium. But I’ve often heard that actors prefer to work on stage, because they can have a connection with the audience. When they say something funny, sad, or upsetting, they can feel (and even hear) the audience’s reaction.
But while we often hear about TV and movie stars making many millions of dollars off of their work, it’s less common for stage actors to make that kind of money. That’s because when you act on stage, you’re by definition limiting your audience to the number of people who can fit in a theater. Even the largest theaters aren’t going to hold more than a few hundred seats; by contrast, even a semi-successful TV show or movie will get tens or hundreds of thousands of viewers on a given night. (And yes, TV and film have many more expenses than plays do — but the fact remains that you can scale up the number of TV and film viewers much more easily than you can a play. Plus, movies and TV can both be shown in reruns.)
Another difference is the effort that you need to put into a stage production, as opposed to a TV program or a movie: In the former case, you need to perform each and every night. In the latter, you record your performance once — and yes, it’ll probably require multiple takes — and then it can be shown any number of times in the future. You can even be acting on stage while your TV show is broadcast. Or more than one of your movies can be shown simultaneously, in thousands of cities around the world.
What does this have to do with me? And why have I been thinking about this so much over the last few weeks, since seeing that play?
While I’m a software developer and consultant, I also spend a not-insignificant amount of time teaching people: In any given week, I will give 2-4 full days of classes in Python, Ruby, Ruby on Rails, PostgreSQL, and Git, with other classes likely to come in the next few months.
I’m starting to dip my toes into the waters of teaching online, and hope to do it increasingly frequently over the coming months and years. But unlike most online programming courses currently being offered, I intend to make most or all of my courses real-time, live, and in person.
This has some obvious disadvantages: It means that people will need to be available during the precise hours that I’m teaching. It means that the course will have to be higher in price than a pre-recorded video course, because I cannot amortize my time investment over many different purchases and viewings. And it means that the course is limited in size; I cannot imagine teaching more than 10 people online, just as I won’t teach an in-person class with more than 20 people.
Given all of these disadvantages, why would I prefer to do things this way, live and in person?
The answer, in a word, is: Interactions.
I’m finishing my PhD in Learning Sciences, and if there’s anything that I have gained from my studies and research, it’s that personal interactions are the key to deep learning. That’s why my research is all about online collaboration; I deeply believe that it’s easiest and best to learn when you speak with, ask questions of, challenge, and collaborate with others, ideally when you’re trying to solve a problem.
I’m not saying that it’s impossible to learn on your own; I certainly spend enough hours each week watching screencasts and lectures, and reading blog posts, to demonstrate that it’s possible, pleasurable, and beneficial to learn in these ways. But if you want to understand a subject deeply, then you should communicate somehow with other people.
That’s one of the reasons why pair programming is so helpful, improving both the resulting software and the programmers who engage in the pairing. That’s why open source is so successful — because in a high-quality open-source project, you’ll have people constantly interacting, discussing, arguing, and finally agreeing on the best way to do things. And that’s why I constantly encourage participants in my classes to work together when they’re working on the exercises that I ask them to solve: Talking to someone else will help you to learn better, more quickly, and more deeply.
I thus believe that attending an in-person class offers many advantages over seeing a recorded screencast or lecture, not because the content is necessarily better, but because you have the opportunity to ask questions, to interact with the teacher, to clarify points that weren’t obvious the first time around, and to ask how you might be able to integrate the lectures into your existing work environment.
So for the students, an in-person class is a huge win. What do I get out of it? Why do I prefer to teach in person?
To answer that, I return to the topic with which I started this post, namely actors who prefer to work on stage, rather than on TV and in movies. When I give a course, it’s almost like I’m putting on a one-man show. Just as actors can give the same performance night after night without getting bored, I can give the same “introduction to Python” course dozens of times a year without tiring of it. (And yes, I do constantly update my course materials — but even so, the class has stayed largely the same for some time.) I’m putting on a show, albeit an interactive and educational one, and while I put on the same show time after time, I don’t get tired of it.
And the reason that I don’t get tired of it? Those same interactions, which are so beneficial to the students’ learning and progress, are good for me, as the instructor. They keep me on my toes, allow me to know what is working (and what isn’t), provide me with an opportunity to dive more deeply into a subject that is of particular interest to the participants, and assure me that the topics I’m covering are useful and important for the people taking my class.
I live and work in Israel, and one of the things that I love about teaching Israelis is that I’m almost guaranteed to be challenged and questioned at nearly every turn. Israelis are, by nature, antagonistic toward authority. As a result, my lectures are constantly interrupted by questions, challenges, and requests for proof.
I have grown so accustomed to this style of teaching that it once backfired on me: Years ago, I gave a one-day course in the US that ended at lunchtime — it turned out that the Americans were very polite and quiet, and didn’t ask any questions, allowing me to get through an entire day’s worth of material in half the time. I have since learned to make cultural adjustments to the number of slides I prepare for a given day, depending on where I will be teaching!
When I look at stage actors giving the same performance that they have given an untold number of times before, I now understand where they’re coming from. For them, each night is a chance to expose a new audience to the ideas that they’re trying to get across through their characters and dialogue. And yes, they could do that in a movie — but then they would miss the interactions with the audience, which provide a sense of excitement that’s hard to match.
Does this mean that I won’t ever record screencasts or lectures? No, I’m sure that I will do that at some point, and I already have some ideas for doing so. But they’ll be fundamentally different from the courses that I teach, complementing the full-length courses, rather than replacing them. At the end of the day, I get a great deal of satisfaction from lecturing and teaching, both because I see that people are learning (and thus gaining a useful skill), and because my interactions with them are so precious to me, as an instructor.
One of the most celebrated phrases to emerge from Ruby on Rails is “convention over configuration.” The basic idea is that, traditionally, software can be used in many different ways, and we customize it using configuration files. Over the years, the configuration files for many types of software have grown huge; installing software might be easy, but configuring it can be difficult. Moreover, given the option, everyone will configure software differently. This means that when you join a new project, you need to learn that project’s specific configuration and quirks.
“Convention over configuration” is the idea that we can make everyone’s lives easier if we agree to restrict our freedom. Ruby on Rails does this by telling you precisely what your directories will be named, and where they will be located. Rails tells you what to call your database tables, your class names, and even your filenames. The Ruby language, while generally quite open and flexible, also enforces certain conventions: Class and module names must begin with capital letters, for example.
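Ruby’s enforcement of this naming convention is easy to demonstrate in a minimal sketch: a class whose name begins with a lowercase letter is rejected by the parser itself.

```ruby
# Ruby enforces the capitalization convention at the language level:
# class and module names must be constants, which begin with a capital letter.
class Invoice; end      # accepted
puts Invoice.name       # prints Invoice

begin
  eval("class invoice; end")   # lowercase name is rejected by the parser
rescue SyntaxError => e
  puts e.message               # the parser complains that the name must be a constant
end
```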
It can take some time for developers to accept these conventions. Indeed, I was one of them: When I first started to work with Rails, I was somewhat offended to be told precisely what my database column names would be, especially when those names contradicted advice that I had heard and adopted years earlier. (The advice was to prefix every column in a database table with the name of the table, which would make it more easily readable in joins. Thus the primary key of the “People” table would be person_id, followed by person_first_name, person_last_name, and so forth.) Over time, I have grown not only to use these Rails conventions, but to enjoy working with them; it turns out that people can change pretty easily, at least when it comes to these arbitrary decisions.
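As a sketch of what those conventions look like in practice, here is how a migration for a hypothetical “people” table might look under Rails conventions (the table and column names here are illustrative, not from any real project, and the migration base-class version will vary with your Rails release):

```ruby
# A sketch of a Rails migration for a hypothetical "people" table.
# Convention dictates the names: the table is the plural "people",
# the primary key is simply "id", and columns carry no table-name prefix.
class CreatePeople < ActiveRecord::Migration[7.0]
  def change
    create_table :people do |t|   # the "id" primary key is created implicitly
      t.string :first_name        # not "person_first_name"
      t.string :last_name         # not "person_last_name"
      t.timestamps                # conventional created_at / updated_at columns
    end
  end
end
```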
The real benefit of such conventions has nothing to do with my own work. Rather, it reduces the need for communication among people working on the same project. If everyone does it the same way, then there are fewer things to negotiate, and we can all concentrate on the real problems, rather than the ones which are relatively arbitrary.
Back in college, I was the editor of the student newspaper. Like many newspapers, we used the AP Stylebook to determine our style. The AP Stylebook was our bible; whatever it said, we did. Of course, we also had our own local style, to cover things that AP didn’t, such as building names and numbers (e.g., we could refer to “Building 54”). In some cases, I personally disagreed with the AP Stylebook, especially when it came to the “Oxford comma,” which AP style omits and which I prefer in my own writing. But by keeping AP’s rule, we were able to download articles from the Washington Post and the LA Times, and stick them into our newspaper with minimal editing. By adhering to a standard, we ensured consistency in our writing, and reduced the workload of the (already hard-working) newspaper staff.
Twice in the last few weeks, I’ve been reminded of the benefits of convention over configuration — both times, when developers on projects I inherited decided to flout the rules. Their decisions weren’t wrong, but they were so wildly different from the conventions of Rails that they caused trouble, delays, and bugs.
On the first project, the trouble involved the JavaScript manifest: by convention, every Rails application has a file named application.js, which the asset pipeline uses to bundle the application’s JavaScript. So you can imagine my surprise when I looked for the application.js file, and didn’t find it. That was bad enough, but the asset pipeline, as well as the deployment scripts I was developing, got rather confused by the absence of application.js. When I asked the original developer about this, he told me that he preferred to call it something else entirely, reflecting the name of the application and the client. Why? He had no technical reason; it was purely a matter of aesthetics. But the rest of the Rails ecosystem expects application.js, so his decision meant that the rest of the software needed to be configured in a special, different way.
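For reference, a conventional application.js is little more than a manifest: the “//=” lines are Sprockets directives, not ordinary comments, telling the asset pipeline what to bundle. This is a typical minimal sketch, not the project’s actual file:

```javascript
// app/assets/javascripts/application.js
// The manifest that the Rails asset pipeline expects to find, by convention.
//= require_tree .
```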
As a way of justifying his decision, the other developer told me, “Conventions shouldn’t be a boundary when developing.” No, just the opposite — conventions are there to limit you, to tell you to work in the way that everyone else works, so that things will go more smoothly. In much of the world, we drive on the right side of the road. This is utterly arbitrary; as numerous countries (e.g., England) have proven, you can drive on the other side of the road just fine — but only so long as everyone is doing it. The moment everyone decides on their own conventions, big problems occur.
When Biblical Hebrew wants to describe anarchy, it uses the phrase, “People did whatever was right in their own eyes.”
Something similar occurred with another project whose code I inherited: One of my favorite things about Ruby on Rails is that it runs the application in an “environment.” The three standard environments are development (optimized for developer speed, not execution speed), production (optimized for execution speed), and test (meant for testing). The environments aren’t meant to change the application logic, but rather the way in which the application behaves. For example, I recently changed the way in which e-mail is sent to users of my dissertation software, the Modeling Commons. When I send e-mail in the “production” environment, the e-mail is actually sent — but when I do so in the “development” environment, the e-mail is opened in a browser, so that I can examine it. This is standard and expected behavior; all Rails applications have development, production, and test environments — and some even have a “staging” environment, in which we check the application under production-like conditions before deploying.
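As a sketch of how such per-environment mail behavior is typically configured (the :letter_opener delivery method assumes the third-party letter_opener gem, which is one common way to preview mail in a browser — not necessarily what the Modeling Commons uses):

```ruby
# config/environments/production.rb -- actually send the mail:
config.action_mailer.delivery_method = :smtp

# config/environments/development.rb -- open each message in the browser
# instead of sending it (assumes the letter_opener gem is installed):
config.action_mailer.delivery_method = :letter_opener
```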
My client’s software, which I inherited from someone else, did something a bit different: The code was meant to be used on several different sites, each with slightly different logic. The developer decided to use Rails environments to distinguish among these sites. Thus, if you run the application under the “xyz” environment, you get one logical path, and if you run it under the “abc” environment, you get another.
It’s hard to describe the number of surprises and problems that this seemingly small decision has created: It means that we can’t really test the application using the normal Rails tools, because nothing works correctly in the “test” environment. It means that the Phusion Passenger server that we installed to run the application needs an additional, special configuration parameter (not normally needed in production) to find the right database and run the right logic. And it means that when you’re trying to trace through the logic of the application, you constantly need to check which environment you’re in.
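A hypothetical sketch, with invented site names and numbers, of why this hurts: logic that branches on the environment name fails in any environment it didn’t anticipate (such as “test”), while the same difference expressed as explicit configuration works everywhere.

```ruby
# Anti-pattern: application logic keyed off the environment name.
# (Site names and discount figures are invented for illustration.)
def discount_rate(env)
  case env
  when "xyz" then 0.10
  when "abc" then 0.25
  else raise "No logic defined for environment #{env.inspect}"
  end
end
# discount_rate("test") raises -- the standard environments no longer work.

# The conventional alternative: keep the per-site difference in explicit
# configuration, and let the environment mean what Rails expects it to mean.
SITE_SETTINGS = {
  "xyz" => { discount: 0.10 },
  "abc" => { discount: 0.25 },
}.freeze

def discount_rate_for(site)
  SITE_SETTINGS.fetch(site)[:discount]
end
# discount_rate_for works identically in development, test, and production.
```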
Basically, all of the things that you can assume about most Rails applications aren’t true in this one.
Now, my point in writing this isn’t to say that I’m brilliant and other developers are stupid — although it is true that Reuven’s First Law of Consulting states that a new consultant on a project must call his predecessor a moron. Rather, it’s that conventions exist for a reason, and that if you insist on ignoring them, you’ll increase the learning curve for every other developer who works on your application. If you have oodles of time and money, that’s just fine — but as a general rule, developers’ time is a software company’s greatest expense, and anything you can do to increase productivity, and to decrease the need for explanations and communication, is worthwhile.
By the way, this is the whole reason why one of the Python mantras is, “There’s only one way to do it” — a direct contrast with the Ruby and Perl mantra, “There’s more than one way to do it.” Having a single, common way to do things makes everyone’s code more similar, more readable, and easier to understand. It doesn’t stop you from doing brilliant and interesting things, but it does ask that you demonstrate your brilliance within the context of established practice.
Of course, this doesn’t mean that conventions are written in stone, or that they are unchangeable. But if and when you ignore them, it should be for good reason. Even if you’re right, think about whether you’re so right that it’s worth having multiple people learn your way of doing things, instead of the way that they’re used to doing them.
What do you think? Have you seen these sorts of issues in your work? Let me know!
Hello out there!
I’ve been privileged to work with many great people and companies since 1995, when I first started working as a consultant. I’ve helped companies to create Web applications from an idea, to learn programming languages, to improve their business processes, and to optimize their databases.
The time has come to describe some of what I’ve learned. Sure, I’ve given plenty of conference talks, and my Linux Journal column has been published every month since 1996. But there are all sorts of things that are too short, or too esoteric, for those forums, and this blog is where I can share some thoughts on the intersection between technology and society.
Given that I’m also finishing a PhD in Learning Sciences at Northwestern University, you can expect to see some comments about technology and education here, as well. You’re also welcome to check out the Modeling Commons, the collaborative platform for NetLogo modeling that I have created as part of my doctoral studies.
If you have any ideas, comments, or suggestions, I’m happy to hear them; always feel free to contact me at firstname.lastname@example.org. I read every message, and am happy to hear from clients and colleagues.