August 24, 2016, by reuven
Python

I’m in Shanghai, and before I left to teach this morning, I decided to check the weather.  I knew that it would be hot, but I wanted to double-check that it wasn’t going to rain — a rarity during Israeli summers, but not too unusual in Shanghai.

I entered “shanghai weather” into DuckDuckGo, and got the following:

Never mind that it gave me a weather report for the wrong Chinese city. Take a look at the humidity reading!  What’s going on there?  Am I supposed to worry that it’s ever-so-slightly more humid than 55%?

The answer, of course, is that many programming languages have problems with floating-point numbers.  Just as there’s no terminating decimal representation of 1/3, lots of numbers (0.1, for example) have no terminating representation in binary, which is what computers use.

As a result, floats are inexact.  Just add 0.1 and 0.2 in many programming languages, and prepare to be astonished.  Wait, you don’t want to fire up a lot of languages? Here, someone has done it for you: http://0.30000000000000004.com/ (I really love this site.)
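Here’s what that looks like in Python’s interactive shell, if you want to see it for yourself:

>>> 0.1 + 0.2
0.30000000000000004
>>> 0.1 + 0.2 == 0.3
False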

If you’re working with numbers that are particularly sensitive, then you shouldn’t be using floats. Rather, you should use integers, or something like Python’s decimal.Decimal, which gives you exact decimal arithmetic at the expense of time and space. For example:

>>> from decimal import Decimal
>>> x = Decimal('0.1')
>>> y = Decimal('0.2')
>>> x + y
Decimal('0.3')
>>> float(x+y)
0.3

Of course, you should be careful not to create your decimals with floats:

>>> x = Decimal(0.1)
>>> y = Decimal(0.2)
>>> x + y
Decimal('0.3000000000000000166533453694')

Why is this the case? Let’s take a look:

>>> x
Decimal('0.1000000000000000055511151231257827021181583404541015625')

>>> y
Decimal('0.200000000000000011102230246251565404236316680908203125')

So, if you’re dealing with sensitive numbers, be sure not to use floats! And if you’re going outside in Shanghai today, it might be ever-so-slightly less humid than your weather forecast reports.

August 16, 2016, by reuven
Training

Over the last few years, my work has moved away from day-to-day software development, and more in the direction of technical training: Helping companies (and individuals) by teaching people how to solve problems in new ways.  Nowadays, I spend most of my time teaching courses in Python (at a variety of levels), regular expressions, data science, Git, and PostgreSQL.

And I have to say: I love it. I love helping people to do things they couldn’t do before.  I love meeting smart and interesting people who want to do their jobs better.  I love helping companies to become more efficient, and to solve problems they couldn’t solve before.  And I love the travel; next week, I leave for my 16th trip to China, and I’ll likely teach 5-6 classes in Europe before the year is over.

The thing is, I’m not alone: There are other people out there who do training, and who have the same feeling of excitement and satisfaction.

At the same time, trainers are somewhat lonely: To whom do we turn to improve our skills? Not our technical skills, but our skills as trainers? And our business skills as consultants who are looking to improve our knowledge of the training market?

Over the last year, I’ve started to help more and more people who are interested in becoming trainers. I’ve started a coaching practice. I’ve given Webinars and talks at conferences. I’ve started to work on a book on the subject.

But as of last week, I’ve also started a new, free community for technical trainers on Facebook. If you engage in training, or have always wanted to do so, then I invite you to join us at http://facebook.com/groups/techtraining .

I should note that this group is not for people running training businesses. Rather, it’s for the trainers themselves — the people who spend several days each month in a classroom, trying to get their ideas across in the best possible ways.

In this group, we’ll share ideas about (among other things):

  • How to find clients
  • How to prepare courses
  • What a good syllabus and/or proposals look like
  • How to decide whether a course is worth doing
  • How to price courses
  • Working on your own vs. via training companies
  • How to upsell new courses to your clients
  • How education research can help us to teach better

If you are a trainer, or want to be one, then I urge you to join our new community at http://facebook.com/groups/techtraining .  We’ve already had some great exchanges of ideas that will help us all to learn, grow, and improve. Join us, and contribute your voice to our discussion!

August 12, 2016, by reuven
Python

As many people know, one of the mantras of the Python programming language is, “There should be one-- and preferably only one --obvious way to do it.”  (Run “import this” in your Python interactive shell to see the full list.)  However, there are often times when you could accomplish something in any of several ways. In such cases, it’s not always obvious which is the best one.

A student of mine recently e-mailed me, asking which is the most efficient way to concatenate strings in Python.

The results surprised me a bit — and gave me an opportunity to show her (and others) how to test such things.  I’m far from a benchmarking expert, but I do think that what I found gives some insights into concatenation.

First of all, let’s remember that Python provides us with several ways to concatenate strings.  We can use the + operator, for example:

>> 'abc' + 'def'
'abcdef'

We can also use the % operator, which can do much more than just concatenation, but which is a legitimate option:

>>> "%s%s" % ('abc', 'def')
'abcdef'

And as I’ve mentioned in previous blog posts, we also have a more modern way to do this, with the str.format method:

>>> '{0}{1}'.format('abc', 'def')
'abcdef'

As with the % operator, str.format is far more powerful than simple concatenation requires. But I figured that this would give me some insights into the relative speeds.

Now, how do we time things? In Jupyter (aka IPython), we can use the %timeit magic command to time our code.  I thus wrote four functions, each of which concatenates in a different way. I purposely used global variables (named “x” and “y”) to contain the original strings, and a local variable “z” in which to put the result.  The result was then returned from the function.  (We’ll play a bit with the values and definitions of “x” and “y” in a little bit.)

def concat1(): 
    z = x + y 
    return z 

def concat2():
    z = "%s%s" % (x, y) 
    return z 

def concat3(): 
    z = "{}{}".format(x, y) 
    return z 

def concat4(): 
    z = "{0}{1}".format(x, y) 
    return z

I should note that concat3 and concat4 are almost identical, in that they both use str.format. The first uses the implicit locations of the parameters, and the second uses the explicit locations.  I decided that if I’m already benchmarking string concatenation, I might as well also find out if there’s any difference in speed when I give the parameters’ indexes explicitly.

I then defined the two global variables:

x = 'abc' 
y = 'def'

Finally, I timed running each of these functions:

%timeit concat1()
%timeit concat2()
%timeit concat3()
%timeit concat4()

The results were as follows:

  • concat1: 153ns/loop
  • concat2: 275ns/loop
  • concat3: 398ns/loop
  • concat4: 393ns/loop

From this benchmark, we can see that concat1, which uses +, is significantly faster than any of the others.  Which is a bit sad, given how much I love to use str.format — but it also means that if I’m doing tons of string processing, I should stick to +, which might have less power, but is far faster.

The thing is, the above benchmark might be a bit problematic, because we’re using short strings.  Very short strings in Python are “interned,” meaning that they are defined once and then kept in a table so that they need not be allocated and created again.  After all, since strings are immutable, why would we create “abc” more than once?  We can just reference the first “abc” that we created.
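You can get a sense of this in CPython (it’s an implementation detail, so don’t rely on identity checks like these in real code; they’re shown here only to illustrate interning):

>>> x = 'abc'
>>> y = 'abc'
>>> x is y
True
>>> x = 'abc' * 10000
>>> y = 'abc' * 10000
>>> x is y
False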

This might mess up our benchmark a bit.  And besides, it’s good to check with something larger. Fortunately, we used global variables — so by changing those global variables’ definitions, we can run our benchmark and be sure that no interning is taking place:

x = 'abc' * 10000 
y = 'def' * 10000

Now, when we benchmark our functions again, here’s what we get:

  • concat1: 2.64µs/loop
  • concat2: 3.09µs/loop
  • concat3: 3.33µs/loop
  • concat4: 3.48µs/loop

Each loop took a lot longer — but we see that our + operator is still the fastest.  The difference isn’t as great, but it’s still pretty obvious and significant.

What if we no longer use global variables, and instead allocate the strings within our functions?  Will that make a difference?  Almost certainly not, but it’s worth a quick investigation:

def concat1():
    x = 'abc' * 10000
    y = 'def' * 10000
    z = x + y
    return z

def concat2():
    x = 'abc' * 10000
    y = 'def' * 10000
    z = "%s%s" % (x, y)
    return z

def concat3():
    x = 'abc' * 10000
    y = 'def' * 10000
    z = "{}{}".format(x, y)
    return z

def concat4():
    x = 'abc' * 10000
    y = 'def' * 10000
    z = "{0}{1}".format(x, y)
    return z

And our final results are:

  • concat1: 4.89µs/loop
  • concat2: 5.78µs/loop
  • concat3: 6.22µs/loop
  • concat4: 6.19µs/loop

Once again, we see that + is the big winner here, though (again) by less of a margin than was the case with the short strings.  str.format is clearly slower.  And we can see that in all of these tests, the difference between “{0}{1}” and “{}{}” in str.format is basically zero.

Upon reflection, this shouldn’t be a surprise. After all, + is a pretty simple operator, whereas % and str.format do much more.  Moreover, str.format is a method, which means that it’ll have greater overhead.
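One rough way to see this difference is with the standard library’s dis module (the exact bytecode varies from one Python version to another, so treat this as an illustration rather than gospel):

import dis

def concat_plus():
    return x + y

def concat_format():
    return '{}{}'.format(x, y)

dis.dis(concat_plus)    # the concatenation itself is a single binary-add instruction
dis.dis(concat_format)  # here we first look up the 'format' attribute, and then call it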

Now, there are a few more tests that I could have run — for example, with more than two strings.  But I do think that this demonstrates to at least some degree that + is the fastest way to achieve concatenation in Python.  Moreover, it shows that we can do simple benchmarking quickly and easily, conducting experiments that help us to understand which is the best way to do something in Python.
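If you’d like to run that kind of experiment yourself, say with three strings instead of two, the standard library’s timeit module does roughly what Jupyter’s %timeit magic does. Here’s one possible sketch; the numbers will of course vary by machine and Python version:

import timeit

setup = "x = 'abc' * 10000; y = 'def' * 10000; z = 'ghi' * 10000"

# Compare three concatenation styles, each joining three long strings
for stmt in ["x + y + z",
             "'%s%s%s' % (x, y, z)",
             "'{}{}{}'.format(x, y, z)"]:
    seconds = timeit.timeit(stmt, setup=setup, number=100000)
    print('{:<30} {:.2f} microseconds/loop'.format(stmt, seconds / 100000 * 1e6))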

May 11, 2016, by reuven
China

I run a side project that has nothing to do with computers or programming: Every Monday, I publish “Mandarin Weekly,” a curated collection of links and resources for people learning Chinese.  I’ve been studying Chinese for nearly two years now, and it is one of the most interesting and fun (and challenging!) things I’ve ever done.

Mandarin Weekly is running a giveaway for six months of free Yoyo Chinese, a great online school that teaches Chinese vocabulary, grammar, pronunciation, listening comprehension, and even reading characters.  On Sunday, we’ll be giving away two premium six-month memberships to Yoyo Chinese, each worth $100.

If you’ve always wanted to learn Chinese, then this is a great way to do it.  And if you are studying Chinese, then Yoyo is a great way to supplement or improve on your formal classroom studies.  Indeed, I think that anyone learning Chinese can benefit from this course.  To enter the giveaway, just sign up here:

http://mandarinweekly.com/giveaways/win-a-6-month-premium-membership-to-yoyo-chinese-a-99-value/

But wait, it gets better: If you enter the giveaway, you get one chance to win.  For every friend you refer to the giveaway, you get an additional three chances.  So if you enter, and then get five friends to enter, you will have 16 chances to win!

And now, back to your regularly scheduled technical blog…

April 4, 2016, by reuven
Python, Ruby, Training

The last Webinar I did, with Q&A about regular expressions, was great fun — so much so that I’ve decided to do another one.

So, if you have questions (big or little) about regular expressions in Python, Ruby, JavaScript, and/or PostgreSQL, sign up for this free Webinar on Monday, April 11th: https://www.crowdcast.io/e/regexpqa2

If you already have questions, you can leave them in advance using the Crowdcast Q&A system.  (Or just surprise me during the Webinar itself.)

I look forward to seeing you there!

March 14, 2016, by reuven
Python, Training

To celebrate the publication of my new ebook, Practice Makes Regexp, my upcoming Webinar (on March 22nd) is all about regular expressions (“regexps”) in Python, Ruby, JavaScript, and PostgreSQL, as well as the Unix “grep” command.

Unlike previous Webinars, in which I gave a presentation and then took Q&A, this time will be all about Q&A: I want you to come with your questions about regular expressions, or even projects that you’re wondering how to attack using them.

I’ll do my best to answer your questions, whether they be about regexp syntax, differences between implementations and languages, how to debug hairy regexps, and even when they might not be the most appropriate tool for the job.

Please join me on March 22nd by signing up here:

http://ccst.io/e/regexpqa

And when you sign up, please don’t forget to ask a question or two!  (You can do that in advance — and doing so will really help me to prepare detailed answers.)

I look forward to your questions on the 22nd!

Reuven

March 4, 2016, by reuven
Education, PostgreSQL, Python, Ruby, Training

Announcing: My new book, “Practice Makes Regexp,” with 50 exercises meant to help you learn and master regular expressions. With explanations and code in Python, Ruby, JavaScript, and PostgreSQL.

I spend most of my time nowadays going to high-tech companies and training programmers in new languages and techniques. Actually, many of the things I teach them aren’t really new; rather, they’re new to the participants in my training. Python has been around for 25 years, but for my students, it’s new, and even a bit exciting.

I tell participants that my job is to add tools to their programming toolbox, so that if they encounter a new problem, they’ll have new and more appropriate or elegant ways to attack and solve it. Moreover, I tell them, once you are intimately familiar with a tool or technique, you’ll suddenly discover opportunities to use it.

Earlier this week, I was speaking with one of my consulting clients, who was worried that some potentially sensitive information had been stored in their Web application’s logfiles — and they weren’t sure if they had a good way to search through the logs.

I suggested the first solution that came to mind: Regular expressions.

Regular expressions are a lifesaver for anyone who works with text.  We can use them to search for patterns in files, in network data, and in databases. We can use them to search and replace.  To handle protocols that have changed ever so slightly from version to version. To handle human input, which is always messier than what we get from other computers.

Regular expressions are one of the most critical tools I have in my programming toolbox.  I use them at least a few times each day, and sometimes even dozens of times in a given day.

So, why don’t all developers know and use regular expressions? Quite simply, because the learning curve is so steep. Regexps, as they’re also known, are terse and cryptic. Changing one character can have a profound impact on what text a regexp matches, as well as its performance. Knowing which character to insert where, and how to build up your regexps, is a skill that takes time to learn and hone.
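To give a tiny taste of what I mean (the example here happens to be in Python, but the same point applies in any regexp implementation), watch what adding a single backslash does to a pattern:

>>> import re
>>> re.findall(r'a.c', 'abc a.c axc')
['abc', 'a.c', 'axc']
>>> re.findall(r'a\.c', 'abc a.c axc')
['a.c']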

Many developers say, “If I have a problem that involves regular expressions, I’ll just go to Stack Overflow, where my problem has likely been addressed already.” And in many cases, they’re right.

But by that logic, I shouldn’t learn any French before I go to France, because I can always use a phrasebook.  Sure, I could work that way — but it’s far less efficient, and I’ll miss many opportunities that would come my way if I knew French.

Moreover, relying on Stack Overflow means that you never get a full picture of what you can really do with regular expressions. You get specific answers, but you don’t have a fully formed mental model of what they are and how they work.

But wait, it gets worse: If you’re under the gun, trying to get something done for your manager or a big client, you can’t spend time searching through Stack Overflow. You need to bring your best game to the table, demonstrating fluency in regular expressions.  Without that fluency, you’ll take longer to solve the problem — and possibly, not manage to solve it at all.

Believe me, I understand — my first attempt at learning regular expressions was a complete failure. I read about them in the Emacs manual, and thought to myself, “What could this seemingly random collection of characters really do for me?”  I ignored them for a few more years, until I started to program in Perl — a language that more or less expected you to use regexps.

So I spent some time learning regexp syntax.  The more I used them,  the more opportunities I found to use them.  And the more I found that they made my life easier, better, and more convenient.  I was able to solve problems that others couldn’t — or even if they could, they took much longer than I did.  Suddenly, processing text was a breeze.

I was so excited by what I had learned that when I started to teach advanced programming courses, I added regexps to the syllabus.  I figured that I could figure out a way to make regexps understandable in an hour or two.

But boy, was I wrong: If there’s something that’s hard for programmers to learn, it’s regular expressions.  I’ve thus created a two-day course for people who want to learn regular expressions.  I not only introduce the syntax, but I have them practice, practice, and practice some more.  I give them situations and tasks, and their job is to come up with a regexp that will solve the problem I’ve given them.  We discuss different solutions, and the way that different languages might go about solving the problem.

After lots of practice, my students not only know regexp syntax — they know when to use it, and how to use it.  They’re more efficient and valuable employees. They become the person to whom people can turn with tricky text-processing problems.  And when the boss is pressuring them for a solution, they’re able to deliver it quickly.

And so, I’m delighted to announce the launch of my second ebook, “Practice Makes Regexp.”  This book contains 50 tasks for you to accomplish using regular expressions.  Once you have solved each problem, I present the solution, walking you through the general approach we would take with regexps, and then going into greater depth (and code) to solve the problem in Python, Ruby, JavaScript, and PostgreSQL.  My assumption in the book is that you have already learned regexps elsewhere, but that you’re not quite sure when to use them, how to apply them, and when each metacharacter is most appropriate.

After you go through all 50 exercises, I’m sure that you’ll be a master of regular expressions.  It’ll be tough going, but the point is to sweat a bit working on the exercises, so that you can worry a lot less when you’re at work. I call this “controlled frustration” — better to get frustrated working on exercises, than when the boss is demanding that you get something done right away.

Right now, the book is more than 150 pages long, with four complete chapters (including 17 exercises).  Within two weeks, the remaining 33 exercises will be done.  And then I’ll start work on 50 screencasts, one for each of the exercises, in which I walk you through solutions in each of Python, Ruby, JavaScript, and PostgreSQL.  If my previous ebook is any guide, there will be about 5 hours (!) of screencasts when I’m all done.

If you have always shied away from learning regular expressions, or want to harness their power, Practice Makes Regexp is what you have been looking for.  It’s not a tutorial, but it will help you to understand and internalize regexps, helping you to master a technology that frustrates many people.

To celebrate this launch, I’m offering a discount of 10%.  Just use the “regexplaunch” offer code, and take 10% off of any of the packages — the book, the developer package (which includes the solutions in separate program files, as well as the 300+ slides from the two-day regexp course I give at Fortune 100 companies), or the consultant package (which includes the screencasts, as well as what’s in the developer package).

I’m very excited by this book.  I think that it’ll really help a lot of people to understand and use regular expressions.  And I hope that you’ll find it makes you a more valuable programmer, with an especially useful tool in your toolbox.

March 1, 2016, by reuven
PostgreSQL

After months of writing, editing, and procrastinating, my new ebook, “Practice Makes Regexp,” is almost ready.  The book (similar to my earlier ebook, “Practice Makes Python”) contains 50 exercises to improve your fluency with regular expressions (“regexps”), with solutions in Python, Ruby, JavaScript, and PostgreSQL.

When I tell people this, they often say, “PostgreSQL?  Really?!?”  Many are surprised to hear that PostgreSQL supports regexps at all.  Others, once they take a look, are surprised by how powerful the engine is.  And even more are surprised by the variety of ways in which they can use regexps from within PostgreSQL.

I’m thus presenting an excerpt from the book, providing an overview of  PostgreSQL’s regexp operators and functions. I’ve used these many times over the years, and it’s quite possible that you’ll also find them to be of assistance when writing queries.

PostgreSQL

PostgreSQL isn’t a language per se, but rather a relational database system. That said, PostgreSQL includes a powerful regexp engine.  It can be used to test which rows match certain criteria, but it can also be used to retrieve selected text from columns inside of a table.  Regexps in PostgreSQL are a hidden gem, one which many people don’t even know exists, but which can be extremely useful.

Defining regexps

Regexps in PostgreSQL are defined using strings.  Thus, you will create a string (using single quotes; double quotes in PostgreSQL are reserved for identifiers, such as column names), and then match it against another string. If there is a match, PostgreSQL returns “true.”

PostgreSQL’s regexp syntax is similar to that of Python and Ruby, in that you use backslashes to neutralize metacharacters. Thus, + is a metacharacter in PostgreSQL, whereas \+ is a plain “plus” character. However, there are some differences in the syntax; for example, PostgreSQL’s word-boundary metacharacter is \y, whereas in Python and Ruby it is \b.  (This was likely done to avoid conflicts with the ASCII backspace character.)
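Just to illustrate the word-boundary idea (the example here is in Python, since that’s where many readers will have seen \b), the following matches “cat” only when it stands alone as a word; in PostgreSQL, you would write \y instead of \b:

>>> import re
>>> re.findall(r'\bcat\b', 'cat catalog bobcat cat.')
['cat', 'cat']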

Where things are truly different in PostgreSQL’s implementation is the set of operators and functions used to work with regexps. PostgreSQL’s operators are generally aimed at finding whether a particular regexp matches text, in order to include or exclude result rows from an SQL query.  By contrast, the regexp functions are meant to retrieve some or all of a string from a column’s text value.

True/false operators

PostgreSQL comes with four regexp operators. In each case, the text string to be matched should be on the left, and the regexp should be on the right.  All of these operators return true or false:

  • ~  case-sensitive match
  • ~*  case-insensitive match
  • !~  case-sensitive non-match
  • !~* case-insensitive non-match

Thus, you can say:

select 'abc' ~ 'a.c';   -- returns "true"
select 'abc' ~ 'A.C';   -- returns "false"
select 'abc' ~* 'A.C';  -- returns "true"

In addition to the standard character classes, we can also use POSIX-style character classes:

select 'abc' ~* '^[[:xdigit:]]$';    -- returns "false"
select 'abc' ~* '^[[:xdigit:]]+$';   -- returns "true"
select 'abcq' ~* '^[[:xdigit:]]+$';  -- returns "false"

These operators, as mentioned above, are often used to include or exclude rows in a query’s WHERE clause:

CREATE TABLE Stuff (id SERIAL, thing TEXT);
INSERT INTO Stuff (thing) VALUES ('ABC'), ('abc'), ('AbC'), ('Abq'), ('ABCq');
SELECT id, thing FROM Stuff WHERE thing ~* '^[abc]{3}$';

This final query should return three rows, those in which thing is equal to ABC, abc, and AbC.

Extracting text

If you’re interested in the text that was actually matched, then you’ll need to use one of the built-in regexp functions that PostgreSQL provides. For example, the regexp_matches function allows us not only to determine whether a regexp matches some text, but also to get the text that was matched.  For each match, regexp_matches returns an array of text (even if that array contains a single element).  For example:

CREATE TABLE Stuff (id SERIAL, thing TEXT);
INSERT INTO Stuff (thing) VALUES ('ABC'), ('abc'), ('AbC'), ('Abq'), ('ABCq');
SELECT regexp_matches(thing, '^[abc]{3}$') FROM Stuff;

The above will return a single row:

{abc}

As you can see, the above returned only a single column (from the function) and a single row, for the one value ('abc') that matched our case-sensitive regexp.  However, when you invoke regexp_matches, you can provide additional flags that modify the way in which it operates. These flags are similar to those used in Python, Ruby, and JavaScript.

For example, we can use the i flag to make regexp_matches case-insensitive:

CREATE TABLE Stuff (id SERIAL, thing TEXT);
INSERT INTO Stuff (thing) VALUES ('ABC'), ('abc'), ('AbC'), ('Abq'), ('ABCq');
SELECT regexp_matches(thing, '^[abc]{3}$', 'i') FROM Stuff;

Now we’ll get three rows back, since we have made the match case-insensitive.  regexp_matches can take several other flags as well, including g (for a global search). For example:

CREATE TABLE Stuff (id SERIAL, thing TEXT);
INSERT INTO Stuff (thing) VALUES ('ABC');
SELECT regexp_matches(thing, '.', 'g') FROM Stuff;

Here is the output from regexp_matches:

{A} 
{B} 
{C}

Notice how regexp_matches, because of the g option, returned three rows, with each row containing a single (one-character) array. This indicates that there were three matches.

Why is each returned row an array, rather than a string? Because if we use groups to capture parts of the text, the array will contain the groups:

CREATE TABLE Stuff (id SERIAL, thing TEXT);
INSERT INTO Stuff (thing) VALUES ('ABC'), ('AqC');
SELECT regexp_matches(thing, '^(A)(..)$', 'ig') FROM Stuff;

Notice that in the above example, I combined the i and g flags, passing them in a single string.  The result is a set of arrays:

| regexp_matches |
|----------------|
| {A,BC}         |
| {A,qC}         |

Splitting

A common function in many high-level languages is split, which takes a string and returns an array of items. PostgreSQL offers this with its split_part function, but that function splits only on a fixed delimiter string.

However, PostgreSQL also offers two other functions: regexp_split_to_array and regexp_split_to_table. These allow us to split a text string using a regexp, rather than a fixed string.  For example, if we say:

select regexp_split_to_array('abc def   ghi   jkl', '\s+');

The above will treat any run of whitespace, whatever its length, as a delimiter, and will use it to split the string.  But you can use any regexp you want to split things, getting an array back.

A similar function is regexp_split_to_table, which returns not a single row containing an array, but rather one row for each element. Repeating the above example:

select regexp_split_to_table('abc def   ghi   jkl', '\s+');

The above would return a table of four rows, with each split text string in its own row.

Substituting text

The regexp_replace function allows us to create a new text string based on an old one.  For example:

SELECT regexp_replace('The quick brown fox jumped over the lazy dog',
                      '[aeiou]', '_');

The above returns:

Th_ quick brown fox jumped over the lazy dog

Why was only the first vowel replaced? Because we didn’t invoke regexp_replace with the g option, making it global:

SELECT regexp_replace('The quick brown fox jumped over the lazy dog',
                      '[aeiou]', '_', 'g');

Now all occurrences are replaced:

Th_ q__ck br_wn f_x j_mp_d _v_r th_ l_zy d_g

February 11, 2016, by reuven
Python

I’m delighted to announce that I’ve completed a screencast for every single one of the 50 exercises in my ebook, “Practice Makes Python.”  This is more than 300 minutes (5 hours!) of Python instruction, helping you to become a more expert Python programmer.

Each screencast consists of me solving one of the exercises in real time, describing what I’m doing and why I’m doing it.  They range in length from 4 to 10 minutes.  The idea is that you’ll do the exercise, and then watch my video to compare your answer (and approach) with mine.

If you enjoy my Webinars or in-person courses, then I think you’ll also enjoy these videos.

The screencasts, available with the two higher-tier “Practice Makes Python” packages,  can be streamed in HD video quality, or can be downloaded (DRM-free) to your computer for more convenient viewing.

To celebrate finally finishing these videos, I’m offering the two higher-end packages at 20% off for the coming week, until February 18th. Just use the offer code “videodone” with either the “consultant” or “developer” package, and enjoy a huge amount of Python video.

You can explore these packages at the “Practice Makes Python” Web site.

Not interested in my book, but still want to improve your Python skills?  You can always take one of my two free e-mail courses, on Python variable scoping and working with files. Those are and will remain free forever. And of course, there’s my free Webinar on Python and data science next week.

February 3, 2016, by reuven
Python

It’s time for another free hour-long Webinar! This time, I’ll be talking about the increasingly popular tools for data science in Python, namely Pandas and Matplotlib. How can you read data into Pandas, manipulate it, and then plot it? I’ll show you a large number of examples and use cases, and we’ll also have lots of time for Q&A. Previous Webinars have been lots of fun, and I expect that this one will be, too!

Register (for free) to participate here:

https://www.eventbrite.com/e/analzying-and-viewing-data-with-pandas-and-matplotlib-tickets-21198157259

If you aren’t sure whether you’ll be able to make it, you can still sign up; I’ll be sending information, and a URL with the recording afterwards, soon after the Webinar concludes.

I look forward to seeing you there; if you have any questions, please feel free to contact me at reuven@lerner.co.il or on Twitter as @reuvenmlerner.